
Sentiment Help
SocialTrase analyzes each post from your subject's social media for both risks and sentiment. These factors are then combined to arrive at a social media score. Our sentiment algorithm is a rule-based tool specifically attuned to sentiment expressed on social media. It incorporates a lexicon (a list of features, e.g., words) labeled according to their semantic orientation.
Each post is given a sentiment value from -1 to 1. You can see a post's sentiment (positive, neutral, or negative) from the icon shown on the post within the subject's POSTS view.
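As a rough illustration of how lexicon-based scoring and the icon categories fit together, here is a minimal sketch in Python. The lexicon entries, the helper names, and the cutoff values are illustrative assumptions, not SocialTrase's actual lexicon or thresholds.

```python
# Minimal sketch of rule-based, lexicon-driven sentiment scoring.
# Lexicon values and cutoffs are illustrative assumptions only.

LEXICON = {
    "love": 0.8, "great": 0.6, "happy": 0.7,     # positive features
    "hate": -0.8, "awful": -0.7, "angry": -0.5,  # negative features
}

def score_post(text: str) -> float:
    """Average the semantic orientation of known features, clamped to [-1, 1]."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not hits:
        return 0.0  # no known features -> neutral
    return max(-1.0, min(1.0, sum(hits) / len(hits)))

def sentiment_icon(score: float) -> str:
    """Map the [-1, 1] score to the icon category shown on a post."""
    if score > 0.05:
        return "positive"
    if score < -0.05:
        return "negative"
    return "neutral"

print(sentiment_icon(score_post("I love this great city")))  # positive
```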
The report also tracks sentiment over time by averaging post sentiment over a dynamic range, giving you a better view of how sentiment trends. A red sentiment line means that your subject's posts are, on average, negative, whereas a green line indicates that they are positive overall. Below this chart on the dashboard is the sentiment makeup, which shows the total number of posts as well as how many were positive, neutral, or negative.
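To picture how an over-time line like this can be computed, the sketch below takes a rolling average of per-post scores. The fixed window size is an assumption on our part; the report itself averages over a dynamic range.

```python
# Sketch of a rolling average over per-post sentiment scores.
# The fixed window size is an assumption; the report's range is dynamic.

def rolling_sentiment(scores: list[float], window: int = 7) -> list[float]:
    """Average each post's score with up to window - 1 preceding posts."""
    averaged = []
    for i in range(len(scores)):
        chunk = scores[max(0, i - window + 1): i + 1]
        averaged.append(sum(chunk) / len(chunk))
    return averaged

# Averages below zero would render as a red line, above zero as green.
print(rolling_sentiment([0.4, -0.6, -0.8, 0.2, 0.5], window=3))
```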
Sentiment analysis complements the machine learning algorithm that looks at specific risk factors for each post. It can give you a broad understanding of how individuals express themselves on social media. Note that sentiment has a lower weighting than risk classifications in the overall score.
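One way to picture that weighting is a simple weighted combination, sketched below. The weights, the risk-score scale, and the function name are purely hypothetical; this is not SocialTrase's actual formula.

```python
# Hypothetical weighted combination: risk classifications dominate and
# sentiment contributes at a lower weight. All values are assumptions.

RISK_WEIGHT = 0.8       # hypothetical weight for risk classifications
SENTIMENT_WEIGHT = 0.2  # hypothetical (lower) weight for sentiment

def overall_score(risk_score: float, avg_sentiment: float) -> float:
    """Combine a risk score in [0, 1] with an average sentiment in [-1, 1]."""
    # Map sentiment to a 0..1 penalty, where more negative posts score higher.
    sentiment_penalty = (1 - avg_sentiment) / 2
    return RISK_WEIGHT * risk_score + SENTIMENT_WEIGHT * sentiment_penalty

# Example: moderate risk plus somewhat negative sentiment.
print(overall_score(risk_score=0.3, avg_sentiment=-0.5))  # 0.39
```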
Behaviors Identified
Image Analysis
We perform image content analysis to identify specific risks such as explicit/racy, drug, alcohol, and violent images. In addition, we extract text from images, such as memes, and analyze it the same way all post text is analyzed. If an image contains someone holding a sign, we will also attempt to extract the text from the sign. Text within images is analyzed across the 13 text-based risk classifications as well as keyword matches.
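At a very high level, the per-image flow described above can be sketched as follows. Every helper here is a hypothetical stub standing in for the real detection, OCR, and classification stages; none of this is SocialTrase's actual API.

```python
# High-level sketch of the per-image analysis flow. All helpers are
# hypothetical stubs, not a real API.

def detect_image_risks(image: bytes) -> list[str]:
    return []  # stub: would return e.g. ["explicit/racy", "drug related images"]

def extract_text(image: bytes) -> str:
    return ""  # stub: OCR of memes, signs, and other text in the image

def classify_text(text: str) -> list[str]:
    return []  # stub: the 13 text-based risk classifications

def analyze_image(image: bytes, keywords: list[str]) -> list[str]:
    """Combine image-level risks, extracted-text risks, and keyword matches."""
    flags = detect_image_risks(image)
    text = extract_text(image)
    if text:
        flags += classify_text(text)
        flags += [kw for kw in keywords if kw.lower() in text.lower()]
    return flags
```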
Keywords can also be used to identify specific content in images, extending the default image-based risk classifications to virtually unlimited use cases. For example, suppose you were investigating a workers' compensation fraud case in which the subject claims they were incapacitated by an accident. You may want to know whether an image they posted online shows them running, jogging, or working out. In that case, you could enter keywords such as running and/or gym, and we would flag any post with images containing this content. We can identify general objects, locations, activities, animal species, products, etc.
We support over 10,000 custom labels for image content analysis. Keep in mind labels are supported in English only.
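Conceptually, matching your keywords against the labels detected in an image works like a simple intersection, as in this sketch. The detected labels and keywords are made-up examples based on the workers' compensation scenario above.

```python
# Sketch: match analyst-entered keywords against labels detected in an image.
# The detected labels below are a made-up example of labeler output.

detected_labels = {"running", "park", "sneakers"}  # hypothetical detections
case_keywords = {"running", "gym"}                 # analyst-entered keywords

matches = detected_labels & case_keywords
if matches:
    print(f"Flag post: image content matched keywords {sorted(matches)}")
```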
Insightful
Our machine learning algorithms analyze your subject's posts and images and can identify behaviors across 14 different categories. The following behavioral attributes are identified as part of the social media screening process. When confidence levels reach a minimum threshold, we flag the post based on the highest-confidence category. We also flag posts based on image content, including matches to the custom keywords you enter.
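The thresholding described above can be pictured like this; the minimum threshold value and the category scores are assumptions for illustration.

```python
from typing import Optional

# Sketch of threshold-based flagging: keep the highest-confidence category
# if it clears a minimum threshold. The threshold value is an assumption.

MIN_CONFIDENCE = 0.5  # hypothetical minimum threshold

def flag_post(category_scores: dict[str, float]) -> Optional[str]:
    """Return the highest-confidence category if it clears the threshold."""
    category, confidence = max(category_scores.items(), key=lambda kv: kv[1])
    return category if confidence >= MIN_CONFIDENCE else None

print(flag_post({"toxic language": 0.82, "insults and bullying": 0.41}))
# -> toxic language
```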
We give analysts the tools they need to quickly assess the candidate and their posts, including intuitive navigation of the risk factors and sentiment. Correcting false flags, redacting images, tagging posts, and searching are just some of the features built into the platform.
Risk Classifications
The following are the possible risk classifications of a post.
HATE SPEECH
Derogatory, abusive, and/or threatening statements toward a specific group of people, typically on the basis of race, religion, or sexual orientation.
SELF HARM
Indications of wanting to intentionally hurt oneself or take one's own life. This could also include mentions of suicide or suicidal behavior in others.
SEXUAL IMPROPRIETY
Includes expressions relating to sexual misconduct that could be considered sexually demeaning or sexual harassment.
TERRORISM/EXTREMISM
Statements expressing radical viewpoints typically related to politics or religion and considered far outside the mainstream attitudes.
THREATS OF VIOLENCE
An intent to inflict harm on another person or take their life.
TOXIC LANGUAGE
A way of communicating that is considered to be rude, disrespectful, blaming, labeling, or using guilt.
NARCOTICS
Statements relating to drug and alcohol use, including slang words, street names, and phrases.
DRUG RELATED IMAGES
Images of pills, syringes, paraphernalia, and alcohol. It may include smoking, drinking, and injections.
VIOLENT IMAGES
Images of disfigurations, open wounds, burns, crime scenes and guns/weapons.
EXPLICIT/RACY IMAGES
Images of explicit nudity, adult content, and pornographic content.
KEYWORDS
Posts flagged based on matches (in text and images) to the custom keywords provided. Keywords can be designated negative, positive, or neutral.
POLITICAL SPEECH
Statements relating to politics or governmental affairs. This could include politicians, policies, or the political process. These often focus on specific issues such as abortion, the environment, immigration, etc.
INSULTS AND BULLYING
Name-calling or derogatory statements toward an individual about their physical characteristics such as weight, height, looks, intelligence, etc.