SCOTT: Social Sentinel is a valuable tool for UPD

The software is a sensible way to improve student safety.


To improve safety at the University, the University Police Department has started using Social Sentinel, software that scans social media posts for potential threats. It searches public posts for thousands of keywords associated with possible threats to the University, including “kill,” “die” or “shoot.” Any post containing such keywords with reference to the University is flagged and sent to the police for further examination. 

When I first saw headlines about “social media surveillance” at the University, I was worried about potential invasions of privacy. Audrey Fahlberg, a viewpoint columnist for The Cavalier Daily, recently addressed this and other concerns. In addition to arguing the software invades privacy, she contends it is ineffective, can be used to target marginalized groups and is a waste of money. However, after learning more details of this specific software, I believe these arguments largely fall short. 

Fahlberg’s first claim is that the software is ineffective. In her view, the software is inept because it cannot distinguish between true threats and posts that are merely hyperbolic or idiomatic, so it will fail to produce valuable results. While it is true that the software cannot determine the intent of a post, that is not its function. The software is simply a tool that makes law enforcement aware of potentially threatening posts. The police subsequently inspect those posts to see if they warrant any action. What matters in determining efficacy is whether the tool serves its primary purpose, not whether it has limitations. This software is meant to focus the attention of the police on the posts most likely to be threats, and that is what it does. Moreover, it does so without relying on someone personally reporting the threat. Therefore, it is hard to see how it is ineffective at doing its job.

Furthermore, in cases like that of the Parkland shooter, this software gives police an edge not only in detecting threats against the University, but also in addressing them. Fahlberg disagrees, stating, “But even in the case that a social media post is determined to be a legitimate threat, it is unclear how police can and should respond.” However, in the 2003 case Virginia v. Black, the U.S. Supreme Court held that “a State [may] choose to prohibit only those forms of intimidation that are most likely to inspire fear of bodily harm.” Thus, under current legal precedent, it is permissible to prohibit intimidation that is most likely to inspire fear of bodily harm. In fact, just two months ago, a middle school student from Virginia Beach was arrested for making threats on social media against their school. Consequently, it is not accurate to say that law enforcement would be powerless to deal with a legitimate threat against the University. 

Next, after citing the complicated nature of internet privacy and acknowledging the police can lawfully read public posts, Fahlberg argues, “just because police can search this information doesn’t mean they should. Even ordinary citizens can be targeted by these technologies.” She seems to believe that police should not read public posts because doing so could somehow invade the privacy of — or in some other way negatively affect — “ordinary citizens.” It is unclear how this is true, since users willingly post on public accounts when alternatives, like private accounts, private messaging or not posting at all, are easily available. It is even more difficult to see how reading a public Facebook post differs from reading any other publicly available information, such as bulletin boards, signs or even this column. Ultimately, the fact that only public posts will be scanned should assuage any privacy concerns.

Fahlberg then claims this software could be used to profile certain political or religious groups. To support this, she cites the use of similar software by the Boston Police Department, which flagged posts associated with Black Lives Matter and Islamist extremism. However, Gary Margolis, the CEO of Social Sentinel, has said that police are not able to limit searches to particular people or organizations with this software. 

Finally, Fahlberg asserts that the software is a waste of money and that “the University administration is more concerned with fabricating a facade of safety than actually adopting measures that have a proven record of fighting crime.” The only evidence for this claim is University President Teresa Sullivan saying, “I hope [Social Sentinel] will improve both the perceived and the actual safety of students.” This quote, however, does not mean that the administration only took appearances into account, but rather that she hopes the software actually makes students safer. Additionally, Fahlberg fails to provide any example of measures that would be more effective. If there is a measure that would keep students just as safe at a smaller cost, then we should use it, but in the absence of one, Social Sentinel is an effective alternative.

All in all, Social Sentinel is a tool that helps UPD do its job more effectively by increasing the police’s awareness of potential threats. Moreover, it does so without threatening students’ privacy rights. Hence, this software is a sensible way to improve student safety in the modern world.

Gavin Scott is an Opinion columnist for The Cavalier Daily. He can be reached at opinion@cavalierdaily.com.
