It is difficult to express yourself on social networks, knowing that anyone can attack or even harass you. To help users worry less about gratuitous insults, a young developer from Nice created the Bodyguard application a few years ago to filter out aggressive and hateful comments.
An artificial intelligence able to identify hateful content
In proven cases of cyberbullying, or simply when someone is afraid to expose themselves publicly on the internet, the Bodyguard application can be very useful because it removes the toxicity from conversations. To achieve this, its algorithm was trained to recognize the different types of hateful comments: insults, threats, mockery, homophobia, and sexual or moral harassment. The artificial intelligence can even detect words disguised by spelling mistakes, abbreviations, or asterisks.
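Bodyguard's actual detection rules are proprietary; as a rough illustration of how disguised insults might still be caught, here is a minimal sketch in which the word list, substitution table, and function names are all hypothetical:

```python
import re

# Hypothetical lexicon; a real moderation system would be far larger and
# would cover threats, mockery, and harassment, not just insults.
BLOCKLIST = {"idiot", "moron"}

# Common character substitutions used to disguise insults.
SUBSTITUTIONS = {"1": "i", "0": "o", "@": "a", "3": "e"}

def normalize(word: str) -> str:
    """Undo simple obfuscation: character substitutions and repeated letters."""
    for src, dst in SUBSTITUTIONS.items():
        word = word.replace(src, dst)
    # Collapse letter repetitions ("idiooot" -> "idiot").
    return re.sub(r"(.)\1+", r"\1", word.lower())

def contains_insult(comment: str) -> bool:
    """Flag a comment if any normalized word matches the blocklist."""
    return any(normalize(w) in BLOCKLIST for w in comment.split())

print(contains_insult("what an id1ot"))  # True
print(contains_insult("great video!"))   # False
```

A production system would also need fuzzy matching for asterisked words and misspellings, which this sketch does not attempt.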
In practice, the user grants Bodyguard permission to connect to their social network accounts, so that it can manage the comments under their YouTube videos, for example, or block insulting users on Twitter. Several degrees of “tolerance” can be configured, specifying for instance that no racist, homophobic, or sexually harassing message should get through. The algorithm can then let jokes or teasing pass while censoring clearly hateful content.
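The per-category tolerance settings described above could be modeled as simple thresholds. This is only a sketch under assumed names: the categories, severity scale, and defaults here are invented, not Bodyguard's real configuration:

```python
from dataclasses import dataclass, field

@dataclass
class ModerationSettings:
    # Hypothetical per-category tolerance: 0 = block everything in the
    # category, higher values allow progressively milder content through.
    tolerance: dict = field(default_factory=lambda: {
        "racism": 0,
        "homophobia": 0,
        "sexual_harassment": 0,
        "teasing": 2,
    })

def should_hide(category: str, severity: int, settings: ModerationSettings) -> bool:
    """Hide the comment when its severity exceeds the user's tolerance."""
    return severity > settings.tolerance.get(category, 1)

s = ModerationSettings()
print(should_hide("racism", 1, s))   # True: zero tolerance for racism
print(should_hide("teasing", 1, s))  # False: mild teasing is allowed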
Bodyguard respects freedom of expression
The application can hide comments in a timeline and block users, but it cannot delete another user's account, and does not want to, as that would raise freedom-of-expression issues. According to Bodyguard's creator, Charles Cohen, managing the accounts of users who abuse their freedom of expression is entirely the responsibility of the platforms themselves.
The application is currently compatible with Twitch, Facebook, YouTube, and Twitter, but the platforms must be open to developers for Bodyguard to manage comments, which is not the case for Snapchat or TikTok. In the future, a family version of the application could appear, sending notifications to parents when hateful comments are detected in their children's timelines.
Social networks struggle to moderate hateful content themselves
According to Charles Cohen, social networks struggle to manage their own content because of the technology they use. While they rely on machine learning, Bodyguard works in a more handcrafted way. Its filter analyzes potentially aggressive words or groups of words, but also the context around the message, as well as the profiles of, and relationship between, the author and the recipient of the comment. It then decides whether to remove, hide, or block the content, or let it through. Training the algorithm manually took about two years, the time needed to study and “tag” as many configurations as possible.
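The decision step described above, where the same words are judged differently depending on the relationship between author and recipient, can be sketched as follows. The scores, threshold values, and the follow-relationship signal are all assumptions for illustration, not Bodyguard's actual logic:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    HIDE = "hide"
    BLOCK_AUTHOR = "block_author"

def moderate(aggression_score: float, author_follows_recipient: bool) -> Action:
    """Pick an action from an aggression score (0.0-1.0) plus context.

    A hostile-sounding message from a stranger is treated more severely
    than the same words exchanged between people who follow each other.
    """
    if author_follows_recipient:
        aggression_score *= 0.5  # banter between acquaintances is more likely
    if aggression_score >= 0.9:
        return Action.BLOCK_AUTHOR
    if aggression_score >= 0.5:
        return Action.HIDE
    return Action.ALLOW

print(moderate(0.8, author_follows_recipient=True).value)   # "allow"
print(moderate(0.8, author_follows_recipient=False).value)  # "hide"
```

The point of the sketch is that the word-level score alone does not determine the outcome; the context shifts the decision, which is what distinguishes this approach from naive keyword filtering.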
The lack of this kind of detailed manual moderation could cost social networks dearly, as they go through a very difficult phase amid campaigns against harassment and racism. Brands such as Coca-Cola, Disney, and Unilever, to show their disapproval of the digital platforms' inaction against hateful content, boycotted some of them by withdrawing their advertising. This considerably weakened Twitter, with a 19% revenue loss pushing the company to consider a paid service.
In France, regulating online hate speech, particularly against minors, is at the heart of the missions of the observatory launched on July 7, 2020.
Google, Facebook, Twitter, Twitch, Snapchat, and TikTok will participate in the observatory to analyze and quantify online hate speech, improve understanding of its drivers and dynamics, and encourage the sharing of information and feedback between stakeholders.