In the midst of worldwide concern and protest against racism following the shocking death of George Floyd, the debate is now shifting to the question of discrimination by algorithms. The GAFAM, but also Twitter and other platforms, have shown a new "solidarity" with this global movement and a level of interest on an unprecedented scale. But how can algorithms be racist? Who is responsible, and how can this "racism 2.0" be combated? We attempt to provide some answers here.
Algorithms, far from being neutral, reproduce the stereotypes of their creators
An artificial intelligence can now grant a bank loan by selecting the most profitable and least risky profiles, decide who will be urgently admitted to hospital by assessing the severity of a patient's condition, or identify consumers' needs directly from their faces. Personal data (age, sex, sentiment, or ethnic origin) may be taken into account to produce recommendations or automated diagnoses.
Research confirms that databases, and the stereotypes of the humans who design them, can lead to discrimination, and the CNIL recently warned of the need to pay attention to the dangers of algorithmic discrimination.
Companies such as IBM and Amazon: facilitators of racist technology?
Companies like Amazon, Apple, Facebook, Google, Microsoft and Twitter often argue that their technology tools make the world a better place. A very shaky claim if it turns out that some of their technologies convey racial discrimination.
In 2015, Google Photos caused a scandal by labeling a Black person as a gorilla. But this is not the only danger of facial recognition. IBM has offered its facial-recognition technology for mass surveillance and racial profiling. Following the #BlackLivesMatter movement, Arvind Krishna, the CEO of IBM, publicly expressed his reluctance to continue developing this technology for the American police. In a letter to Congress, he said the time had come to question the deployment of facial recognition and to begin a national dialogue on whether and how facial-recognition technology should be used by law enforcement. IBM is also calling for a specific legal framework for the use of these technologies.
Amazon, for its part, announced on Wednesday, June 10, a one-year moratorium on police use of its facial-recognition software, Rekognition, in the context of the anti-racism protests in the United States.
In a sign of solidarity with the anti-racism movement, Facebook announced improvements to its moderation of hateful content. Others, such as Grindr, also intend to remove the ethnicity filter in the next version of their LGBT dating application. "We will not remain silent and we will not be inactive," Grindr said in a message published on its official Twitter account.
The positioning of the GAFAM: good news at last?
In addition to a few strategic decisions in support of the movement, the internet giants and their CEOs have expressed solidarity with the protests against racism on social networks and have taken the opportunity to reiterate their commitment to diversity within their own companies.
But why only now, when they said nothing about past cases of racial violence that had already shocked public opinion? The case of George Floyd is not a first. Before him, in 2014, the injustices against Eric Garner and Michael Brown also went around the world, but the response of the internet leaders was slow to come.
These businesses could not, or would not, speak out in 2014; the fact that they have done so in 2020 is an encouraging sign of progress. However, a statement of solidarity and a few donations should only be a beginning. According to Jay Peters, a journalist at The Verge, Silicon Valley's anti-racism is only a first step, and there is still a long way to go: "there is the recognition of racism, and then there's the continuous work to be anti-racist."
Changing algorithms is easier than changing people
According to Sendhil Mullainathan, professor of economics at Harvard University, algorithmic discrimination can be detected and corrected more easily than human discrimination. Indeed, with proper regulation, algorithms can even help reduce discrimination.
A regulatory agency with highly qualified auditors could examine data sets and certify that they will not cause bias.
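As a minimal sketch of what such an audit might measure, one common heuristic is to compare selection rates across demographic groups and flag large disparities (the "four-fifths rule" used in US employment law). The data, group labels, and 0.8 threshold below are illustrative assumptions, not any agency's actual procedure:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest.
    The four-fifths heuristic flags ratios below 0.8."""
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: (group label, approved?)
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(decisions)
print(rates)                           # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(rates))   # ~0.33 -> would be flagged
```

A real audit would of course use far larger samples, statistical tests, and outcome-based fairness criteria beyond raw selection rates; the point is simply that such disparities are measurable, which is what makes algorithmic discrimination easier to detect than human discrimination.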
Open data to address algorithmic bias
The EU also warned in a recent report that biased data can create biased algorithms. For example, facial-recognition algorithms trained on data sets containing more Western faces than faces of people of diverse ethnic origins could produce biased analyses. To combat algorithmic discrimination, it is essential to use representative data.
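One simple way to check a training set for the kind of skew described above is to compare the share of each group in the data against a reference population. The counts and reference shares below are invented for illustration, not figures from any real dataset or the EU report:

```python
def representation_gap(counts, reference):
    """Compare a dataset's per-group shares to reference population
    shares. Returns {group: (dataset_share, reference_share, gap)}."""
    total = sum(counts.values())
    report = {}
    for group, ref_share in reference.items():
        share = counts.get(group, 0) / total
        report[group] = (round(share, 3), ref_share, round(share - ref_share, 3))
    return report

# Hypothetical face-image counts vs. assumed population shares
counts = {"group_w": 8000, "group_x": 1500, "group_y": 500}
reference = {"group_w": 0.60, "group_x": 0.25, "group_y": 0.15}
for group, (share, ref, gap) in representation_gap(counts, reference).items():
    print(f"{group}: dataset {share:.0%}, reference {ref:.0%}, gap {gap:+.0%}")
```

Here the hypothetical dataset over-represents one group by 20 percentage points while under-representing the others, exactly the sort of imbalance that leads a trained model to perform worse on the under-represented faces.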
For this, open data can play a very important role. Moreover, the EU notes that open data can, on the contrary, help erase any form of gender or racial bias.