Twitter offers a reward to anyone who detects a bias in its algorithm


Twitter has launched a dedicated bounty program to improve its image cropping algorithm, hoping that the hackers who sign up for the competition will help uncover the potential harms it can cause.

Last year, the platform was at the center of a controversy when it was discovered that its image cropping algorithm repeatedly cropped out the faces of Black or dark-skinned people in favor of white people's faces.

For this reason, the company is offering rewards ranging from $500 to $3,500 to anyone who finds evidence of harmful bias in its algorithms.

In May of this year, Twitter acknowledged that the bias was real and posted the code on GitHub so that developers could help address the problem. Now the company wants to remove as many biases as possible through a contest.

“We want to take this work one step further by inviting and incentivizing the community to help identify the potential harms of this algorithm beyond what we identify ourselves,” wrote Twitter.

The competition, which is separate from the company's bug bounty program for security flaws, will give participants access to both the model and the code the platform uses to crop images in tweets when they exceed certain dimensions.

The company urges participants to demonstrate the potential harms, whether intentional or unintentional, that the cropping algorithm can introduce, with special attention to harms that may affect marginalized communities.

Twitter will reveal the winners during DEF CON on August 9, as the company announced on HackerOne, and will recognize them with cash prizes: first place will win US$3,500, second place US$1,000, third place US$500, and the most innovative entry US$1,000.

Saliency algorithm

The algorithm in question, known as the saliency algorithm, generates a preview image by modeling the way the human eye works, prioritizing what may be most important and estimating what a person might want to see first within an image, as Twitter has explained on previous occasions.

The saliency algorithm seeks to prove its innocence. AFP photo
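To illustrate the general idea, here is a minimal sketch of saliency-based cropping. It is not Twitter's actual model: the per-pixel `saliency` map is assumed to come from some external saliency model, and the cropping rule (center a fixed window on the most salient pixel) is a simplification for illustration only.

```python
# Minimal sketch of saliency-based cropping (not Twitter's actual model).
# Assumes `saliency` is a 2-D NumPy array of per-pixel importance scores
# produced by some saliency model; higher means "look here first".
import numpy as np

def crop_around_most_salient(image: np.ndarray,
                             saliency: np.ndarray,
                             crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a crop_h x crop_w window centered on the most salient pixel.
    Assumes the crop window is smaller than the image."""
    # Locate the pixel the model considers most important.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Clamp the window so it stays inside the image bounds.
    h, w = image.shape[:2]
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)

    return image[top:top + crop_h, left:left + crop_w]
```

Any systematic skew in the saliency scores, for example consistently higher scores on certain faces, would translate directly into which people survive the crop.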

However, an experiment conducted by cryptography and infrastructure engineer Tony Arcieri showed that the algorithm was biased, and that when two images were pitted against each other, its automatic cropping disadvantaged Black people and women.
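The kind of test Arcieri ran can be sketched as follows. The `predict_crop` helper below is hypothetical and stands in for the cropping model; the sketch simply stacks two portraits with a large gap and reports which one falls inside the proposed crop. Repeating this over many pairs, and with the positions swapped, is what reveals a systematic preference.

```python
# Rough sketch of an A/B cropping bias test (hypothetical helper functions).
# `predict_crop(image)` stands in for the cropping model and is assumed to
# return the (top, left, height, width) of the chosen crop window.
import numpy as np

def which_face_survives(face_a: np.ndarray, face_b: np.ndarray,
                        predict_crop, gap: int = 600) -> str:
    """Stack two equally sized portraits with a large white gap and report
    which one the proposed crop is centered on."""
    h, w, c = face_a.shape
    canvas = np.full((2 * h + gap, w, c), 255, dtype=face_a.dtype)  # white background
    canvas[:h] = face_a                # face A at the top
    canvas[h + gap:] = face_b          # face B at the bottom

    top, left, crop_h, crop_w = predict_crop(canvas)
    center = top + crop_h // 2
    return "face_a" if center < h + gap // 2 else "face_b"
```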

In May, the company shared the results of its own study, which concluded that the algorithm falls within what is considered fair in machine learning terms, although it shows a slight bias in favor of white people and women.

New features

On July 14, Twitter announced in a blog post that its 'Fleets' feature would no longer be available as of August 3 due to its low popularity on the social network.

On the same occasion, the company announced that it had started working on new features to add to the social network, as its quarterly results registered an increase in revenue thanks to strong advertising performance and a growing user base.
