In one of the first instances of total algorithmic transparency from a social platform, Twitter has removed its image cropping algorithm and created new controls for users over how images appear, based on newly published findings of gender and skin color biases.
Members of Twitter's Machine Learning Ethics, Transparency and Accountability (META) team conducted a research experiment to review how the image-cropping algorithm chose points of focus in pictures (called "saliency") and found occasional instances of bias in cropping based on the gender and skin color of people in photos. The researchers found an 8% difference from demographic parity in favor of women, a 4% difference in favor of white individuals, and a 7% difference in favor of white women in comparisons of Black and white women, META lead Rumman Chowdhury wrote in a blog post Wednesday afternoon. The research found other biases as well. (The company's analysis used subjective applications of "Black," "white," "male," and "female," not the total spectrum of skin tone and racial and ethnic identities.)
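Demographic parity here is a simple benchmark: in repeated head-to-head comparisons, an unbiased model would favor each group about 50% of the time, so the reported percentages measure deviation from that 50/50 split. As a rough illustration (a hypothetical sketch, not Twitter's code or exact methodology), the calculation might look like this in Python:

```python
# Hypothetical sketch: deviation from demographic parity in pairwise
# saliency comparisons. Assume each trial pairs one image from group A
# with one from group B and records which image the model favored.
from typing import Sequence

def parity_difference(favored_a: Sequence[bool]) -> float:
    """Return the deviation from 50/50 demographic parity.

    favored_a[i] is True when the model favored group A in trial i.
    0.0 means parity; +0.08 would be an 8-point skew toward group A,
    the kind of figure Twitter reported.
    """
    rate_a = sum(favored_a) / len(favored_a)
    return rate_a - 0.5

# Hypothetical data: the model favors group A in 54 of 100 paired trials.
trials = [True] * 54 + [False] * 46
print(f"{parity_difference(trials):+.2%}")  # +4.00%
```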
After analyzing the tradeoffs between the consistency of the algorithm and its potential for harm, Twitter opted away from speed and instead designed a new way to share vertical, uncropped images. It's an unusual, if not unheard of, choice for social networks, which usually obsess over reducing friction. "This update also includes a true preview of the image in the Tweet composer field, so Tweet authors know how their Tweets will look before they publish. This release reduces our dependency on ML for a function that we agree is best performed by people using our products," Chowdhury wrote.
Twitter first introduced the "saliency" cropping algorithm in 2018 to make timeline photos appear more consistent, training the algorithm to crop based on how the human eye sees pictures. The cropping almost immediately drew criticism from people who felt, from their own anecdotal experience, that the crops often focused on people with light skin and on women while cutting out others, a pattern that could perpetuate both demographic biases and the objectifying "male gaze" toward women. While Twitter initially said the algorithm had been tested for bias before release, the company apologized in September 2020 to the users who were sharing their own experiences of bias on the platform and committed to further assessing the algorithm.
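Twitter's model was a neural network trained on eye-tracking data, but the basic mechanics of saliency cropping are straightforward: score every pixel for predicted visual interest, then crop a window around the highest-scoring point. The sketch below is a generic, hypothetical illustration of that idea, not Twitter's implementation:

```python
import numpy as np

def saliency_crop(image: np.ndarray, saliency: np.ndarray,
                  crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a fixed-size window centered on the most salient point.

    image:    H x W x C pixel array
    saliency: H x W map of predicted visual importance
    (Assumes crop_h <= H and crop_w <= W.)
    """
    # Find the coordinates of the highest-saliency pixel.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = saliency.shape
    # Clamp the window so it stays inside the image bounds.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Because the crop follows whatever the saliency model scores highest, any group-level skew in those scores translates directly into who stays in, or gets cut out of, the frame.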
Over the last year, Twitter has heavily emphasized user choice in its public statements about conversational health and safety and in its descriptions of how the company is designing future products. The META team applied the same lens to its analysis of the cropping algorithm, asking whether it caused harm by taking away people's ability to make their own decisions. "Not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people. They should be able to decide for themselves what part of an image is the most important and what part is the focal point of their Tweet," Chowdhury wrote.
The company also shared a paper, published today, detailing how the researchers found the biases within the algorithm and how they analyzed the possible harm those biases could cause. It was written by Kyra Yee and Tao Tantipongpipat of the META team and Shubhanshu Mishra of the Content Understanding Research team. "The use of this model poses concerns that Twitter's cropping system favors cropping light-skinned over dark-skinned individuals and favors cropping women's bodies over their heads," they wrote in the paper's conclusion.
Chowdhury, a widely respected leader in the field of algorithmic accountability, was hired in February to lead the new META team after months of widespread criticism of large tech companies and social platforms for failing to analyze possible biases and discrimination in their algorithms. She is one of several prominent leaders in tech ethics and accountability Twitter has hired recently.