Twitter is studying the biases within its algorithms as part of a new effort to understand how its machine learning tools can cause unintended consequences, and the company says it plans to publicly share some of its findings.
The company plans to prioritize what it's calling the pillars of "responsible ML," which include "taking responsibility for our algorithmic decisions, equity and fairness of outcomes, transparency about our decisions and how we arrived at them, and enabling agency and algorithmic choice," according to a Wednesday announcement outlining this approach. Rumman Chowdhury, who leads Twitter's ML Ethics, Transparency and Accountability (META) team, co-wrote the statement with Jutta Williams.
The company will eventually publish some form of public analysis in at least three areas of study: a gender and racial bias analysis of its image-cropping algorithm, a fairness assessment of timeline recommendations across racial subgroups, and an analysis of content recommendations for different political ideologies across seven countries, according to the post. Twitter says it will then use those findings to prioritize "tweaks" to its algorithms to address the most pressing issues, or even larger changes to its product, such as removing an algorithm (like the ML-based image cropper) entirely.
The idea of creating "agency and algorithmic choice" based on research and transparency is not new to Twitter. In its announcements about product developments and user safety, Twitter has over the last year heavily emphasized the idea of "agency," saying that it wants to give users more choices about their experience on the service. "The point is not to make the entire world a safe space: That's not possible. The point is to empower people and communities to have the tools to heal harm themselves and to prevent harm to themselves and put them in control," Christine Su, Twitter's head of product for conversational safety, told Protocol last year.
Twitter, Facebook, YouTube and other social media companies continue to face increasing scrutiny from academics and Congress alike over how their algorithms may reflect or reinforce racial, gender-based, or political biases, as evidence continues to accumulate that widely used algorithms often mirror and then amplify the biases present in the data used to create them. All of the major tech companies have launched initiatives to try to appease the critics. Twitter has for more than a year prioritized what it calls "health" on its service, but today's announcement that the company will share how race and politics shape its algorithms is its most explicit effort to create transparency around this specific issue.
The company first planted a flag in this effort when it hired Chowdhury, a widely respected pioneer in the field of applied algorithmic accountability and ethics, to lead the META team in February. For experts and academics in the artificial intelligence field, Chowdhury's appointment made Twitter's expressed commitment to algorithmic accountability and fairness much more credible. Su, who leads product for conversational safety, was also hired within the last year. Williams joined Twitter in September, according to her LinkedIn profile.
AI ethics became a flashpoint more broadly in the tech industry after Google fired both of the women who had co-founded and led its AI ethics team, prominent researchers Timnit Gebru and Margaret Mitchell. Chowdhury's appointment at Twitter was announced on the same day that Google formally dismissed Mitchell and ended its investigation into Gebru's firing, creating (unintentionally, according to Twitter) a stark contrast between the two companies.