AI bias is rampant. Bug bounties could help catch it.

A Q&A with cybersecurity guru Camille François about her new research on bug bounties, and the hope that they could help rein in AI harms.


Developers of harmful AI need to know "how ugly their baby is," Camille François said.

Illustration: clarote

The 1990s might have a lot to teach us about how we should tackle harm from artificial intelligence in the 2020s.

Back then, some companies found they could actually make themselves safer by incentivizing the work of independent “white hat” security researchers who would hunt for issues and disclose them in a process that looked a lot like hacking with guardrails. That’s how the practice of bug bounties became a cornerstone of cybersecurity today.

In a research paper unveiled Thursday, researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji and Joy Buolamwini argue that companies should once again invite their most ardent critics in — this time, by putting bounties on harms that might originate in their artificial intelligence systems.

François, a Fulbright scholar who has advised the French CTO and who played a key role in the U.S. Senate’s probe of Russia’s attempts to influence the 2016 election, published the report through the Algorithmic Justice League, which was founded in 2016 and “combines art and research to illuminate the social implications and harms of artificial intelligence.” The group's founder, Buolamwini, and others affiliated with AJL have long been working to expose racial bias in facial-recognition technologies in particular.

François and Kenway made clear to Protocol that all the efforts in the world to mitigate AI harms are for naught if programs don't nurture a diverse community of bug hunters, including people from beyond computer science, and if companies aren't ready to "digest" the reports they receive, incorporating that feedback so it actually shapes how AI systems are conceived, developed, deployed and maintained.

And yes — regulation is key too, they said.

This interview has been edited for length and clarity.

Talk about bug bounties and cybersecurity research, for anybody who’s not familiar.

François: Bug bounties are a traditional practice in infosec, by which you reward hackers for finding and disclosing bugs to affected organizations. If you look at all those big companies, they usually do have a bug bounty program.

And bug bounties, broadly conceived, have become a crucial part of cybersecurity research, vulnerability testing and information security over the last 25, 30 years, it sounds like?

François: Yes.

So why take that idea and apply it to potential algorithmic harms? Why is this the moment to do that?

François: We wrote about 100 pages on this. [Laughs] This project started because Joy and I were both interested in that idea. I was seeing that, in the space of algorithmic harms, you had these really talented individuals who were going above and beyond to find and document these harms, but A) more often than not, it ended up being unpaid labor, which is a problem, and B) there really [wasn’t] any sort of system of protection around them and there [were] often very adversarial responses from the industry. And it made me think in many ways of the situations that hackers found themselves in before the professionalization of [bug bounties]. And so I was thinking: Maybe we can look to cybersecurity for inspiration on how to better support, how to better protect and how to pay for what is really important labor.

Where have we already seen this? You use Twitter [which has a bug bounty related to potential bias in its image-cropping algorithm] as a case study. You mentioned a Google program that’s been around a few years, and it’s gotten about 1,000 reports documenting evidence of data misuse, like a third party not abiding by its contract terms in using Google data.

François: Examples that we document talk about [traditional] bug bounties that are stretching to what we call socio-technical issues. And indeed, one of the first things that we looked into are those data abuse bounties, which seemed like they were an industry responding to big issues and big scandals like Cambridge Analytica.

We also saw some really interesting examples that were not as obvious. So we talked a little bit about what Rockstar Games is doing, having a bounty that talks about incorrect use of algorithms [that ban cheaters]. I think it’s a good trend, but I think it also warrants more research and more discussions around best practices.

I’m always struck by how much bug bounties exist at the behest of companies that have reason to say: "Look, these are all the things we do to stop algorithmic harms." But of course a company doesn’t want to be embarrassed, doesn’t want proprietary systems being probed and potentially copied.

François: There’s a lot of good research on: What does it take for a company to be meaningfully ready to actually bring this in? And there’s a willingness question, of course — like, is the target even willing to hear about how ugly their baby is? Is your company emotionally ready for that? But then there’s a whole organizational readiness question. One of the metaphors that really resonated with us is: It’s not enough to say, "I will ingest that." What you need is the digestive system to be able to process and meaningfully use this information.

Bug bounties have become a crucial part of cybersecurity research, vulnerability testing and information security. Illustration: clarote

What can the policy world do? The Computer Fraud and Abuse Act and court interpretations basically push against this kind of research. On the other hand, it has become a fact of life, and hackers and companies have learned to live with each other — a lot of times, through contractual terms.

François: If you’re in the business of protecting independent security researchers, a lot of what you do and a lot of the institutions who are helping actually may also be able to help people who do algorithmic harms research. Now, it doesn’t mean that all their legal issues will be covered, but I think there’s a lot to learn. Something that we thought was interesting too is, when you look at the history of [cybersecurity] bug bounties, one of those big moments is the DOD running its own program. We think that’s interesting! Can public institutions also run their own algorithmic harms bounties?

Kenway: There are specific provisions in U.S. law that have been put in place to afford security researchers some degree of protection. For example, the [Digital Millennium Copyright Act] carveout that exists, and I think there are some examples where similar carveouts or similar provisions could meaningfully support legal safety for algorithmic harms researchers. But how governments could incentivize or create institutions to support this kind of engagement, I think is probably a question that will be wide open for the next couple of years.

Get really concrete with me: Who do you envision an algorithmic harms bug bounty researcher or hacker being? Is this a 21-year-old CS major working in her dorm room on a Thursday night before dinner?

François: I think the thing that is most exciting to us is this idea that, if you create a community of practice, you will reach out to people who are very different. So for instance, say we decide that some of the monitoring software that has been deployed in schools throughout the pandemic is a good target to encourage people to find algorithmic harms in. You will likely end up with professors who have been very bothered, and who do have a little bit of computer science background. You will end up with students who really care a lot about these issues. You will end up with people who are coming at it from a reverse-engineering skill set.

One of the insidious things about AI harms is that they can enter as soon as you have an idea for a project or a system. The training data obviously can be highly discriminatory. How do you make sure that you’re really identifying the harms?

François: Security is not something you sprinkle at the end of designing and deploying a product. It accompanies you throughout the life cycle of a product, including thinking about new issues that arise once your product is deployed in the world, and that idea really does translate to the space of algorithmic harms. And so, when people think about algorithmic harms, sometimes there is an overemphasis on what happens at the training-data level. Sometimes you hear people [saying] that, if there is a problem with a model, it is because the training [data] was bad. Whereas in reality, similarly to how cybersecurity incidents arise, issues can arise at different moments of this AI or machine-learning life cycle.
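François's point, that harms can surface well past the training-data stage, is the kind of thing a harms researcher might probe with a simple post-deployment check. As a purely hypothetical sketch (the metric, group labels and numbers below are invented for illustration, not taken from the report), one common starting point is comparing a deployed model's outcome rates across groups:

```python
# Hypothetical illustration: a harms researcher auditing a deployed model
# might compare outcome rates across groups, since bias can enter at any
# stage of the ML life cycle, not only in the training data.

def selection_rate(outcomes):
    """Fraction of positive (e.g., 'approved') outcomes in a list of 0/1 values."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in selection rate between any two groups.

    outcomes_by_group: dict mapping a group label to a list of 0/1 outcomes.
    A large gap is a signal worth reporting and investigating further,
    not proof of harm on its own.
    """
    rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Toy example: a model's loan approvals, split by an audited attribute.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% approved
}
print(demographic_parity_gap(outcomes))  # 0.5
```

A check like this needs only black-box access to a system's outputs, which is part of why outside researchers can surface such issues even without seeing the training data.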

Where do you anticipate that these kinds of reports should ultimately be landing? Is it the algorithms’ developers themselves? Regulators? Communities of researchers? Or is it the media and the public?

François: These are different ways you can design it. I think at the end of the day, there are many different stakeholders who do benefit from having visibility on these AI harms. Of course, journalists have done a wonderful job exposing some of those first large-scale AI harms. We also think about vendors themselves. It’s super helpful for the operators. And it’s also helpful for peers.

In cybersecurity, there is this issue of widely shared insecure critical components. If you do find a bug in one of these, it’s actually quite important for the rest of the industry to know, because those components come up again and again, in multiple programs and multiple products. We have the same thing in the algorithmic harms field. There are, for instance, core databases that have been reused for different models. And of course, we think that there are civil society organizations who would really benefit from more visibility. For instance, organizations who are set up to protect civil rights and human rights. And finally, we are hoping that governments and regulators could also be interested in this. Long story short, I have now given you a grand tour of the world to say that multiple organizations would benefit from having more visibility into the different types of algorithmic harms.

How do you make sure that bug bounty hunting isn’t just outsourcing the work to make sure that your algorithmic system isn’t harming people?

François: It should absolutely be one component of many. Having more mature internal practices at tech companies matters a lot. Of course, we need a pen testing and auditing industry. We also need strong CISOs. Bug bounties are helpful. We are slowly going to figure out all these components for dealing with algorithmic harm, and we argue that, if bug bounties play a role, they are definitely not a substitute for a responsible industry that is well regulated. Maybe bug bounties could help! But this is not either/or.

Correction: This story was updated Jan. 27, 2022 to link to a newer version of the paper with additional named co-authors.

