AI bias is rampant. Bug bounties could help catch it.

A Q&A with cybersecurity guru Camille François about her new research on bug bounties, and the hope that they could help rein in AI harms.

Developers of harmful AI need to know "how ugly their baby is," Camille François said.

Illustration: clarote

The 1990s might have a lot to teach us about how we should tackle harm from artificial intelligence in the 2020s.

Back then, some companies found they could actually make themselves safer by incentivizing the work of independent “white hat” security researchers who would hunt for issues and disclose them in a process that looked a lot like hacking with guardrails. That’s how the practice of bug bounties became a cornerstone of cybersecurity today.

In a research paper unveiled Thursday, researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji and Joy Buolamwini argue that companies should once again invite their most ardent critics in — this time, by putting bounties on harms that might originate in their artificial intelligence systems.

François, a Fulbright scholar who has advised France's chief technology officer and who played a key role in the U.S. Senate's probe of Russia's attempts to influence the 2016 election, published the report through the Algorithmic Justice League, which was founded in 2016 and "combines art and research to illuminate the social implications and harms of artificial intelligence." The group's founder, Buolamwini, and others affiliated with AJL have long been working to expose racial bias in facial-recognition technologies in particular.

François and Kenway made clear to Protocol that all the efforts in the world to mitigate AI harms are for naught if programs don't nurture a diverse community of bug hunters — including from beyond computer science — and if companies aren't ready to "digest" the information, incorporating feedback so that it actually affects how AI systems are conceived, developed, deployed and maintained.

And yes — regulation is key too, they said.

This interview has been edited for length and clarity.

Talk about bug bounties and cybersecurity research, for anybody who’s not familiar.

François: Bug bounties are a traditional practice in infosec, by which you reward hackers for finding and disclosing bugs to affected organizations. If you look at all those big companies, they usually do have a bug bounty program.

And bug bounties, broadly conceived, have become a crucial part of cybersecurity research, vulnerability testing and information security over the last 25, 30 years, it sounds like?

François: Yes.

So why take that idea and apply it to potential algorithmic harms? Why is this the moment to do that?

François: We wrote about 100 pages on this. [Laughs] This project started because Joy and I were both interested in that idea. I was seeing that, in the space of algorithmic harms, you had these really talented individuals who were going above and beyond to find and document these harms, but A) more often than not, it ended up being unpaid labor, which is a problem, and B) there really [wasn’t] any sort of system of protection around them and there [were] often very adversarial responses from the industry. And it made me think in many ways of the situations that hackers found themselves in before the professionalization of [bug bounties]. And so I was thinking: Maybe we can look to cybersecurity for inspiration on how to better support, how to better protect and how to pay for what is really important labor.

Where have we already seen this? You use Twitter [which has a bug bounty related to potential bias in its image-cropping algorithm] as a case study. You mentioned a Google program that’s been around a few years, and it’s gotten about 1,000 reports documenting evidence of data misuse, like a third party not abiding by its contract terms in using Google data.

François: Examples that we document talk about [traditional] bug bounties that are stretching to cover what we call socio-technical issues. And indeed, one of the first things we looked into was those data abuse bounties, which seemed like an industry response to big issues and big scandals like Cambridge Analytica.

We also saw some really interesting examples that were not as obvious. So we talked a little bit about what Rockstar Games is doing, having a bounty that talks about incorrect use of algorithms [that ban cheaters]. I think it’s a good trend, but I think it also warrants more research and more discussions around best practices.

I’m always struck by how much bug bounties exist at the behest of companies that have reason to say: "Look, this is all the things we do to stop algorithmic harms." But of course a company doesn’t want to be embarrassed, doesn’t want proprietary systems being probed and potentially copied.

François: There’s a lot of good research on: What does it take for a company to be meaningfully ready to actually bring this in? And there’s a willingness question, of course — like, is the target even willing to hear about how ugly their baby is? Is your company emotionally ready for that? But then there’s a whole organizational readiness question. One of the metaphors that really resonated with us is: It’s not enough to say, "I will ingest that." What you need is a digestive system that can actually process and meaningfully use this information.

Bug bounties have become a crucial part of cybersecurity research, vulnerability testing and information security. Illustration: clarote

What can the policy world do? The Computer Fraud and Abuse Act and court interpretations basically push against this kind of research. On the other hand, it has become a fact of life, and hackers and companies have learned to live with each other — a lot of times, through contractual terms.

François: If you’re in the business of protecting independent security researchers, a lot of what you do, and a lot of the institutions that help with that work, may also be able to help people who do algorithmic harms research. Now, it doesn’t mean that all their legal issues will be covered, but I think there’s a lot to learn. Something that we thought was interesting too is, when you look at the history of [cybersecurity] bug bounties, one of those big moments is the DOD running its own program. We think that’s interesting! Can public institutions also run their own algorithmic harms bounties?

Kenway: There are specific provisions in U.S. law that have been put in place to afford security researchers some degree of protection. For example, the [Digital Millennium Copyright Act] carveout that exists, and I think there are some examples where similar carveouts or similar provisions could meaningfully support legal safety for algorithmic harms researchers. But how governments could incentivize or create institutions to support this kind of engagement, I think is probably a question that will be wide open for the next couple of years.

Get really concrete with me: Who do you envision an algorithmic harms bug bounty researcher or hacker being? Is this a 21-year-old CS major working in her dorm room on a Thursday night before dinner?

François: I think the thing that is most exciting to us is this idea that, if you create a community of practice, you will reach out to people who are very different. So for instance, say we decide that some of the monitoring software that has been deployed in schools throughout the pandemic is a good target to encourage people to find algorithmic harms in. You will likely end up with professors who have been very bothered, and who do have a little bit of computer science background. You will end up with students who really care a lot about these issues. You will end up with people who are coming at it from a reverse-engineering skill set.

One of the insidious things about AI harms is that they can enter as soon as you have an idea for a project or a system. The training data obviously can be highly discriminatory. How do you make sure that you’re really identifying the harms?

François: Security is not something you sprinkle at the end of designing and deploying a product. It accompanies you throughout the life cycle of a product, including thinking about new issues that arise once your product is deployed in the world, and that idea really does translate to the space of algorithmic harms. And so, when people think about algorithmic harms, sometimes there is an overemphasis on what happens at the training-data level. Sometimes you hear people [saying] that, if there is a problem with a model, it is because the training [data] was bad. Whereas in reality, similarly to how cybersecurity incidents arise, issues can arise at different moments of this AI or machine-learning life cycle.
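To make the life-cycle point concrete, here is a minimal, hypothetical Python sketch (not drawn from the AJL paper) of the kind of check an algorithmic harms hunter might run on a deployed model's logged decisions rather than on its training data. The groups, threshold and function names are illustrative assumptions.

```python
# Hypothetical post-deployment bias check: the data, threshold and function
# names are illustrative assumptions, not any company's actual bounty tooling.

def selection_rate(outcomes):
    """Share of positive (1) decisions among logged 0/1 outcomes."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

if __name__ == "__main__":
    # Example decisions logged from a deployed model, split by demographic group.
    group_a = [1, 0, 1, 1, 0, 1, 1, 0]  # selection rate 0.625
    group_b = [0, 0, 1, 0, 0, 1, 0, 0]  # selection rate 0.25

    gap = demographic_parity_gap(group_a, group_b)
    ALERT_THRESHOLD = 0.2  # arbitrary reporting threshold for this sketch

    print(f"Demographic parity gap: {gap:.3f}")
    if gap > ALERT_THRESHOLD:
        print("Gap exceeds threshold; this is the kind of finding a bounty report might document.")
```

The check is deliberately simple; the point, per François, is that such harms can surface well after training, in how a model behaves once it is out in the world.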

Where do you anticipate that these kinds of reports should ultimately be landing? Is it the algorithms’ developers themselves? Regulators? Communities of researchers? Or is it the media and the public?

François: There are different ways you can design it. I think at the end of the day, there are many different stakeholders who do benefit from having visibility on these AI harms. Of course, journalists have done a wonderful job exposing some of those first large-scale AI harms. We also think about vendors themselves. It’s super helpful for the operators. And it’s also helpful for peers.

In cybersecurity, there is this issue of widely shared insecure critical components. If you do find a bug in one of these, it’s actually quite important for the rest of the industry to know, because those components come up again and again in multiple programs and multiple products. We do have the same in the algorithmic harms field. There are, for instance, core databases that have been reused for different models. And of course, we think that there are civil society organizations who would really benefit from more visibility. For instance, organizations who are set up to protect civil rights and human rights. And finally, we are hoping that governments and regulators could also be interested in this. Long story short, I have now given you a grand tour of the world to say that multiple organizations would benefit from having more visibility into the different types of algorithmic harms.

How do you make sure that bug bounty hunting isn’t just outsourcing the work to make sure that your algorithmic system isn’t harming people?

François: It should absolutely be one component of many. Having more mature internal practices for tech companies matters a lot. Of course, we need a pen testing and auditing industry. We also need strong CISOs. Bug bounties are helpful. We are slowly going to figure out all these components for dealing with algorithmic harm, and we definitely argue that, if bug bounties play a role, they are definitely not a substitute for a responsible industry that is well regulated. Maybe bug bounties could help! But this is not either/or.


Correction: This story was updated Jan. 27, 2022 to link to a newer version of the paper with additional named co-authors.
