The 1990s might have a lot to teach us about how we should tackle harm from artificial intelligence in the 2020s.
Back then, some companies found they could actually make themselves safer by incentivizing the work of independent “white hat” security researchers who would hunt for issues and disclose them in a process that looked a lot like hacking with guardrails. That’s how the practice of bug bounties became a cornerstone of cybersecurity today.
In a research paper unveiled Thursday, researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji and Joy Buolamwini argue that companies should once again invite their most ardent critics in — this time, by putting bounties on harms that might originate in their artificial intelligence systems.
François, a Fulbright scholar who has advised the French CTO and who played a key role in the U.S. Senate’s probe of Russia’s attempts to influence the 2016 election, published the report through the Algorithmic Justice League, which was founded in 2016 and “combines art and research to illuminate the social implications and harms of artificial intelligence.” The group's founder, Buolamwini, and others affiliated with AJL have long been working to expose racial bias in facial-recognition technologies in particular.
François and Kenway made clear to Protocol that all the efforts in the world to mitigate AI harms are for naught if programs don’t nurture a diverse community of bug hunters, including from beyond computer science, and if companies aren’t ready to “digest” the information and incorporate the feedback so that it actually affects how AI systems are conceived, developed, deployed and maintained.
And yes — regulation is key too, they said.
This interview has been edited for length and clarity.
Talk about bug bounties and cybersecurity research, for anybody who’s not familiar.
François: Bug bounties are a traditional practice in infosec, by which you reward hackers for finding and disclosing bugs to affected organizations. If you look at all those big companies, they usually do have a bug bounty program.
And bug bounties, broadly conceived, have become a crucial part of cybersecurity research, vulnerability testing and information security over the last 25, 30 years, it sounds like?
François: Yes.
So why take that idea and apply it to potential algorithmic harms? Why is this the moment to do that?
François: We wrote about 100 pages on this. [Laughs] This project started because Joy and I were both interested in that idea. I was seeing that, in the space of algorithmic harms, you had these really talented individuals who were going above and beyond to find and document these harms, but A) more often than not, it ended up being unpaid labor, which is a problem, and B) there really [wasn’t] any sort of system of protection around them and there [were] often very adversarial responses from the industry. And it made me think in many ways of the situations that hackers found themselves in before the professionalization of [bug bounties]. And so I was thinking: Maybe we can look to cybersecurity for inspiration on how to better support, how to better protect and how to pay for what is really important labor.
Where have we already seen this? You use Twitter [which has a bug bounty related to potential bias in its image-cropping algorithm] as a case study. You mentioned a Google program that’s been around a few years, and it’s gotten about 1,000 reports documenting evidence of data misuse, like a third party not abiding by its contract terms in using Google data.
François: Examples that we document include [traditional] bug bounties that are stretching to what we call socio-technical issues. And indeed, one of the first things that we looked into was those data abuse bounties, which seemed like an industry response to big issues and big scandals like Cambridge Analytica.
We also saw some really interesting examples that were not as obvious. So we talked a little bit about what Rockstar Games is doing, having a bounty that covers incorrect use of algorithms [that ban cheaters]. I think it’s a good trend, but I think it also warrants more research and more discussions around best practices.
I’m always struck by how much bug bounties exist at the behest of companies that have reason to say: “Look at all the things we do to stop algorithmic harms.” But of course a company doesn’t want to be embarrassed, doesn’t want proprietary systems being probed and potentially copied.
François: There’s a lot of good research on: What does it take for a company to be meaningfully ready to actually bring this in? And there’s a willingness question, of course — like, is the target even willing to hear about how ugly their baby is? Is your company emotionally ready for that? But then there’s a whole organizational readiness question. One of the metaphors that really resonated with us is: It’s not enough to say, “I will ingest that.” What you need is the digestive system to really be able to process and meaningfully use this information.
What can the policy world do? The Computer Fraud and Abuse Act and court interpretations basically push against this kind of research. On the other hand, it has become a fact of life, and hackers and companies have learned to live with each other — a lot of times, through contractual terms.
François: If you’re in the business of protecting independent security researchers, a lot of what you do, and a lot of the institutions who are helping, may actually also be able to help people who do algorithmic harms research. Now, it doesn’t mean that all their legal issues will be covered, but I think there’s a lot to learn. Something that we thought was interesting too is, when you look at the history of [cybersecurity] bug bounties, one of those big moments is the DOD running its own program. We think that’s interesting! Can public institutions also run their own algorithmic harms bounties?
Kenway: There are specific provisions in U.S. law that have been put in place to afford security researchers some degree of protection. One example is the [Digital Millennium Copyright Act] carveout that exists, and I think there are cases where similar carveouts or similar provisions could meaningfully support legal safety for algorithmic harms researchers. But how governments could incentivize or create institutions to support this kind of engagement is, I think, probably a question that will be wide open for the next couple of years.
Get really concrete with me: Who do you envision an algorithmic harms bug bounty researcher or hacker being? Is this a 21-year-old CS major working in her dorm room on a Thursday night before dinner?
François: I think the thing that is most exciting to us is this idea that, if you create a community of practice, you will reach out to people who are very different. So for instance, say we decide that some of the monitoring software that has been deployed in schools throughout the pandemic is a good target to encourage people to find algorithmic harms in. You will likely end up with professors who have been very bothered, and who do have a little bit of computer science background. You will end up with students who really care a lot about these issues. You will end up with people who are coming at it from a reverse-engineering skill set.
One of the insidious things about AI harms is that they can enter as soon as you have an idea for a project or a system. The training data obviously can be highly discriminatory. How do you make sure that you’re really identifying the harms?
François: Security is not something you sprinkle at the end of designing and deploying a product. It accompanies you throughout the life cycle of a product, including thinking about new issues that arise once your product is deployed in the world, and that idea really does translate to the space of algorithmic harms. And so, when people think about algorithmic harms, sometimes there is an overemphasis on what happens at the training-data level. Sometimes you hear people [saying] that, if there is a problem with a model, it is because the training [data] was bad. Whereas in reality, similarly to how cybersecurity incidents arise, issues can arise at different moments of this AI or machine-learning life cycle.
Where do you anticipate that these kinds of reports should ultimately be landing? Is it the algorithms’ developers themselves? Regulators? Communities of researchers? Or is it the media and the public?
François: There are different ways you can design it. I think at the end of the day, there are many different stakeholders who do benefit from having visibility on these AI harms. Of course, journalists have done a wonderful job exposing some of those first large-scale AI harms. We also think about vendors themselves. It’s super helpful for the operators. And it’s also helpful for peers.
In cybersecurity, there is this issue of widely shared insecure critical components. If you do find a bug in one of these, it’s actually quite important for the rest of the industry to know, because those components come up again and again, in multiple programs and multiple products. We do have the same in the algorithmic harms field. There are, for instance, core databases that have been reused for different models. And of course, we think that there are civil society organizations who would really benefit from more visibility. For instance, organizations who are set up to protect civil rights and human rights. And finally, we are hoping that governments and regulators could also be interested in this. Long story short, I have now given you a grand tour of the world to say that multiple organizations would benefit from having more visibility into the different types of algorithmic harms.
How do you make sure that bug bounty hunting isn’t just outsourcing the work of ensuring that your algorithmic system isn’t harming people?
François: It should absolutely be one component of many. Having more mature internal practices for tech companies matters a lot. Of course, we need a pen testing and auditing industry. We also need strong CISOs. Bug bounties are helpful. We are slowly going to figure out all these components for dealing with algorithmic harm, and we definitely argue that, if bug bounties play a role, they are not a substitute for having a responsible industry that is well regulated. Maybe bug bounties could help! But this is not either/or.
Correction: This story was updated Jan. 27, 2022 to link to a newer version of the paper with additional named co-authors.