Hayden Field

DARPA is pouring millions into a new AI defense program. Here are the companies leading the charge

Intel and Georgia Tech are spearheading Pentagon-backed efforts to defend against attacks that could trick tomorrow's self-driving cars, facial recognition and weapons detection software.

Is that a person or a stop sign? Intel and Georgia Tech are spearheading efforts against adversarial attacks that fool machine learning systems into making such mistakes.

Photo: Georgia Institute of Technology

The Pentagon is teaming up with some of the biggest names in tech to combat hacks designed to mess with the automated systems we'll rely on in the near future.

In February, DARPA issued a call for proposals for a new program. Like most DARPA projects, it had a fantastic acronym: Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD). It's a multimillion-dollar, four-year initiative that's aiming to create defenses for sensor-based artificial intelligence — think facial recognition programs, voice recognition tools, self-driving cars, weapon-detection software and more.

Today, Protocol can report that DARPA has selected 17 organizations to work on the GARD project, including Johns Hopkins University, Intel, Georgia Tech, MIT, Carnegie Mellon University, SRI International and IBM's Almaden Research Center. Intel will be leading one part of the project with Georgia Tech, focusing on defending against physical adversarial attacks.

Sensors that use AI computer vision algorithms can be fooled by what researchers refer to as adversarial attacks: physical-world manipulations that trick a system into seeing something other than what's there.

In the nascent self-driving car industry, there's already concern about how these attacks could manifest. Stickers effectively invisible to the human eye could render a stop sign unrecognizable, trick other cars into clearing the road for your own vehicle or alter a roadside sign so that vehicles turn into a certain business's parking lot. In the future, they could enable even more nefarious acts.

"We certainly know from the current world that there are people who want to create mischief, and there are one-off bad actors," Bruce Draper, GARD program manager at DARPA, told Protocol. "We also know that there are larger threats out there, and if you imagine a city that in the future has many, many self-driving cars in it, then if you suddenly cause all those self-driving cars to misbehave, you could really tie up a city. You could have a major impact. An even more frightening scenario would be if the first responder vehicles were self-driving and you could disable the first responders."

The project is split among three groups. One set of organizations will be looking at the theoretical basis for adversarial attacks on AI, why they happen and how a system can be vulnerable. Another group will be building the defenses against these attacks, and the last set of teams will serve as evaluators. Every six months, they'll test the defenses others built by throwing a new attack scenario their way and looking at criteria like effectiveness and practicality.

Over the next four years, DARPA will check in with each organization to evaluate their progress. But this is a research project — at least for now, there's no hard list of deliverables those involved need to provide to the agency.

Intel was chosen to lead the physical adversarial attacks aspect of the project, as DARPA saw promise in the company's experience in simulating external environments for self-driving cars. Intel acquired Mobileye, a vehicle computer-vision sensor company, for $15 billion in 2017. Some consider Intel to be a dark horse in the race to build autonomous vehicles. It's made other bets in AI recently, and it also shed its smartphone modem business, which had been playing catch-up to rivals like Qualcomm after the company missed the shift of the world's computing needs to mobile. Whether it has enough to win future battles in AI remains to be seen, but it's sitting on a strong base now. The company beat expectations in its last earnings report, posting over $20 billion in revenue for the quarter, nearly $1 billion more than expected. (That was, of course, before the world tumbled into a pandemic.)

The compounding computing problem

Arun Chandrasekaran, vice president and analyst at Gartner, said his research firm has seen an uptick in malicious attacks generated by generative adversarial networks, or GANs. Although the sophistication of both attack and defense technology varies greatly, he said, overall, "the detection and response to it… is not as advanced as the attacks themselves."

We're still "right on the cusp" of this problem, Draper said. The use of algorithms in military and civilian settings has skyrocketed in recent years, from providing movie recommendations to managing the stock market to making decisions on the battlefield. "What you see now is that dam breaking," Draper said. "I have a hard time projecting forward five or 10 years and thinking of any major aspect of either military or civilian life that doesn't have AI embedded into it."

And the problem is compounding: As the machine-learning frameworks and training data available to vendors become more widely available to hackers, Chandrasekaran said he believes malicious adversarial attacks will increase. And as the number of attacks increases, so too will their effectiveness.

"Fundamentally, this is really about trying to evade the existing detection systems," Chandrasekaran said. "You have existing systems that rely on a combination of images — in some cases, this could be voice and voice recognition… The fact that you could recreate something that's eerily similar to your voice or your image means that in many cases, you can essentially bypass the accuracy of your existing prevention system." These existing mechanisms must "significantly evolve," he added, to ensure confidence in their security.

Intel is currently focused on the future: plugging vulnerability holes and getting ahead of downstream threats. "An important thing to know about this particular topic is this isn't a today threat," Jason Martin, a senior staff research scientist at Intel Labs, said. But it's a rarity in research to be able to spend time worrying about tomorrow's problems. "It's a nice place to be; it's not a 'panic now' sort of scenario," he said. "It's a 'calmly do the research and come up with the mitigations.'"

Intel and Georgia Tech have partnered on adversarial attack research for years. One of their focuses has been the ease with which bad actors can trick an algorithm into thinking a bird is a bicycle, for example, or into mislabeling a stop sign, just by changing a few pixels.
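To make the idea concrete, here is a minimal sketch of the fast gradient sign method, one widely studied way to compute such pixel-level perturbations. It assumes a PyTorch image classifier; the method is illustrative and not necessarily the one used in the Intel and Georgia Tech research.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Nudge each pixel by at most epsilon in the direction that most
    increases the classifier's loss. The result looks nearly identical
    to a human, but the predicted class often flips."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```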

The research so far, led by Duen Horng "Polo" Chau, associate professor of computing at Georgia Tech, has landed on an especially relevant takeaway: If you can't make something invulnerable to attack, make attacking it computationally infeasible. In some cryptography systems, for example, an attacker has some probability of recovering the key by expending considerable computing resources, but success is so improbable that it approaches impossible. Martin wants to approach defense against physical adversarial attacks the same way: "The hope is that the combination of techniques in the defensive realm will make the cost of constructing an adversarial example too expensive," he said.

Intel and Georgia Tech's plan of attack

Intel and Georgia Tech plan to use some of the largest open-source image datasets, such as ImageNet and Microsoft's Common Objects in Context. Open-source data is a DARPA requirement, and the teams are prioritizing research reproducibility. But those datasets are public and widely used, which raises the question: How does this effort differ from past ones?

Chau argues their innovation lies in how they plan to use the data. The plan boils down to teaching AI "coherence": helping it see things more holistically or, in a way, exercise common sense. It's something humans are typically born with or learn early on, and it's extremely difficult, if not impossible, to recreate with technology. Intel and Georgia Tech aim to provide three concrete solutions for a nebulous issue: temporal, semantic and spatial coherence.

Temporal coherence here relates to an understanding of physics: Objects don't typically appear or vanish out of nowhere. If a self-driving car registers a human, a stop sign or another object flickering into its view and then disappearing, a hacker could be tampering with its system.
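As a hypothetical sketch, a temporal-coherence check could track detections across video frames and flag objects that blink in and out faster than physics allows. The frame format and threshold below are illustrative assumptions, not the GARD design.

```python
def flag_flickering(detections_per_frame, min_consecutive=3):
    """detections_per_frame: a list of sets of object labels, one set
    per video frame. Returns labels that appeared and vanished too
    quickly to be physically plausible."""
    suspicious = set()
    streak = {}  # label -> consecutive frames seen so far
    for frame in detections_per_frame:
        for label in frame:
            streak[label] = streak.get(label, 0) + 1
        for label in list(streak):
            if label not in frame:
                if streak[label] < min_consecutive:
                    suspicious.add(label)
                del streak[label]
    return suspicious
```

For example, flag_flickering([{"person"}, set(), {"person"}]) returns {"person"}, because the detection flickers rather than persisting across frames.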

Semantic coherence relates to meaning. Humans identify things as a sum of their parts — a bird comprises eyes, wings and a beak, for example. The research team's plan is to incorporate a second line of defense into a sensing system — if it registers a bicycle, then it should next check for the wheel, handlebar and pedals. If it doesn't find those components, then something is likely wrong.
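A minimal sketch of that second line of defense, assuming a separate detector for component parts already exists; the part lists, interface and threshold here are illustrative assumptions.

```python
# Expected component parts per top-level class (illustrative).
EXPECTED_PARTS = {
    "bicycle": {"wheel", "handlebar", "pedal"},
    "bird": {"eye", "wing", "beak"},
}

def semantically_coherent(label, detected_parts, min_fraction=0.5):
    """Trust a detection only if enough of its expected parts are
    also detected in the same region of the image."""
    expected = EXPECTED_PARTS.get(label)
    if not expected:
        return True  # no part model for this class; nothing to check
    return len(expected & detected_parts) / len(expected) >= min_fraction
```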

Then there's spatial coherence, or knowledge of the relative positioning of things. If an object detector senses people floating in midair, for example, that should be a red flag. And for all three of these strategies, the team hopes not only to teach object detectors to flag an attack but also to correct for it.
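As with the other two checks, a spatial-coherence test can be sketched simply. This hypothetical version assumes bounding boxes in pixel coordinates with y increasing downward and an estimated ground line; all names and thresholds are illustrative.

```python
def spatially_coherent(person_box, ground_y, tolerance=20):
    """Return False if the bottom edge of a detected person's bounding
    box sits well above the estimated ground line, i.e. the person
    appears to float in midair."""
    x, y, width, height = person_box  # top-left corner plus size
    bottom = y + height
    return bottom >= ground_y - tolerance
```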

"It's definitely a good place to start," Chandrasekaran said of Intel and Georgia Tech's plan. He compared it to Cornell University research that used a deep forgery discriminator (a system that discriminates between real and fake items) to use human-like judgment to gauge whether or not an image of a face was fake.

Implications for the future

Chau's long-term priority is scalability, he said — making the team's research and techniques transparent by delivering open-source software. He acknowledged the risk involved in publicizing information that bad actors could use to their advantage, but argued it's even more important for people in charge of the technology to be informed. And AI systems are often a black box anyway.

"The reality is that sometimes these AI and machine learning techniques do not work — and more dangerously, sometimes it works this second, and the next second a completely different [outcome is] produced," Chau said. "It's important to know, when it works well, why does it work so well and, more importantly, when someone is really deliberately attacking it, intentionally, what is it really exploiting?"

For the team to counter threats, it's vital to proactively discover vulnerabilities that bad actors aren't yet aware of. Otherwise, those actors could end up with the tools to pick apart any new defensive techniques the team deploys.

"Because we're not convinced that we'll necessarily find the perfect defense, we're trying to advance the theory [and] figure out, 'What are the limits?'" Draper said. "We're going to try to defend them as best we can, make them as invulnerable as possible, but we also want to have enough of a theoretical background to develop the theory in such a way that we can tell people, when they're deploying an AI system, the extent to which it may be vulnerable or not."

And for the critical systems of tomorrow, that could make all the difference. "If you're doing a system whose job it is to recommend movies, maybe you're not that terrified of an attack — how much damage could an adversary do?" Draper said. "But if you're doing a self-driving car, then someone's life is on the line."
