DARPA is pouring millions into a new AI defense program. Here are the companies leading the charge

Intel and Georgia Tech are spearheading Pentagon-backed efforts to defend against attacks that could trick tomorrow's self-driving cars, facial recognition and weapons detection software.

Is that a person or a stop sign? Intel and Georgia Tech are spearheading efforts against adversarial attacks that fool machine learning systems into making such mistakes.

Photo: Georgia Institute of Technology

The Pentagon is teaming up with some of the biggest names in tech to combat hacks designed to mess with the automated systems we'll rely on in the near future.

In February, DARPA issued a call for proposals for a new program. Like most DARPA projects, it had a fantastic acronym: Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD). It's a multimillion-dollar, four-year initiative that's aiming to create defenses for sensor-based artificial intelligence — think facial recognition programs, voice recognition tools, self-driving cars, weapon-detection software and more.

Today, Protocol can report that DARPA has selected 17 organizations to work on the GARD project, including Johns Hopkins University, Intel, Georgia Tech, MIT, Carnegie Mellon University, SRI International and IBM's Almaden Research Center. Intel will be leading one part of the project with Georgia Tech, focusing on defending against physical adversarial attacks.

Sensors that use AI computer vision algorithms can be fooled by what researchers refer to as adversarial attacks: essentially, any manipulation of the physical world that tricks a system into seeing something other than what's there.

In the nascent self-driving car industry, there's already concern about how these attacks could manifest. Stickers, effectively invisible to the human eye, could render a stop sign unrecognizable. They could be used to trick other cars into clearing the road for your own vehicle, to alter how a roadside sign is read so a vehicle turns into a certain business's parking lot or, in the future, to carry out even more nefarious acts.

"We certainly know from the current world that there are people who want to create mischief, and there are one-off bad actors," Bruce Draper, GARD program manager at DARPA, told Protocol. "We also know that there are larger threats out there, and if you imagine a city that in the future has many, many self-driving cars in it, then if you suddenly cause all those self-driving cars to misbehave, you could really tie up a city. You could have a major impact. An even more frightening scenario would be if the first responder vehicles were self-driving and you could disable the first responders."

The project is split among three groups. One set of organizations will be looking at the theoretical basis for adversarial attacks on AI, why they happen and how a system can be vulnerable. Another group will be building the defenses against these attacks, and the last set of teams will serve as evaluators. Every six months, they'll test the defenses others built by throwing a new attack scenario their way and looking at criteria like effectiveness and practicality.

Over the next four years, DARPA will check in with each organization to evaluate their progress. But this is a research project — at least for now, there's no hard list of deliverables those involved need to provide to the agency.



Intel was chosen to lead the physical adversarial attacks aspect of the project, as DARPA saw promise in the company's experience simulating external environments for self-driving cars. Intel acquired Mobileye, a vehicle computer-vision sensor company, for $15 billion in 2017. Some consider Intel to be a dark horse in the race to build autonomous vehicles. It has made other bets in AI recently, and it shed its smartphone modem business, which had been playing catch-up to rivals like Qualcomm after the company missed the shift of the world's computing needs to mobile. Whether it has enough to win future battles in AI remains to be seen, but it's sitting on a strong base now. The company beat expectations in its last earnings report, posting over $20 billion in revenue for the quarter — nearly $1 billion more than expected. (That was, of course, before the world tumbled into a pandemic.)

The compounding computing problem

Arun Chandrasekaran, vice president and analyst at Gartner, said his research firm has seen an uptick in generative adversarial network (or GAN)-generated malicious attacks. Although the sophistication of both attack and defense technology varies greatly, he said, overall, "the detection and response to it… is not as advanced as the attacks themselves."

We're still "right on the cusp" of this problem, Draper said. The use of algorithms in military and civilian settings has skyrocketed in recent years, from providing movie recommendations to managing the stock market to making decisions on the battlefield. "What you see now is that dam breaking," Draper said. "I have a hard time projecting forward five or 10 years and thinking of any major aspect of either military or civilian life that doesn't have AI embedded into it."

And the problem is compounding: As the machine-learning frameworks and training data that vendors rely on become more widely available to hackers, Chandrasekaran said he believes malicious adversarial attacks will increase. And as the number of attacks increases, so too will their effectiveness.

"Fundamentally, this is really about trying to evade the existing detection systems," Chandrasekaran said. "You have existing systems that rely on a combination of images — in some cases, this could be voice and voice recognition… The fact that you could recreate something that's eerily similar to your voice or your image means that in many cases, you can essentially bypass the accuracy of your existing prevention system." These existing mechanisms must "significantly evolve," he added, to ensure confidence in their security.

Intel is currently focused on the future: plugging vulnerability holes and getting ahead of threats downstream. "An important thing to know about this particular topic is this isn't a today threat," Jason Martin, a senior staff research scientist at Intel Labs, said. But it's a rarity in research to be able to spend time worrying about tomorrow's problems. "It's a nice place to be; it's not a 'panic now' sort of scenario," he said. "It's a 'calmly do the research and come up with the mitigations.'"

Intel and Georgia Tech have partnered on adversarial attack research for years. One of their focuses has been the ease with which bad actors can trick an algorithm into thinking a bird is a bicycle, for example, or into mislabeling a stop sign, just by changing a few pixels.
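
To make that concrete, here is a minimal sketch of one well-known way such pixel-level perturbations are generated, the fast gradient sign method. It is written in PyTorch, and the classifier, input image and label are hypothetical stand-ins; this is not the Intel or Georgia Tech researchers' own code.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the model's loss,
    producing an image that looks unchanged to a human but can flip the prediction.
    `image` is a (1, 3, H, W) tensor with values in [0, 1]; `label` has shape (1,)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: a "bird" image the classifier may now label as something else.
# adversarial = fgsm_perturb(classifier, bird_image, torch.tensor([BIRD_CLASS_ID]))
# classifier(adversarial).argmax(dim=1)
```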

The research so far, led by Duen Horng "Polo" Chau, associate professor of computing at Georgia Tech, has landed on an especially relevant takeaway: If you can't make something invulnerable to attack, then make attacking it computationally infeasible. In some cryptographic systems, for example, an attacker could in principle figure out the key by expending enormous computing resources, but the chance of success is so small that it approaches impossibility. Martin wants to approach defense against physical adversarial attacks in a similar way: "The hope is that the combination of techniques in the defensive realm will make the cost of constructing an adversarial example too expensive," he said.

Intel and Georgia Tech's plan of attack

Intel and Georgia Tech plan to use some of the largest open-source image datasets — ImageNet and Microsoft's Common Objects in Context, for example. Open-source data is a DARPA requirement, and the teams are prioritizing research reproducibility. But those datasets are public and widely used, which raises the question: How does this effort differ from past ones?

Chau argues their innovation lies in how they plan to use the data. The plan boils down to teaching AI "coherence" — helping it see things more holistically or, in a way, exercise common sense. It's something humans typically are born with or learn early on — and it's extremely difficult, if not impossible, to recreate with technology. Intel and Georgia Tech aim to provide three concrete solutions for a nebulous issue: temporal, semantic and spatial coherence.

Temporal coherence here relates to an understanding of physics — things don't suddenly appear or disappear out of nowhere. For example, if a self-driving car registers a human, a stop sign or another object flickering into its view and then vanishing, a hacker could be tampering with its system.
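
As a rough illustration of that idea (not the GARD teams' actual code), a temporal coherence check can be as simple as refusing to trust a detection until it has persisted across several consecutive frames. The tracked object IDs assumed here are hypothetical.

```python
from collections import defaultdict

class TemporalCoherenceFilter:
    """Flag detections that flicker into view and vanish between frames,
    a possible sign of tampering. Illustrative sketch only."""

    def __init__(self, min_consecutive_frames=3):
        self.min_frames = min_consecutive_frames
        self.streaks = defaultdict(int)  # tracked object ID -> consecutive frames seen

    def update(self, detected_ids):
        """Call once per frame with the set of object IDs the detector reported."""
        detected_ids = set(detected_ids)
        for obj_id in list(self.streaks):
            if obj_id not in detected_ids:
                del self.streaks[obj_id]  # the object vanished, so reset its streak
        for obj_id in detected_ids:
            self.streaks[obj_id] += 1
        # Only objects seen in enough consecutive frames are treated as coherent.
        return {obj_id for obj_id, n in self.streaks.items() if n >= self.min_frames}
```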

Semantic coherence relates to meaning. Humans identify things as a sum of their parts — a bird comprises eyes, wings and a beak, for example. The research team's plan is to incorporate a second line of defense into a sensing system — if it registers a bicycle, then it should next check for the wheel, handlebar and pedals. If it doesn't find those components, then something is likely wrong.
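
A minimal sketch of that second line of defense might look like the following, where the part lists and the part_detector function are invented for illustration rather than taken from the research:

```python
# Illustrative part lists; not drawn from the GARD research itself.
EXPECTED_PARTS = {
    "bicycle": {"wheel", "handlebar", "pedal"},
    "bird": {"eye", "wing", "beak"},
}

def semantically_coherent(label, region, part_detector, min_fraction=0.6):
    """Accept a detection only if enough of its expected parts are also found
    inside the detected region. `part_detector(region)` is assumed to return
    the set of part names it sees; that API is hypothetical."""
    expected = EXPECTED_PARTS.get(label)
    if not expected:
        return True  # no part model for this class, so fall back to trusting the label
    found = part_detector(region) & expected
    return len(found) / len(expected) >= min_fraction
```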



Then there's spatial coherence, or knowledge of the relative positioning of things. If an object detector senses people floating in midair, for example, that should be a red flag. And for all three of these strategies, the team hopes to teach object detectors not only to flag an attack but also to correct for it.
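
A comparably simple sketch of a spatial coherence check, again illustrative rather than the teams' implementation, just asks whether a detection sits plausibly on the ground:

```python
def spatially_coherent(box, ground_y, max_gap_pixels=20):
    """Return False for detections that appear to float in midair.
    `box` is (x_min, y_min, x_max, y_max) in image coordinates, with y increasing
    downward; `ground_y` is an estimated ground-line row for that part of the scene.
    The threshold and the ground estimate are illustrative assumptions."""
    _, _, _, y_max = box
    return (ground_y - y_max) <= max_gap_pixels

# A "person" whose bounding box ends 200 pixels above the estimated ground line
# would fail this check and could be flagged for review or correction.
```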

"It's definitely a good place to start," Chandrasekaran said of Intel and Georgia Tech's plan. He compared it to Cornell University research that used a deep forgery discriminator (a system that discriminates between real and fake items) to use human-like judgment to gauge whether or not an image of a face was fake.

Implications for the future

Chau's long-term priority is scalability, he said — making the team's research and techniques transparent by delivering open-source software. He acknowledged the risk involved in publicizing information that bad actors could use to their advantage, but argued it's even more important for people in charge of the technology to be informed. And AI systems are often a black box anyway.

"The reality is that sometimes these AI and machine learning techniques do not work — and more dangerously, sometimes it works this second, and the next second a completely different [outcome is] produced," Chau said. "It's important to know, when it works well, why does it work so well and, more importantly, when someone is really deliberately attacking it, intentionally, what is it really exploiting?"

In order for the team to counter threats, it's vital for them to proactively discover vulnerabilities that bad actors aren't yet aware of. If they don't, bad actors could end up with the tools to pick apart any new techniques the team deploys.

"Because we're not convinced that we'll necessarily find the perfect defense, we're trying to advance the theory [and] figure out, 'What are the limits?'" Draper said. "We're going to try to defend them as best we can, make them as invulnerable as possible, but we also want to have enough of a theoretical background to develop the theory in such a way that we can tell people, when they're deploying an AI system, the extent to which it may be vulnerable or not."

And for the critical systems of tomorrow, that could make all the difference. "If you're doing a system whose job it is to recommend movies, maybe you're not that terrified of an attack — how much damage could an adversary do?" Draper said. "But if you're doing a self-driving car, then someone's life is on the line."
