Power

DARPA is pouring millions into a new AI defense program. Here are the companies leading the charge

Intel and Georgia Tech are spearheading Pentagon-backed efforts to defend against attacks that could trick tomorrow's self-driving cars, facial recognition and weapons detection software.

Image: A machine learning algorithm being spoofed by an adversarial attack. Is that a person or a stop sign? Intel and Georgia Tech are spearheading efforts against adversarial attacks that fool machine learning systems into making such mistakes. Photo: Georgia Institute of Technology

The Pentagon is teaming up with some of the biggest names in tech to combat hacks designed to mess with the automated systems we'll rely on in the near future.

In February, DARPA issued a call for proposals for a new program. Like most DARPA projects, it had a fantastic acronym: Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD). It's a multimillion-dollar, four-year initiative that's aiming to create defenses for sensor-based artificial intelligence — think facial recognition programs, voice recognition tools, self-driving cars, weapon-detection software and more.

Today, Protocol can report that DARPA has selected 17 organizations to work on the GARD project, including Johns Hopkins University, Intel, Georgia Tech, MIT, Carnegie Mellon University, SRI International and IBM's Almaden Research Center. Intel will be leading one part of the project with Georgia Tech, focusing on defending against physical adversarial attacks.

Sensors that use AI computer vision algorithms can be fooled by what researchers call adversarial attacks: essentially, any manipulation of the physical world that tricks a system into seeing something other than what's there.
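
To make the idea concrete, here is a minimal sketch of a gradient-based adversarial perturbation, the well-known fast gradient sign method, written in PyTorch. It illustrates the general class of attack rather than the physical, sticker-style attacks GARD focuses on, and the classifier passed in is assumed to be any off-the-shelf image model.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Craft an adversarial version of `image` with the fast gradient sign method.

    Assumes `model` maps a (1, C, H, W) tensor of pixels in [0, 1] to class
    logits, and `true_label` is a tensor holding the correct class index.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

With a small enough epsilon the change is imperceptible to a human viewer, yet it is often enough to flip the model's prediction; physical attacks such as adversarial stickers apply the same principle to objects in the real world rather than to pixels in a file.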

In the nascent self-driving car industry, there's already concern about how these attacks could manifest. Stickers that are effectively invisible to the human eye could render a stop sign unrecognizable. The same kind of trick could be used to get other cars to clear the road for your own vehicle, to alter a roadside sign so that vehicles turn into a certain business's parking lot or, in the future, to pull off even more nefarious acts.

"We certainly know from the current world that there are people who want to create mischief, and there are one-off bad actors," Bruce Draper, GARD program manager at DARPA, told Protocol. "We also know that there are larger threats out there, and if you imagine a city that in the future has many, many self-driving cars in it, then if you suddenly cause all those self-driving cars to misbehave, you could really tie up a city. You could have a major impact. An even more frightening scenario would be if the first responder vehicles were self-driving and you could disable the first responders."

The project is split among three groups. One set of organizations will be looking at the theoretical basis for adversarial attacks on AI, why they happen and how a system can be vulnerable. Another group will be building the defenses against these attacks, and the last set of teams will serve as evaluators. Every six months, they'll test the defenses others built by throwing a new attack scenario their way and looking at criteria like effectiveness and practicality.

Over the next four years, DARPA will check in with each organization to evaluate their progress. But this is a research project — at least for now, there's no hard list of deliverables those involved need to provide to the agency.



Intel was chosen to lead the physical adversarial attack portion of the project, as DARPA saw promise in the company's experience simulating external environments for self-driving cars. Intel acquired Mobileye, a vehicle computer-vision sensor company, for $15 billion in 2017. Some consider Intel to be a dark horse in the race to build autonomous vehicles. It's made other bets in AI recently, and it also shed its smartphone modem business, which had been playing catch-up to rivals like Qualcomm after the company missed the shift of the world's computing needs to mobile. Whether it has enough to win future battles in AI remains to be seen, but it's sitting on a strong base now. The company beat expectations in its last earnings report, posting over $20 billion in revenue for the quarter — nearly $1 billion more than expected. (That was, of course, before the world tumbled into a pandemic.)

The compounding computing problem

Arun Chandrasekaran, vice president and analyst at Gartner, said his research firm has seen an uptick in generative adversarial network (or GAN)-generated malicious attacks. Although the sophistication of both attack and defense technology varies greatly, he said, overall, "the detection and response to it… is not as advanced as the attacks themselves."

We're still "right on the cusp" of this problem, Draper said. The use of algorithms in military and civilian settings has skyrocketed in recent years, from providing movie recommendations to managing the stock market to making decisions on the battlefield. "What you see now is that dam breaking," Draper said. "I have a hard time projecting forward five or 10 years and thinking of any major aspect of either military or civilian life that doesn't have AI embedded into it."

And the problem is compounding: As the machine-learning frameworks and training data that vendors rely on become more widely available to hackers, Chandrasekaran said he believes malicious adversarial attacks will increase. And as the number of attacks grows, so too will their effectiveness.

"Fundamentally, this is really about trying to evade the existing detection systems," Chandrasekaran said. "You have existing systems that rely on a combination of images — in some cases, this could be voice and voice recognition… The fact that you could recreate something that's eerily similar to your voice or your image means that in many cases, you can essentially bypass the accuracy of your existing prevention system." These existing mechanisms must "significantly evolve," he added, to ensure confidence in their security.

Intel is currently focused on the future: plugging vulnerability holes and getting ahead of threats downstream. "An important thing to know about this particular topic is this isn't a today threat," Jason Martin, a senior staff research scientist at Intel Labs, said. But it's a rarity in research to be able to spend time worrying about tomorrow's problems. "It's a nice place to be; it's not a 'panic now' sort of scenario," he said. "It's a 'calmly do the research and come up with the mitigations.'"

Intel and Georgia Tech have partnered on adversarial attack research for years. One of their focuses has been the ease with which bad actors can trick an algorithm into thinking a bird is a bicycle, for example, or into mislabeling a stop sign, just by changing a few pixels.

The research so far, led by Duen Horng "Polo" Chau, associate professor of computing at Georgia Tech, has landed on an especially relevant takeaway: If you can't make something invulnerable to attack, then make attacking it computationally infeasible. For example, in some cryptographic systems an attacker could, in principle, figure out the key by expending enormous computing resources, but the odds of success are so low that it is effectively impossible. Martin wants to approach the defense against physical adversarial attacks in a similar way: "The hope is that the combination of techniques in the defensive realm will make the cost of constructing an adversarial example too expensive," he said.

Intel and Georgia Tech's plan of attack

Intel and Georgia Tech plan to use some of the largest open-source image datasets — ImageNet and Microsoft's Common Objects in Context, for example. Open-source data is a DARPA requirement, and the teams are prioritizing research reproducibility. But those datasets are public and widely used, which raises the question: How does this effort differ from past ones?

Chau argues their innovation lies in how they plan to use the data. The plan boils down to teaching AI "coherence" — helping it see things more holistically, or, in a way, exercise common sense. It's something humans typically are born with or learn early on — and it's extremely difficult, if not impossible, to recreate with technology. Intel and Georgia Tech aim to provide three concrete solutions for a nebulous issue: temporal, semantic and spatial coherence.

Temporal coherence here relates to an understanding of physics: things don't typically appear or disappear out of nowhere. For example, if a self-driving car registers a human, a stop sign or another object flickering into view and then vanishing, a hacker could be tampering with its system.
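
As a rough illustration of that idea, here is a hedged sketch that checks detections across a short sliding window of frames and flags labels that blink in and out. The input format and thresholds are assumptions made for this example, not the GARD teams' actual implementation.

```python
from collections import deque


def temporal_coherence_flags(frame_detections, window=10, min_presence=0.6):
    """Flag object labels that flicker in and out of view implausibly fast.

    Assumes `frame_detections` is a list of per-frame sets of labels, e.g.
    [{"stop_sign"}, set(), {"stop_sign"}, ...]. Real objects rarely blink in
    and out of existence, so a label present in only a few frames of the
    window is treated as suspicious.
    """
    recent = deque(maxlen=window)
    suspicious = []
    for frame_index, labels in enumerate(frame_detections):
        recent.append(labels)
        if len(recent) < window:
            continue  # Wait until a full window of frames is available.
        for label in set().union(*recent):
            presence = sum(label in frame for frame in recent) / window
            if presence < min_presence:
                suspicious.append((frame_index, label, presence))
    return suspicious
```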

Semantic coherence relates to meaning. Humans identify things as a sum of their parts — a bird comprises eyes, wings and a beak, for example. The research team's plan is to incorporate a second line of defense into a sensing system — if it registers a bicycle, then it should next check for the wheel, handlebar and pedals. If it doesn't find those components, then something is likely wrong.
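
Here is a minimal sketch of that second line of defense, assuming a secondary detector reports which parts it found inside the object's bounding box; the part lists and the interface are invented for illustration.

```python
# Illustrative, hand-picked part lists; a real system would learn or curate these.
EXPECTED_PARTS = {
    "bicycle": {"wheel", "handlebar", "pedal"},
    "bird": {"eye", "wing", "beak"},
}


def semantically_coherent(object_label, detected_parts, min_fraction=0.5):
    """Return True if enough of the object's expected parts were also detected.

    `detected_parts` is the set of part labels a secondary detector found
    inside the object's bounding box. A "bicycle" with no wheels, handlebar
    or pedals is likely a misdetection or an adversarial example.
    """
    expected = EXPECTED_PARTS.get(object_label)
    if not expected:
        return True  # No part model for this class, so nothing to check.
    return len(expected & detected_parts) / len(expected) >= min_fraction
```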



Then there's spatial coherence, or knowledge of the relative positioning of things. If an object detector senses people floating in midair, for example, then that should be a red flag. And for all three of these strategies, the team hopes to not only teach object detectors to flag an attack but also correct it.
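
A similarly hedged sketch of a spatial-coherence check flags detections whose bounding boxes sit where the object physically shouldn't be, such as a person whose box ends well above the ground. The box format and the fixed ground band are simplifying assumptions.

```python
def spatial_coherence_flags(detections, image_height, ground_band=0.25):
    """Flag person detections that appear to float in midair.

    Assumes `detections` is a list of (label, (x1, y1, x2, y2)) tuples in pixel
    coordinates with y increasing downward. As a crude stand-in for real scene
    geometry, a person's box bottom is expected to land in the lower portion
    of the frame, near the ground.
    """
    ground_y = image_height * (1.0 - ground_band)
    flags = []
    for label, (x1, y1, x2, y2) in detections:
        if label == "person" and y2 < ground_y:
            flags.append((label, (x1, y1, x2, y2)))
    return flags
```

A production system would derive the ground plane from camera geometry rather than a fixed band, but the principle is the same: detections that violate everyday physics deserve a second look, and ideally a correction.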

"It's definitely a good place to start," Chandrasekaran said of Intel and Georgia Tech's plan. He compared it to Cornell University research that used a deep forgery discriminator (a system that discriminates between real and fake items) to use human-like judgment to gauge whether or not an image of a face was fake.

Implications for the future

Chau's long-term priority is scalability, he said — making the team's research and techniques transparent by delivering open-source software. He acknowledged the risk involved in publicizing information that bad actors could use to their advantage, but argued it's even more important for people in charge of the technology to be informed. And AI systems are often a black box anyway.

"The reality is that sometimes these AI and machine learning techniques do not work — and more dangerously, sometimes it works this second, and the next second a completely different [outcome is] produced," Chau said. "It's important to know, when it works well, why does it work so well and, more importantly, when someone is really deliberately attacking it, intentionally, what is it really exploiting?"

For the team to counter threats, it's vital to proactively discover vulnerabilities that bad actors aren't yet aware of. Otherwise, those bad actors could end up with the tools to dismantle any new defensive techniques the team deploys.

"Because we're not convinced that we'll necessarily find the perfect defense, we're trying to advance the theory [and] figure out, 'What are the limits?'" Draper said. "We're going to try to defend them as best we can, make them as invulnerable as possible, but we also want to have enough of a theoretical background to develop the theory in such a way that we can tell people, when they're deploying an AI system, the extent to which it may be vulnerable or not."

And for the critical systems of tomorrow, that could make all the difference. "If you're doing a system whose job it is to recommend movies, maybe you're not that terrified of an attack — how much damage could an adversary do?" Draper said. "But if you're doing a self-driving car, then someone's life is on the line."
