Security teams are skeptical of AI. Attack prevention products could change that.

If new technology for AI-driven attack prediction and prevention lives up to its promise, it could enable major improvements for cyber defense. Darktrace just released new products that will test the theory.

Photo: Darktrace

With products for AI-powered attack prevention moving into commercialization, the next wave of artificial intelligence and machine learning for security is starting to take shape.

If the technology lives up to its promise, it could enable major gains in cyber defense and help repair the poor image that AI/ML has among many cybersecurity professionals, experts told Protocol.

On Tuesday, Darktrace unveiled its first products for AI-driven cyberattack prevention. The company said that its new Prevent product family will be generally available on Aug. 1, joining its existing portfolio of attack detection and response offerings.

By using AI to model and emulate potential attacks, "you've just created a 'digital bad guy'" to help inform cyber defense efforts, said Mark Driver, a research vice president at Gartner.

For defenders, the problem right now is that the volume and velocity of attacks coming in are simply overwhelming, Driver said. "The only way to deal with this is to start removing those attack vectors proactively."

And AI/ML is the most scalable way to do so. We're still two or three years from AI-powered attack prevention becoming a "should-have" for businesses, but "it's going to be absolutely huge" over the longer term, Driver said.

While many security professionals have grown numb to the claims of what AI and ML could do thanks to years of hype, attack prevention is a new use for machine intelligence in cyber defense.

With this approach, "you are using AI to be a bit of a crystal ball," said Nicole Eagan, chief strategy officer and AI officer at Darktrace, who previously served as CEO of the company from 2014 to 2020. "It's very different than using AI [for] reacting to something that's happening."

Darktrace, which was founded in 2013, has a track record of spotting where AI/ML can be applied in cybersecurity. The company was early to the idea that the technologies could be used for improved detection of cyberattacks by sifting through reams of data. Today, AI/ML is ubiquitous in detection tools.

Darktrace then expanded into using its self-learning AI to respond to malicious activities once they've been detected.

Now, the company is looking to enable cybersecurity teams to get out ahead of the threats. Darktrace unveiled two new products that will make up its Prevent product family — an attack surface management product for locking down external assets and an "end-to-end" product that can be used to harden all of a customer's environments, both internal and external, against cyberattacks.

At the core of the Darktrace Prevent technology is the application of AI/ML to what's known as "attack path modeling." The idea is to use AI to map out all of the paths that an attacker might take to find the most valuable or sensitive data in an organization's IT systems.

The technology can then prioritize which paths the organization should focus its energies on blocking. It can also feed that information back into the Darktrace detection and response engine, so it pays especially close attention to those critical pathways, according to the company.
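Darktrace hasn't published the internals of its attack path modeling, but the general idea can be illustrated as graph search over an asset graph: enumerate the routes an attacker could take from an entry point to a critical asset, then rank the edges that appear on the most routes as the cheapest places to cut access. The graph below, and all of its node names, are hypothetical — a minimal sketch, not Darktrace's implementation:

```python
from collections import defaultdict

# Hypothetical asset graph: nodes are systems, directed edges are
# lateral-movement opportunities (shared credentials, open ports, etc.).
graph = {
    "internet": ["vpn", "mail"],
    "vpn": ["workstation"],
    "mail": ["workstation"],
    "workstation": ["file_server"],
    "file_server": ["db"],
    "db": [],
}

def attack_paths(graph, src, dst, path=None):
    """Enumerate all simple paths from an entry point to a critical asset."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    return [p for nxt in graph.get(src, ())
            if nxt not in path
            for p in attack_paths(graph, nxt, dst, path)]

paths = attack_paths(graph, "internet", "db")

# Count how many attack paths traverse each edge: edges shared by many
# paths are choke points, the highest-value attack vectors to remove.
edge_counts = defaultdict(int)
for p in paths:
    for edge in zip(p, p[1:]):
        edge_counts[edge] += 1

for edge, n in sorted(edge_counts.items(), key=lambda kv: -kv[1]):
    print(edge, n)
```

Here both paths from the internet to the database funnel through the workstation-to-file-server hop, so hardening that single link blocks every enumerated route. Real attack surface management tools operate on far larger graphs and richer edge semantics, but the prioritization logic is the same shape.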

"By hardening the environment, it's a deterrent. That makes it harder for an attacker to get in, and more expensive," said Justin Fier, vice president of tactical risk and response at Darktrace. "That means there's a good chance they might just go on to the next target, because there's lower-hanging fruit there."

Attack emulation

Crucially, rather than just simulating attacks, Darktrace's technology is capable of emulating real-world attacks using a customer's actual data and IT systems, according to the company. That's important because the only way to really know if existing countermeasures would catch an attack is to run a test with the organization's actual people, Eagan said.

"The AI will actually insert itself in a [Microsoft] Teams or email conversation you're having with a relevant party. And it will get the tone right. You cannot discern that it's an attack," she said. "It uses the self-learning AI to create an extremely effective attack on a person so we can test the countermeasures."

Early customers of the Darktrace Prevent products include the City of Las Vegas, which said in a news release that along with monitoring cyber risk, the technology enables continuous penetration testing. A number of customers have cited the ability to run pen tests continually — rather than just the occasional tests that staff can perform — as a key use case for the Prevent technology, Eagan said.

Another vendor that's been at the forefront of deploying AI for cyber defense is Deep Instinct, which has stood out with its focus on preventative security.


With a deep-learning algorithm that aims to mimic how a human brain thinks, Deep Instinct claims its technology can predict attacks before they happen, even for types of attacks that haven't been seen before.

"We are not relying on any human analysis in order to train the machine on what the next attack is going to look like," said Nadav Maman, co-founder and CTO of Deep Instinct. "The solution that you're buying should block the next attack."

Using AI/ML to achieve a more proactive stance in security is "where we need to see more of an industry shift," said Lisa O'Connor, managing director for Accenture Security and head of security R&D at Accenture Labs.

Researchers at Accenture Labs, for instance, have been exploring how virtual replicas known as "digital twins" can be used to reveal potential attack paths. AI/ML can be used to interrogate the digital twin model that's been created; for instance, a security team might ask the model to reveal the next likely path to be used by an attacker, or the fastest path to the most critical data in the IT system, O'Connor said.
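Accenture hasn't detailed how its cyber digital twins are queried, but the "fastest path to the most critical data" question maps naturally onto shortest-path search over a weighted graph, where edge weights approximate attacker effort. The sketch below uses Dijkstra's algorithm on an invented twin; every node name and weight is an assumption for illustration:

```python
import heapq

# Hypothetical digital twin: edge weights approximate attacker effort
# (e.g., exploit difficulty) to move from one system to the next.
twin = {
    "internet":     [("web_app", 2), ("phished_user", 1)],
    "phished_user": [("workstation", 1)],
    "web_app":      [("app_server", 3)],
    "workstation":  [("app_server", 2)],
    "app_server":   [("customer_db", 4)],
    "customer_db":  [],
}

def fastest_path(twin, src, dst):
    """Dijkstra's algorithm: the lowest-effort attacker route to `dst`."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in twin.get(node, ()):
            if nxt not in seen:
                heapq.heappush(heap, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

cost, path = fastest_path(twin, "internet", "customer_db")
print(cost, path)
```

In this toy model the phishing route to the database is cheaper than the web-app route, which is exactly the kind of answer a defender would use to decide where cutting access has the most impact for the least business disruption.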

"From a defensive perspective, it's about, where do I cut off access to have the least business impact?" she said.

Accenture's cyber digital twins technology is ready to pilot now with customers, according to the company. Ultimately, using AI/ML for preventative cyber defense "could be transformative," O'Connor said. "I absolutely think it's coming."

AI exhaustion

It can't be ignored, though, that any new wave of AI/ML for security will have to confront the weariness that many cybersecurity teams have with artificial intelligence.

"For the majority of security professionals, they are sick of hearing it," said Allie Mellen, senior analyst at Forrester.

Even though there are some strong use cases for AI/ML in security, any new advances in the area will face "quite a bit of skepticism and questions on whether it's actually going to be able to add value," Mellen said.

Could this emerging wave of preventative AI/ML for security make any difference in the perception? Mellen said it will depend on how the security tool makers describe what their technology can do, and whether they deliver on their promises.

"I think some vendors are able to communicate what machine learning is doing for them in a way that's trustworthy," she said. Typically, that involves sharing some of the technical details and explaining what's actually happening.

"Without that transparency and clarity, it can be very difficult for practitioners to trust whether or not machine learning is actually being used, or whether it's being used in a fruitful way," Mellen said.

Gartner's Driver agreed that AI/ML for security has suffered from being overhyped and promoted as a silver bullet. (Of course, that’s not something unique to the cybersecurity market, he noted.)

But if security teams do want to be able to move to a more preventative stance, there's no way to do that effectively without using these technologies, Driver said. While a lot of the modeling for attacks is out there today, it can often take weeks to pull together manually, he said.

"The AI is going to give it to us much, much faster, and much more surgically accurate," Driver said.

And it's likely that, in time, security teams will begin to see that it's "many times more valuable" to deal with events before they happen, rather than after, he said. Driver expects that AI-powered security tools for cyberattack prevention will become a "must-have" for businesses within three to five years.

With so many cybersecurity professionals "burnt out and tired of firefighting," the arrival of such tools could make a tangible difference for security teams, Eagan said.

And maybe, they could even help improve the image of AI/ML in the cybersecurity community.

"I do think it has the promise of shifting the way we look at cybersecurity," Eagan said, "and hopefully making it a lot less stressful for the security teams who've been living in the trenches."

Note: Protocol is owned by Axel Springer, in which KKR has a large minority stake; KKR has invested in Darktrace.

