Imagine a cybersecurity catastrophe like this one: A pharmaceuticals maker suffers a data breach, but no data is stolen and no ransomware is deployed. Instead the attacker simply makes a change to some of the data in a clinical trial — ultimately leading the company to release the wrong drug.
It's a hypothetical scenario, for now. Ransomware and the theft of sensitive data remain massive top-of-mind security concerns, of course, but at least there are tools and procedures available to mitigate those issues.
Data tampering represents a different type of threat, one that could prove even more serious for certain organizations, depending on the situation. And yet it's not on the radar for many businesses, experts told Protocol, because few such attacks have occurred and come to light.
But this type of attack is not totally unprecedented. In early 2021, for instance, a hacker who broke into a Florida water treatment plant was able to raise the concentration of sodium hydroxide, or lye, in the water to an unsafe level. (The modification was quickly caught by an operator.)
Will Ackerly, a former NSA security architect who invented a data-protection standard used by U.S. defense and intelligence agencies, is among those who believe that data manipulation is poised to become a burgeoning threat in coming years.
Compared with other threats to data security, the manipulation of data is probably the "most nefarious and hardest to detect," said Ackerly, who is now co-founder and CTO of data security startup Virtru. And on the attacker side of the equation, the fact remains that today, "there are a lot of adversaries looking to trick someone into thinking something that's not true," he said.
Another example is the growing use of deepfake audio and video in cyberattacks. A recent VMware study found that two-thirds of cyber incident responders had investigated attacks involving fabricated audio or video over the past year, up 13% from the year before.
But as jarring as it is, the deepfake phenomenon is just one part of the larger threat that businesses are facing from manipulated data, experts told Protocol.
‘Nightmare scenario’
Lou Steinberg, who was the CTO of TD Ameritrade from 2011 to 2017, said he's spoken with numerous CISOs in industries from financial services to pharmaceuticals who are increasingly worried by the threat of data manipulation attacks, sometimes referred to as attacks on "data integrity."
In another example of this type of attack, a threat actor might corrupt a portion of a public company's data and then publicize this fact, leaving the company unable to close its books at the end of the quarter, said Steinberg, who is now the founder of cybersecurity research lab CTM Insights.
"What happens when you can't trust your own data?" he said. "This is a nightmare scenario."
Experts have warned about such attacks for years. And the fact that few have made headlines suggests they could be harder to pull off than it might seem.
But the fact remains that both the technology and the awareness needed to combat data manipulation threats are not where they need to be, experts said.
Technologies for protecting against data integrity attacks include file integrity monitoring services, which detect file changes and can be combined with logging and backups to guard against such threats from external attackers or malicious insiders, the National Institute of Standards and Technology noted in a 2020 report.
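As a rough illustration of the idea behind file integrity monitoring, here's a minimal sketch that records SHA-256 hashes of known-good files and flags anything that later diverges. The directory and baseline file names are hypothetical, and a production service would add tamper-resistant baseline storage, scheduling and alerting.

```python
import hashlib
import json
from pathlib import Path

BASELINE = Path("baseline_hashes.json")  # hypothetical baseline store

def sha256_of(path: Path) -> str:
    """Hash a file's contents so any modification changes the digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root: Path) -> None:
    """Record a known-good hash for every file under `root`."""
    hashes = {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}
    BASELINE.write_text(json.dumps(hashes, indent=2))

def check_integrity(root: Path) -> list[str]:
    """Return paths whose contents no longer match the baseline."""
    baseline = json.loads(BASELINE.read_text())
    tampered = []
    for path_str, known_hash in baseline.items():
        p = Path(path_str)
        if not p.exists() or sha256_of(p) != known_hash:
            tampered.append(path_str)
    return tampered

if __name__ == "__main__":
    data_dir = Path("clinical_trial_data")  # hypothetical monitored directory
    if not BASELINE.exists():
        build_baseline(data_dir)  # run once against known-good data
    else:
        for suspect in check_integrity(data_dir):
            print(f"ALERT: {suspect} differs from baseline")
```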
But such an approach won't necessarily detect data changes made by someone who appears to be an authorized user, whether because they're using stolen credentials or because they're a malicious insider, Steinberg said.
"What happens when you can't trust your own data? This is a nightmare scenario."
A second issue is speed: modern data is often collected and overwritten so quickly that recovering the untainted version may not be practical, he said. For files that change constantly, "a rollback can create more damage than the attack," Steinberg said.
Most businesses are also preoccupied with other data security issues, such as protecting the confidentiality of their data, said Heidi Shey, a principal analyst at Forrester.
"I think something like data integrity protection is so much further down the list for many people," Shey said. "There's a lot of other priorities that just are louder, and demand more of their attention."
Still, "I'd say it's a topic that is worth companies taking a closer look at," she said. While data manipulation may only constitute a "simmering" threat at this point, "we know that the potential consequences could be pretty major for this type of attack," Shey said.
Believable fakes
The threat isn't limited to changes in data values either: Thanks to the same AI-powered technology that's used to create deepfake videos, researchers say the threat of manipulated images, such as medical scans, is growing as well.
Image fakery is of course nothing new, and in recent years, a number of military disinformation efforts have embraced the tactic. But the strategic insertion of an altered image in place of the original could be much harder to spot.
A study published in 2019 by researchers at Ben-Gurion University found that CT scans manipulated with the help of AI were consistently able to trick radiologists into misdiagnosing lung conditions.
"If you’re not looking for the threat, you pretty much fall for it every time."
"If you’re not looking for the threat, you pretty much fall for it every time," said Yisroel Mirsky, who led the study and is head of the university's Offensive AI Research Lab . The experiments also found that even after the radiologists were told that some images had been faked, they were still fooled 60% of the time.
The research was intended to illustrate a larger threat — that "an attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder," the researchers wrote in their paper on the study.
Notably, deepfake image generation technology has advanced significantly since the study was conducted, Mirsky told Protocol. "Every few months it's getting better — higher resolution, higher fidelity," he said.
Attacks on machine learning
One type of data manipulation attack that has received comparatively more attention is what's known as "adversarial machine learning," in which an attacker attempts to dupe an ML model, for instance by feeding it false data during its training phase, an approach known as data poisoning.
While the motives for doing this can vary, the result is that the ML model won't perform properly. The case of Microsoft's short-lived Twitter chatbot, Tay, is one infamous example of adversarial ML — but there are many documented cases of successful data-poisoning attacks on ML models, both by threat actors and researchers.
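To sketch how training-time poisoning works, the toy example below flips a fraction of the labels in a synthetic dataset before a simple scikit-learn classifier is trained. The data and model are illustrative stand-ins, not any real pipeline, and real poisoning attacks are typically far subtler than random label flipping.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on trustworthy labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: an attacker with write access to the training data
# silently flips 30% of the labels before training runs.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flipped = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flipped] = 1 - poisoned_y[flipped]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print(f"clean model accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"poisoned model accuracy: {poisoned.score(X_test, y_test):.2f}")
```

The degraded accuracy of the second model stands in for the article's point: the pipeline still runs and nothing is "breached" in the conventional sense, yet the model's behavior has been quietly corrupted.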
Those types of attacks usually don’t result in an actual data breach, however. The attackers have instead managed to influence the ML models from the outside. But that doesn't mean that the data stores that inform key ML models don't represent a ripe target for a motivated hacker, said Lisa O'Connor, managing director for Accenture Security and head of security R&D at Accenture Labs.
And given the world's growing reliance on algorithms, adversarial ML threats are a serious concern, O'Connor said. "The stakes are very high for protecting that ecosystem," she said, pointing to efforts such as the MITRE ATLAS initiative that aim to protect against threats to ML models.
The bottom line is that — regardless of the data source in question — it's clear in today's digital threat landscape that "seeing is not believing anymore," said Carey O'Connor Kolaja, CEO at identity verification vendor AU10TIX.
"There's been a shift in how our society is making decisions and the type of information we're making decisions on — whether it's an enterprise or the government or an individual," she said. "And that information can easily be manipulated."