Fintech

Banks watch your every move online. Here’s how that monitoring prevents fraud.

How fast you type or move your mouse can help banks verify your identity. But the idea that financial institutions are monitoring customers this closely has drawn online backlash and regulatory scrutiny.

Banks can detect your mouse movements — and use that information to fight fraud.

Photo: Pickawood/Unsplash

In April, systems at the National Australia Bank watched as a customer tried to raise her account transaction limit from $20,000 to $100,000. She logged in with the right username and password and seemed legit, but recently installed software detected that her behavior was significantly different from previous sessions.

"The way she was using her mouse looked different," Chris Sheehan, a National Australia Bank investigations manager, told Protocol. "The number of clicks on the mouse looked different. Her cutting and pasting details looked different."

The deviations were picked up by the bank's new BioCatch software, which led the bank's anti-fraud team to figure out that the customer was in trouble. She was on her cell phone with a fraudster and was stressed by the account changes he was coaching her to make. The team quickly called her landline to warn her, and she put a stop to things.

The rapid rise of online banking has led financial institutions to embrace behavioral analytics, AI-powered technology designed to flag potential fraud based on a user's actions and even their mannerisms on a website. That trend has accelerated during the pandemic, though the widening adoption of the technology is coinciding with heightened worries about the use of AI in financial services.

This was highlighted last week when insurance company Lemonade faced backlash for bragging about its AI in a Twitter thread — which it deleted and apologized for — suggesting that it had been able to boost profits by using AI to deny customer claims based on "non-verbal cues."

Watching for fraud

A behavioral analytics system records clients' website interactions and uses the data to keep track of them and verify who they are when they're online. Web browsers have long provided information about mouse movements and typing, though few internet users realize they're being surveilled in such fine-grained detail.
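
The collection itself relies on ordinary browser APIs. As a rough illustration, a page script might gather telemetry along these lines; the event listeners and timing calls are standard DOM APIs, while the batching interval and the "/behavior/ingest" endpoint are invented for this sketch:

```typescript
// Illustrative sketch of browser-side behavioral telemetry.
// The DOM APIs are real; the "/behavior/ingest" endpoint is hypothetical.

interface BehaviorEvent {
  kind: "mousemove" | "keydown" | "paste";
  t: number;   // high-resolution timestamp, in milliseconds
  x?: number;  // cursor position, for mouse events
  y?: number;
}

const buffer: BehaviorEvent[] = [];

document.addEventListener("mousemove", (e: MouseEvent) => {
  buffer.push({ kind: "mousemove", t: performance.now(), x: e.clientX, y: e.clientY });
});

// For cadence profiling only the timing matters; the key pressed can be ignored.
document.addEventListener("keydown", () => {
  buffer.push({ kind: "keydown", t: performance.now() });
});

document.addEventListener("paste", () => {
  buffer.push({ kind: "paste", t: performance.now() });
});

// Periodically ship the accumulated events to a scoring service.
setInterval(() => {
  if (buffer.length > 0) {
    navigator.sendBeacon("/behavior/ingest", JSON.stringify(buffer.splice(0)));
  }
}, 5000);
```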

The technology is also used to flag user behavior that indicates an attempt to open fraudulent accounts using stolen identities.

Behavioral analytics provides another security layer on top of other forms of identification, said BioCatch CEO Howard Edelstein. The technology can be an even more powerful security tool at a time when Social Security numbers and other personal information — like your mother's maiden name or what street you grew up on — are themselves vulnerable to hacking and theft.

"Every time they ask for knowledge-based authentications, I kind of laugh because everyone's stolen every piece of information," he told Protocol. "You mean you can't find out your dog's name or whatever? … You can get around deterministic things. You can steal them. Somebody's got it."

But behavior is tougher to steal or replicate with precision, he added: "The machines learn you, and they follow you. If you start changing your behavior too rapidly, it's an alert."

Logan Allin, managing general partner at Fin VC, which is an investor in Neuro-ID, another behavioral analytics startup, echoed this point: "Digital footprints don't change, and imitating or faking human behavior is next to impossible."

Jack Alton, CEO of Neuro-ID, cited examples of legitimate customers who would type in their information, such as their Social Security number, based on "long-term memory." On the other hand, a criminal armed with a list of stolen information would do it differently. "If I stole your credentials, and I tried to log in, even if I got your name and your password right, I'm going to do it at a different cadence," he told Protocol.

Anton Klippmark, a product manager at BehavioSec, said the company's technology could distinguish between a legitimate customer and a fraudster based on the user's familiarity with a specific task. A legitimate user would type a name and Social Security number with ease but might take a bit more time filling out a bank routing number. A fraudster working with stolen information "can be the exact opposite."

"You're very familiar with the form, and with things most people don't know, like a routing number," he told Protocol. "That's stuff that a lot of fraudsters or criminals would know very well because that's their working environment."

Edelstein of BioCatch said the company's technology can even make an educated guess about a user's age based on how fast the person types, clicks on a mouse and moves the cursor on the screen. "Every year that you age, you slow down by about 12 to 15 milliseconds," he said. "If you're a 55-year-old card owner and you're typing like a 28- to 34-year-old, that's a fraud flag."
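
Taken at face value, Edelstein's figure implies simple arithmetic: a 25-year age gap corresponds to roughly 300 to 375 milliseconds of typing latency. The calibration point and tolerance in this sketch are assumptions made up for the example; BioCatch hasn't published its model:

```typescript
// Rough illustration of the age heuristic Edelstein describes. The
// 13.5 ms/year slope is the midpoint of his quoted 12-15 ms range;
// the calibration point and tolerance are invented for this example.

const SLOWDOWN_PER_YEAR_MS = 13.5;
const TOLERANCE_YEARS = 10;

// Estimate apparent age from a measured latency, relative to a known
// (referenceAge, referenceLatencyMs) calibration point.
function apparentAge(latencyMs: number, referenceAge: number, referenceLatencyMs: number): number {
  return referenceAge + (latencyMs - referenceLatencyMs) / SLOWDOWN_PER_YEAR_MS;
}

function ageMismatch(claimedAge: number, estimatedAge: number): boolean {
  return Math.abs(claimedAge - estimatedAge) > TOLERANCE_YEARS;
}

// By this arithmetic, a 55-year-old cardholder typing ~300 ms faster than
// age peers looks roughly 22 years younger -- the gap Edelstein flags.
console.log(ageMismatch(55, apparentAge(200, 55, 500))); // true
```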

The BioCatch system also uses "invisible challenges." For example, the software collects data on how a legitimate customer would typically move a mouse whenever the cursor disappears. When there's suspicion of fraud, the software would intentionally make the cursor vanish and verify if the user is legit based on the person's reaction. "You might curse Microsoft for making the cursor disappear, but it's more likely than not [that it's] not Microsoft to begin with," Edelstein said, laughing.
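
BioCatch doesn't disclose how its invisible challenges are implemented, but the browser side of a cursor-vanishing test could be wired up with standard DOM calls, as in this hypothetical sketch; comparing the recorded reaction against the user's stored profile is left out:

```typescript
// Illustrative "invisible challenge": briefly hide the cursor and record
// how the user reacts. The scoring step against a stored reaction profile
// is out of scope here; BioCatch's actual mechanism is not public.

type CursorSample = { t: number; x: number; y: number };

function runCursorChallenge(durationMs: number): Promise<CursorSample[]> {
  return new Promise((resolve) => {
    const reaction: CursorSample[] = [];
    const onMove = (e: MouseEvent) =>
      reaction.push({ t: performance.now(), x: e.clientX, y: e.clientY });

    document.body.style.cursor = "none"; // make the cursor vanish
    document.addEventListener("mousemove", onMove);

    setTimeout(() => {
      document.body.style.cursor = ""; // restore the cursor
      document.removeEventListener("mousemove", onMove);
      resolve(reaction); // hand the reaction trace to the scoring side
    }, durationMs);
  });
}
```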

Pandemic push

The benefits of behavioral analytics came into sharp focus during the pandemic, when banks were forced to close branches and conduct business mainly online, and fraud cases spiked dramatically. In the United States alone, more than 1.4 million cases of identity theft were reported in 2020, roughly double the 2019 figure, according to the Federal Trade Commission.

But the growing adoption of behavioral analytics is also happening at a time of mounting concern about the use of AI and big data technology in financial services. In April, five U.S. agencies, led by the Federal Reserve Board and the Consumer Financial Protection Bureau, announced that they were soliciting insights into how banks and tech companies use AI.

Privacy is a major worry. Companies like BioCatch, Neuro-ID and BehavioSec are required to comply with strict privacy laws in the U.S., Europe and other jurisdictions.

Edelstein of BioCatch said the company does not record detailed information about a customer: "We get a hash ID from a bank, and we basically make sure that whenever that hash ID is online, that his behavior is consistent with that ID."

Sheehan of the National Australia Bank said its customers always "know what we're doing," adding that privacy "is a legitimate concern and it's something we have to always pay attention to." The bank's privacy policy mentions collecting "information about how you interact with us when you use internet or mobile banking (such as information about how you use your devices)" but doesn't describe the detailed tracking that allowed it to catch the fraudster in the April case.

Guarding against bias

Bias in AI systems is another concern, underlined by the uproar over Lemonade's description of its automated claims process in which customers are required to "record a video on their phone and explain what happened."

"Our AI carefully analyzes these videos for signs of fraud" and is able to "pick up non-verbal cues that traditional insurers can't, since they don't use a digital claims process," the company wrote in a now-deleted tweet. This system "ultimately helps us lower our loss ratios (aka how much we pay out in claims vs. how much we take in) and our overall operating costs," it added.

Responding to the backlash, Lemonade subsequently acknowledged that "a poorly worded tweet of ours (mostly the term 'non-verbal cues') led to confusion." The company stressed: "We do not use, and we're not trying to build, AI that uses physical or personal features to deny claims."

Thomas Lee, an assistant professor at University of California, Berkeley's Haas School of Business, said using "tools to monitor and measure human behavior" is a widely accepted practice in designing software products and services. But "we need to be judicious" about using such technologies for identifying and authenticating individuals, he said, especially when it involves "high-stakes decision-making."

Sheehan of the National Australia Bank said that behavioral analytics is a compelling tool in banking at a time when fraud is increasingly conducted by sophisticated, fast-moving global operations. Huge amounts of money can now be stolen from banks in a matter of hours, and traditional anti-fraud tactics are often too slow to stop it.

"While we might detect the scam, the money has often already left," Sheehan said.

Ray Wang, an analyst with Constellation Research, said behavioral analytics can also give banks and fintechs an edge by providing "decision velocity" — for example, by flagging fraud in real time "if I suddenly use my credit card for venues I normally don't use."

BioCatch has attracted the attention of top investors, including Bain Capital, which put in $100 million of the company's $145 million Series C round last April. The company has raised a total of $215 million.

The company's software is being used at more than 50 major global banks and financial institutions, including Barclays, HSBC and American Express. BioCatch "has led to more robust fraud detection during the online application process in the U.S.," according to an American Express spokesperson.

Sheehan said BioCatch, whose software the bank began implementing in stages in late 2019, has helped the National Australia Bank save roughly $450,000 a month that it otherwise "would have lost to fraud."

The costs of using BioCatch can range from "a few hundred thousand dollars for a smaller bank to the low seven figures for a larger bank" that's using more capabilities, a BioCatch spokesman said.

The case of the National Australia Bank customer underscores the speed and brazenness of today's bank fraud operations, Sheehan said. The fraudster had managed to deceive the client into thinking that he was helping her resolve problems with her internet service and online banking connection.

"She was being coached and she was unsure about what she was being asked to do, and it was causing her some agitation," Sheehan said.

Even so, it was a close call. The BioCatch system assigns every online session a score from zero to 1,000, with anything over 900 flagged as "a high-risk event," Sheehan said. The customer's session scored 910. When a team member reached her on her landline, the bank rep could hear her talking to the fraudster on her cell phone, Sheehan said.

"He was telling her to ignore us and that she can't trust the bank," he said. "Fortunately, she did trust us."
