February 10, 2022
Synthetic fraud, item-not-received fraud and Telegram fraud are on the list, members of Protocol's Braintrust say.
Good afternoon! The world of fraud is a constantly evolving game of cat and mouse, and the tactics of bad actors are getting more sophisticated. With today's Braintrust, we asked the experts to zero in on the type of fraud they thought should be getting more attention. Questions or comments? Send us a note at email@example.com
Executive vice president & general manager of Identity, Fraud & DataLabs at Experian
We’ve seen a tremendous number of fake accounts being created across industries over the past year. With the rush to digital, fraudsters have hidden in the crowds of new accounts, using stolen or synthetic IDs.
The challenge is that, unless these are credit products (and most are not), there won’t be a loss or ID theft reported on these accounts. But the fraudsters use them to move money around. Essentially, they are proactively creating “non-threatening” accounts that may sit dormant for a while, until the fraudsters execute their takedown and cash-out schemes. These may be DDA (checking/savings) accounts, crypto, fintech or sharing-economy accounts; anything that might be connected to a payment instrument or bank account can be a useful tool that lurks dormant, waiting to be used.
The other thing to be aware of is what we believe will be the next wave of attacks on existing accounts. All those online accounts that legitimate users set up (some of whom aren’t comfortable online, but were forced online during COVID-19) will become targets as fraudsters shift from creating new accounts to taking over the accounts consumers already have.
All this drives the need for a comprehensive set of solutions that can tie together the full consumer journey for fraud risk mitigation, providing businesses with consistent visibility from the onboarding process through to every consumer interaction. In doing so, businesses can improve the customer experience at every moment of the journey, securing them and delighting them along the way.
CTO at Riskified
As ecommerce has boomed over the last two years, refund abuse has soared. With merchants easing their return policies to streamline the at-home shopping experience, fraudsters have gone from gaming refund policies to operating sophisticated scams, including false item-not-received (INR) claims and empty-box returns. Merchants are feeling the heat — the costs of tackling refund claims have doubled in the last two years and will only continue to grow.
The dilemma facing executives is that tightening refund policies risks angering loyal customers. In fact, a generous refund policy is an important offering in order to maintain customer loyalty. With shipping carriers overwhelmed, repeat customers will likely encounter an instance where their package is damaged or goes missing. In those cases, issuing a swift refund is the best resolution for the problem and will strengthen customer loyalty in the long term. Riskified estimates that 90% of refunds are legitimate, but that still leaves 10% of false claims that cause significant loss for merchants.
The key for executives is to have a fraud prevention solution that disincentivizes the worst abuses of store policy without causing unnecessary friction for customers. This is where machine learning can help. ML-based solutions can analyze transaction details to identify the individual behind each one with a high degree of accuracy. Retailers can then enforce their policies in a way that makes sense for them — discouraging or imposing penalties on serial abusers, while reducing friction to attract and retain customers. With these measures in place, merchants can reap the benefits of generous store policies without losing out to their worst abuses.
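The score-then-enforce loop described above can be sketched in a few lines. This is a deliberately simplified illustration, not Riskified's system: the features, weights and thresholds below are invented stand-ins for what a trained ML model would learn from real transaction data.

```python
from dataclasses import dataclass

@dataclass
class RefundClaim:
    account_age_days: int     # how long the claiming account has existed
    claims_last_90d: int      # prior refund claims in the past 90 days
    claim_value_usd: float    # value of the current claim
    delivery_confirmed: bool  # carrier confirmed delivery (suspicious for an INR claim)

def abuse_score(c: RefundClaim) -> float:
    """Return a 0..1 risk score; higher means more likely policy abuse.

    The weights are illustrative placeholders, not a real model.
    """
    score = 0.0
    if c.account_age_days < 30:
        score += 0.3                          # brand-new accounts are riskier
    score += min(c.claims_last_90d, 5) * 0.1  # serial claimants accumulate risk
    if c.delivery_confirmed:
        score += 0.2                          # "item not received" despite confirmed delivery
    if c.claim_value_usd > 500:
        score += 0.1
    return min(score, 1.0)

def decide(c: RefundClaim, review_threshold: float = 0.5) -> str:
    """Auto-refund low-risk claims; route likely abuse to manual review."""
    return "manual_review" if abuse_score(c) >= review_threshold else "auto_refund"
```

The threshold embodies the tradeoff in the text: the large majority of legitimate claims get a swift, frictionless refund, while only repeat or implausible claimants hit the penalty path.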
President & CEO at Sift
The recent Dark Web crackdowns have caused fraudsters to turn to the Deep Web — a part of the internet not indexed by search engines — to do their dirty work. Specifically, secure messaging apps like Telegram have emerged as a haven for criminals to wreak havoc and turn a profit. The apps’ privacy-focused features and strong encryption allow fraudsters to remain undetected, and cybercriminals are increasingly turning to these messaging forums to promote “fraud as a service” capabilities.
Over the past 18 months, we’ve seen a notable increase in fraudsters using Telegram to execute fraud schemes targeting high-growth markets, such as "buy now, pay later" or food delivery services. In fact, earlier this year we saw a rapid proliferation of scammers advertising fake credit cards and compromised credentials to commit BNPL scams, such as signing up for fraudulent BNPL accounts to make illegitimate purchases. The challenge for executives is that it’s nearly impossible to shut down this type of fraud on messaging apps themselves. To effectively detect and prevent fraud at scale, companies should leverage machine learning to stop the suspicious activity and fraud attacks before they occur, all while helping legitimate customers fly through the checkout process.
Head of Credit & Fraud at Nearside
Fraudsters have historically exploited America’s bank-to-bank payment transfer network, but the scheme of moving money from one account to another and withdrawing the same funds while the transfer is still in progress is becoming increasingly prevalent. This is due to two major reasons: 1) online banking has become more common and widespread and 2) financial institutions are speeding up money movement and settlement for a better customer experience. Faster settlement means more windows for fraudsters to exploit. Because of this, fraudsters have a renewed interest in defrauding fintechs and, specifically, the bank-to-bank payment transfer network.
Another area of focus is the continual increase in automated fraud attacks. The velocity of fake and fraudulent accounts being created is faster than ever. These attacks are not only exponentially growing, but are also becoming more sophisticated. Tactics are rapidly evolving, and fraud signals are becoming harder to detect. Manual processes and outdated models are insufficient in mitigating this rising tide of automated fraud attacks.
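One concrete signal behind that velocity is how many signups share a fingerprint (an IP address, device ID or phone number) inside a sliding time window. Here is a minimal sketch of such a check in Python; the window size and threshold are illustrative, and real systems combine many signals like this with trained models rather than relying on one rule.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class VelocityMonitor:
    """Flag bursts of account signups that share a fingerprint."""

    def __init__(self, window_seconds: int = 3600, threshold: int = 20):
        self.window = window_seconds
        self.threshold = threshold
        self.events = defaultdict(deque)  # fingerprint -> recent signup timestamps

    def record_signup(self, fingerprint: str, now: Optional[float] = None) -> bool:
        """Record one signup; return True if this fingerprint exceeds the rate."""
        now = time.time() if now is None else now
        q = self.events[fingerprint]
        q.append(now)
        while q and now - q[0] > self.window:  # evict timestamps outside the window
            q.popleft()
        return len(q) > self.threshold
```

Because eviction and append are both O(1) deque operations, a check like this can run inline on every signup without adding friction for legitimate users.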
At the end of the day, fraud is constantly morphing and new typologies are emerging on a daily basis. What works today may not work tomorrow. It is imperative that we 1) leverage technology and data when possible, 2) build systems and models that are agile and flexible and 3) stay up to date on emerging fraud trends by connecting and sharing data/insights with industry leaders to continue preventing and managing fraud risk.
Carey O'Connor Kolaja
CEO at AU10TIX
Executives are not paying enough attention to the fraud they can’t see. Up to 95% of synthetic fraud goes undetected by regular fraud models, because these actors behave and look like regular customers. They are after two things: credentials and personal information. Synthetic fraud attacks have significant consequences, and they’re not limited to adults. Cybercriminals are also going after students and children.
Think about a child: their name, address and Social Security number are in the system. It's legitimate information. If an education system is breached, attackers can use the stolen information to create IDs and apply for credit cards. There's no credit history for these children, so the attackers build one, and once a history is established the ID becomes an excellent synthetic identity that can be used for other things.
The mismanagement of sensitive identity data leaves us increasingly vulnerable to fraud, and the identity-sharing mechanisms currently in place fall short of the protection we need. Third-party organizations often retain the digital credentials that people share with them rather than using them for verification and then discarding them. That means every time you share a piece of yourself, you enable others to capture that information and possibly misuse it one day.
Responsible identity management includes using only the personal data necessary to authenticate and authorize an individual or entity, and no more. It's a deceptively difficult problem.
CEO at Cognito (Now part of Plaid)
Synthetic identity fraud, where fraudsters use a mix of real and fake PII to obtain credit and defraud a financial institution. While most executives are aware of the problem, there’s a lack of industrywide best practices to combat this issue as more and more consumers are bringing additional aspects of their lives online. What’s more, it’s entirely possible that synthetic identity fraud is happening undetected.
The Federal Reserve cited losses at $6 billion per year, or roughly 20% of credit losses — and that was back in 2016. The number has almost certainly grown since then, though the data isn’t particularly reliable. This is in part because financial institutions don’t have a consistent way of reporting these losses — some count them as third-party fraud, and others as credit losses. It is difficult to assess the full extent of the synthetic ID fraud problem without consistent methods of reporting it.
The good news is that the technology to minimize the risk of synthetic identity fraud exists and takes only a few steps to deploy. Executives can sometimes be reluctant to introduce any additional friction into the sign-up process, but these days we can lean on neural networks and other advanced technologies to detect fraud without burdening new users. We can, for example, verify the authenticity of identity data, identity documents and the user’s liveness, often in less than a minute.
Better fraud detection — and safer consumer experiences — are a win-win for everyone.
Simon Marchand, CFE, C.Adm.
Chief fraud prevention officer at Nuance Communications
Executives have a decent sense of most external threats to their enterprises. They do not, however, always have a grasp on internal ones. As more workplaces adopt hybrid or remote models, there is heightened risk of employees being targeted by fraudsters who manipulate them into facilitating cyberattacks. With social disruption also comes increased risk of employees committing fraud against their employer: unsupervised, employees have opportunities they didn’t have working from an office, and some are under increased financial pressure brought on by the pandemic. These two factors contribute significantly to the increased risk of occupational fraud.
To help prevent this, employers must ensure they have appropriate safeguards in place that protect sensitive information. It’s important to remember that there are internal as well as external risks for an organization to protect against. For example, biometrics can be used to accurately authenticate account holders at a telco company and ensure that they are the only ones able to access their accounts. By authenticating people based on who they are, rather than what they know, organizations can protect against fraud in any channel, regardless of any new tactics fraudsters might use. Such technologies reduce the risk of social engineering and can be used to protect agents against impersonations of IT personnel, for example.
CEO at the Merchant Risk Council
Executives are not paying enough attention to account takeovers. Many companies have deployed two-factor authentication, a good deterrent against account takeover, but have done very little to encourage their user base to actually enable this stronger form of authentication.
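For reference, the stronger form of authentication most commonly offered is a time-based one-time password (TOTP) per RFC 6238, the scheme behind most authenticator apps. It can be implemented with nothing but the standard library; this sketch uses the common defaults of HMAC-SHA-1 and 30-second time steps.

```python
import base64
import hashlib
import hmac
import struct
import time
from typing import Optional

def totp(secret_b32: str, now: Optional[float] = None,
         digits: int = 6, period: int = 30) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of whole time steps since the Unix epoch.
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test secret (the ASCII string "12345678901234567890", base32-encoded), this reproduces the published 8-digit vector "94287082" at T=59 seconds.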
Account takeover is growing again, especially for online accounts.
Chief analytics officer at FICO
Fintechs have been the darlings of venture capitalists and now enjoy mainstream status among consumers of all ages. Caught up in this whirlwind growth, some executives — at fintechs and companies that accept fintech payments — have not paid enough attention to fraud rates, which are skyrocketing. Increased fraud is the collateral damage of fintechs’ drive to dominate the “alternative” payment space through aggressive customer acquisition. For example, PayPal recently reported that 4.5 million of its newer accounts were fake, with fraudsters taking advantage of its $5 and $10 account opening incentives at a massive scale.
While the rise in alternative payments is understood and is inevitable, fintechs’ focus on growing their customer base and payments volumes can result in weaker fraud measures than those used to protect traditional card transactions.
Identity fraud is on the rise and fraudsters are infinitely creative at finding ways to exploit payment systems and people. Banks, fintechs and companies that accept alternative payments need to ramp up their fraud defenses, acknowledging rising fraud rates and incorporating advanced unsupervised AI models designed to detect new fraudulent payments and scams.
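As a toy illustration of the unsupervised idea (learning what "normal" looks like from the data itself and flagging deviations, with no labeled fraud examples), here is a z-score outlier check on transaction amounts. Production models score many behavioral features rather than a single amount, and the 3-sigma threshold here is just the conventional rule of thumb.

```python
import statistics

def flag_anomalies(amounts: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of transactions whose amount is a statistical outlier.

    Unsupervised in the minimal sense: it models the population (mean and
    standard deviation) instead of learning from labeled fraud cases.
    """
    mean = statistics.fmean(amounts)
    sd = statistics.pstdev(amounts)
    if sd == 0:
        return []  # no variation, so nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / sd > z_threshold]
```

A $10,000 payment in a stream of $10 transactions is flagged immediately; fifty identical payments produce no alerts, which is what lets legitimate customers pass through unbothered.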
Mary Ann Miller
VP of Client Experience and Cybercrime and Fraud Executive Adviser at Prove
Digital identity is an area that affects both the public and private sectors and is not receiving the attention it should. Executives have a hard time believing that the inflow of new account applications is not from the recent marketing campaign but is, in fact, an army of bad actors setting up new accounts to wreak havoc in their organization. Fraud leaders need a voice at the table and in that conversation. The question "What is our risk appetite for identity theft?" requires a new mindset: "Are we doing all we can to detect and prevent identity theft?"
I also think that the CEO of each company needs to spend an hour a month with their head of fraud, not to catch up on fraud strategy initiatives but to ask, "What are your concerns and how can I support you?"
Fraud never gets attention until a large loss event occurs, and then it becomes an emotional conversation. Fraud leaders need direct C-level influence so those events are not a surprise. Too many fraud leaders have the "I told you so" T-shirts; that is a direct result of failing to execute on fraud investment and strategy until it's too late. Fraud leaders want to leave their thumbprint on an organization, and when they are not truly supported, it only hurts the business, customers and consumers.
Kevin McAllister ( @k__mcallister) is a Research Editor at Protocol, leading the development of Braintrust. Prior to joining the team, he was a rankings data reporter at The Wall Street Journal, where he oversaw structured data projects for the Journal's strategy team.