Enterprise

Should just anyone be given keys to the AI machine? Why low-code AI tools pose new risks.

The low-code trend has come to AI, but skeptics worry that gifting amateurs with Easy-Bake Ovens for machine-learning models is a recipe for disaster.


“No code. No joke.”

This is the promise made by enterprise AI company C3 AI in splashy web ads for its Ex Machina software. Its competitor Dataiku says its own low-code and no-code software “elevates” business experts to use AI. DataRobot calls customers using its no-code software to make AI-based apps “AI heroes.”

They’re among a growing group of tech companies declaring that the days of elitist AI are over. They say with software that requires little to no coding at all, even the lowly marketing associate — now the “citizen data scientist” — has the power to create and use data-fueled machine-learning algorithms. This, they say, is “democratizing AI.”

Low- and no-code AI tools rely on visual interfaces with drag-and-drop functions and drop-down menus for building machine-learning models. They can serve a variety of everyday business needs: reducing time spent on repetitive, manual data-input tasks, generating invoices, predicting inventory demand or monitoring for equipment failure. Business executives on analytics, operations or marketing teams at banks, retailers or energy companies use low- and no-code AI software to determine the likelihood of credit card fraud or reduce the number of customers switching to another service. Sometimes these AI tools simply automate processes that in the past would have required more manual labor using spreadsheets.

But even people who see the value in these tools worry that gifting amateurs with Easy-Bake AI Ovens is a recipe for risk.

The same things that make low- and no-code AI so appealing can pose problems, said Anthony Seraphim, vice president of Data Governance at Texas Mutual, who oversees data use inside the workers' compensation insurance company, including ensuring colleagues use the most appropriate data to produce accurate analytics reports.

“The good thing is, it creates a lot of flexibility and speed, but the bad thing is, it creates a lot of flexibility and speed,” he said. Business users “need some form of guardrails without slowing them down.”

IT and data security teams also should be aware of who’s using these technologies and how, said Michael Bargury, chief technology officer and co-founder of Zenity, a company that helps IT teams monitor use of applications by business users that might create data security risks. He said data privacy breaches can occur if non-experts connect data sets that should not be linked and use them to train models without safeguards in place.

“The business side wants to accelerate low-code/no-code, and IT and security feel like they’re losing control,” Bargury said.

Some AI practitioners themselves are leery of an onslaught of AI made by people who lack knowledge of standard processes for debugging as well as testing for quality control and reliability. They worry that people who aren’t trained on the nuances of how machine learning works could unintentionally unleash AI that makes discriminatory decisions. And some argue that low- and no-code AI tools do not produce the level of detail necessary to explain how those models decide in the first place.

Nitzan Mekel-Bobrov, eBay’s chief artificial intelligence officer, already has begun to plan for these new dangers. For now, he told Protocol, eBay allows the use of low- and no-code AI tools for what he calls “low-risk” purposes, “where there’s not really an opportunity for bias or privacy issues, etc., no fraud or cyber issues.” But the company will proceed with caution.

“We have to be very careful as we do this because we need to understand what’s being put into production in front of our customers, and as you scale that up, you need all of the instrumentation in place to be able to continuously monitor,” he said.

New AI users, new data worries

At a time when data scientists are difficult to find, companies providing low- and no-code AI say their systems fill a gap, allowing businesspeople to take advantage of AI without the need to hire highly sought-after and expensive data scientists.

“The low-code/no-code trend makes sense,” in part because “there is a talent shortage,” said Kasey Uhlenhuth, senior product manager at Databricks, where she works on its AutoML tools and machine-learning models. Databricks bought no-code machine learning company 8080 Labs in October with a plan to integrate its capabilities with its existing AutoML tools so “anyone with just a very basic understanding of data science can train advanced models on their datasets,” according to a company statement.

Making AI more user-friendly also widens the pool of customers AI tech providers can serve. SparkBeyond, which provides a platform for building machine-learning models and finding patterns in data, has found that “about 50%” of its customers using its tools in the past five years did not have data science-related roles, said Ed Janvrin, general manager of the company’s Discovery Platform business unit. The company wanted to create software that helped people who do not know typical machine-learning coding languages like Python or R use and build machine-learning models. “We wanted to expand our user base,” he said.

Many low- and no-code AI tools provide pre-made models that people can train and feed with whatever data sources they choose. That worries Matt Tarascio, the senior vice president leading Booz Allen Hamilton’s analytics and AI business in support of the U.S. Department of Defense.

“If you’re using low-code, no-code, you don’t really have a good sense of the quality of the ingredients coming in, and you don’t have a sense of the quality of the output either,” he said. While low- and no-code software have value for use in training or experimentation, “I just wouldn’t apply it in subject areas where the accuracy is paramount,” he said.

Because well-performing and accurate AI models depend on high-quality data, Seraphim said he wants to help ensure that when businesspeople at Texas Mutual use low- and no-code tools to create machine-learning models to help inform decisions, they do so with the appropriate data.

However, data restrictions are not always in place, or business teams might circumvent IT teams that are intended to protect against inappropriate data use, Bargury said. “They’re connecting two sources of data with AI in the middle, which makes it extremely difficult for a security professional to understand what’s going on,” he said, noting that business teams might not want IT involved at all when they procure or use low-code AI tools. “It’s not something that spins out of IT, and people don’t want to bring in somebody that they assume will make it slow.”

AI tools that don’t require code can also obscure important information about the data that feeds models once they’re in use, said a data scientist who requested anonymity because they did not have their employer's permission to speak on the record.

For example, if a data supply delivered through an API is cut off, an automated process might take over and replace those missing data values; this could alter the way the model was intended to operate, potentially producing faulty decisions based on the wrong information. If an automated system obscures the fact that a data feed is broken, and automatically fills data gaps, the data scientist said, “You are just blinded by that veil of no-coding from the uglier stuff. That can affect performance of your model.”
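The failure mode the data scientist describes can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's actual pipeline: all the function names here are made up to show how naive auto-imputation hides a broken feed from everything downstream.

```python
# Hypothetical sketch: a data feed breaks, automated imputation silently
# fills the gap, and the model keeps scoring as if nothing happened.

def fetch_feature_from_api():
    """Stand-in for an external data feed; returns None when the feed is cut off."""
    return None  # simulate the broken API

def impute(value, fallback=0.0):
    """Naive auto-imputation: quietly substitute a default for missing values."""
    return fallback if value is None else value

def score(feature):
    """Stand-in for a trained model's scoring function."""
    return 0.1 + 0.8 * feature

raw = fetch_feature_from_api()
filled = impute(raw)        # the break in the feed is now invisible downstream
prediction = score(filled)  # the model scores a fabricated value

# A safer pipeline surfaces the gap instead of hiding it:
alert = "data feed missing; prediction based on imputed value" if raw is None else None
```

The design point is the last line: the model still runs, but someone is told the input was fabricated rather than observed.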

Trained AI practitioners also argue that low- and no-code AI tools produce models that are not adequately transparent about how they make decisions. “My worry with low-code and no-code platforms is that they hide all the details about model building from the practitioner and will most likely generate black-box AI systems,” said Krishna Gade, founder and CEO of Fiddler, which provides an AI monitoring platform.

The model transparency argument

Not so, say makers of low- and no-code AI software, many of whom contend that these tools actually produce AI models that are more transparent than the ones built manually by experienced AI engineers. Some systems automatically generate and archive the corresponding code that’s producing what non-coders see, for instance creating code that represents every click in a drop-down menu visualized in a user interface.

The automated machine-learning models built using Databricks software “are generating exactly the training code that a data scientist would have written to get the model,” said Uhlenhuth, who added that the code produced in digital environments called “notebooks” shows the steps taken to produce results, and includes information showing how important features are to models when making decisions.
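As a rough illustration of what such generated, inspectable training code can look like (a generic scikit-learn sketch on synthetic data, not Databricks' actual notebook output), the archived steps typically include the training call and a feature-importance readout:

```python
# Generic sketch of the kind of training code an AutoML tool might generate
# and archive in a notebook: fit a model, then record which input features
# carried the most weight in its decisions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a business dataset with four input features.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Feature importances: one non-negative score per input, summing to 1.
importances = dict(zip(["f0", "f1", "f2", "f3"], model.feature_importances_))
```

Because the generated code is ordinary training code, anyone who can read it can audit which steps were taken and which inputs dominated the model's decisions.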

AutoML software from Databricks also alerts developers when it detects “class imbalance,” a skew in the training data that could create discriminatory harms or drag down model accuracy. “Eventually we might be adding some kind of knob that says, ‘Hey, only use model types that are more explainable,’ and then it will restrict the set of machine-learning algorithms that are run,” Uhlenhuth said.
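A minimal version of a class-imbalance check (a generic sketch, not Databricks' implementation) just counts labels and flags training sets where one class dominates:

```python
from collections import Counter

def check_class_imbalance(labels, threshold=0.8):
    """Return the majority class's share of labels and whether it exceeds the threshold."""
    counts = Counter(labels)
    majority_share = max(counts.values()) / len(labels)
    return majority_share, majority_share > threshold

# 90% of these labels are "legit": a model that always predicts "legit"
# scores ~90% accuracy while missing every fraud case.
share, imbalanced = check_class_imbalance(["legit"] * 90 + ["fraud"] * 10)
```

This is also why headline accuracy numbers can mislead a non-expert: on skewed data, a useless model can look impressively accurate.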

Ed Abbo, president and chief technology officer at C3 AI, said things have changed since older low- and no-code tools produced “black box” models. C3 AI’s tools provide information that shows why a machine-learning model makes a particular prediction, such as when a model predicts that a piece of equipment is likely to fail. The system provides metrics to help users interpret and understand machine-learning results, notifying them if they’re using invalid data and, like Databricks, showing which features carry the most weight when the model makes predictions.

In some ways, the code automatically generated by low- and no-code AI might actually provide more illuminating information about how models were built than what data scientists typically create, said the data scientist who asked to remain anonymous. Often people building models from scratch do not show their work, they said, adding that typically, “You put your model up in the cloud without documenting the training parameters.”

Still, simply showing the code does not explain how models work, the data scientist said: “I would be careful with the model transparency argument.”

“Remember, the training code of the model is not the model code. It will only tell the parameters like the number of layers in a neural network, feature engineering, etc. The model itself still remains a black box,” said Gade in an email. “It is hard to know how the model will make a prediction and that creates mistrust in how to use it and how to assure customers the AI products are making the right decisions.”

What happens to a model after it is deployed also requires special attention, said eBay’s Mekel-Bobrov. “As we allow teams across the company to use no-code or low-code, and any kind of AI development, we need to have the right requirements and processes in place for ongoing monitoring,” he said.

Setting parameters

Google Cloud’s AppSheet, a no-code platform for building applications that can help to automate business processes like automatically generating invoices or sending customer service emails, provides information about model accuracy but does not generate code showing how machine-learning models are built using the system, said Peter Dykstra, a Google product manager.

Though AppSheet does not expose how its models are built, it does allow users to define who can access models or apps or specific data flowing through them. “If a solution is created from some particular training data with some type of [personally-identifiable information] in it, then they can ensure only certain users can access it,” Dykstra said.

C3 AI’s system also lets users set parameters for data access. “As I log in as a citizen data scientist, there are objects and services that I can use and see, and there are others that I can’t because I shouldn’t,” said Abbo.

Despite these precautionary measures, Fiddler’s Gade said low- and no-code AI tools in the wrong hands might lead to misuse. “If the practitioners are knowledgeable, they could take the models produced by the no-code platforms and stress test them thoroughly and monitor them to make sure they are working well. But given the easiness of these platforms where people can upload a CSV and generate a model with 90% accuracy, it might give this superpower to less knowledgeable folks who could misuse it accidentally,” Gade said.

C3 AI aims to educate so-called citizen data scientists to alleviate those concerns. The company has published training materials and offers a 30-day training and certification program for its no-code AI software. “It still requires education on the concept of what AI and machine learning are, and what you can do with it and what you can’t do with it,” said Abbo.

Andrew Ng, a well-known machine-learning researcher whose startup Landing AI helps manufacturers train customized AI models using its no- and low-code tools, recognizes the risks of handing people the keys to AI without education. Still, he cautioned against shutting non-coders out of AI’s benefits. “Letting more people use AI to democratize access, that seems like a great thing,” he said, but added, “It’s critical that empowering comes with appropriate guidance and norms.”
