Should just anyone be given keys to the AI machine? Why low-code AI tools pose new risks.

The low-code trend has come to AI, but skeptics worry that gifting amateurs with Easy-Bake Ovens for machine-learning models is a recipe for disaster.

The same things that make low- and no-code AI so appealing can pose problems.

Image: Boris SV/Moment/Getty Images

“No code. No joke.”

This is the promise made by enterprise AI company C3 AI in splashy web ads for its Ex Machina software. Its competitor Dataiku says its own low-code and no-code software “elevates” business experts to use AI. DataRobot calls customers using its no-code software to make AI-based apps “AI heroes.”

They’re among a growing group of tech companies declaring that the days of elitist AI are over. They say with software that requires little to no coding at all, even the lowly marketing associate — now the “citizen data scientist” — has the power to create and use data-fueled machine-learning algorithms. This, they say, is “democratizing AI.”

Low- and no-code AI tools rely on visual interfaces with drag-and-drop functions and drop-down menus for building machine-learning models. They can serve a variety of everyday business needs by reducing time spent performing repetitive, manual data-input tasks, generating invoices, predicting inventory demand or watching out for equipment failure. Business executives in analytics, operations or marketing teams at banks, retailers or energy companies use low- and no-code AI software to determine the likelihood of credit card fraud or reduce the number of customers switching to another service. Sometimes these AI tools simply automate processes that in the past would have required more manual labor using spreadsheets.

But even people who see the value in these tools worry that gifting amateurs with Easy-Bake AI Ovens is a recipe for risk.

The same things that make low- and no-code AI so appealing can pose problems, said Anthony Seraphim, vice president of Data Governance at Texas Mutual, who oversees data use inside the workers' compensation insurance company, including ensuring colleagues use the most appropriate data to produce accurate analytics reports.

“The good thing is, it creates a lot of flexibility and speed, but the bad thing is, it creates a lot of flexibility and speed,” he said. Business users “need some form of guardrails without slowing them down.”

IT and data security teams also should be aware of who’s using these technologies and how, said Michael Bargury, chief technology officer and co-founder of Zenity, a company that helps IT teams monitor use of applications by business users that might create data security risks. He said data privacy breaches can occur if non-experts connect data sets that should not be linked and use them to train models without safeguards in place.

“The business side wants to accelerate low-code/no-code, and IT and security feel like they’re losing control,” Bargury said.

Some AI practitioners themselves are leery of an onslaught of AI made by people who lack knowledge of standard processes for debugging, as well as testing for quality control and reliability. They worry that people who aren’t trained on the nuances of how machine learning works could unintentionally unleash AI that makes discriminatory decisions. And some argue that low- and no-code AI tools do not produce the level of detail necessary to explain how those models make decisions in the first place.

Nitzan Mekel-Bobrov, eBay’s chief artificial intelligence officer, already has begun to plan for these new dangers. For now, he told Protocol, eBay allows the use of low- and no-code AI tools for what he calls “low-risk” purposes, “where there’s not really an opportunity for bias or privacy issues, etc., no fraud or cyber issues.” But the company will proceed with caution.

“We have to be very careful as we do this because we need to understand what’s being put into production in front of our customers, and as you scale that up, you need all of the instrumentation in place to be able to continuously monitor,” he said.

New AI users, new data worries

At a time when data scientists are difficult to find, companies providing low- and no-code AI say their systems fill a gap, allowing businesspeople to take advantage of AI without the need to hire highly sought-after and expensive data scientists.

“The low-code/no-code trend makes sense,” in part because “there is a talent shortage,” said Kasey Uhlenhuth, senior product manager at Databricks, where she helps train and build its AutoML tools and machine-learning models. Databricks bought no-code machine learning company 8080 Labs in October with a plan to integrate its capabilities with its existing AutoML tools so “anyone with just a very basic understanding of data science can train advanced models on their datasets,” according to a company statement.

Making AI more user-friendly also widens the pool of customers AI tech providers can serve. SparkBeyond, which provides a platform for building machine-learning models and finding patterns in data, has found that “about 50%” of its customers using its tools in the past five years did not have data science-related roles, said Ed Janvrin, general manager of the company’s Discovery Platform business unit. The company wanted to create software that helped people who do not know typical machine-learning coding languages like Python or R use and build machine-learning models. “We wanted to expand our user base,” he said.

Many low- and no-code AI tools provide pre-made models that people can train and feed with whatever data sources they choose. That worries Matt Tarascio, the senior vice president leading Booz Allen Hamilton’s analytics and AI business in support of the U.S. Department of Defense.

“If you’re using low-code, no-code, you don’t really have a good sense of the quality of the ingredients coming in, and you don’t have a sense of the quality of the output either,” he said. While low- and no-code software has value for use in training or experimentation, “I just wouldn’t apply it in subject areas where the accuracy is paramount,” he said.

Because well-performing and accurate AI models depend on high-quality data, Seraphim said he wants to help ensure that when businesspeople at Texas Mutual use low- and no-code tools to create machine-learning models to help inform decisions, they do so with the appropriate data.

However, data restrictions are not always in place, or business teams might circumvent IT teams that are intended to protect against inappropriate data use, Bargury said. “They’re connecting two sources of data with AI in the middle, which makes it extremely difficult for a security professional to understand what’s going on,” he said, noting that business teams might not want IT involved at all when they procure or use low-code AI tools. “It’s not something that spins out of IT, and people don’t want to bring in somebody that they assume will make it slow.”

AI tools that don’t require code can also obscure important information about the data that feeds models once they’re in use, said a data scientist who requested anonymity because they did not have their employer's permission to speak on the record.

For example, if a data supply delivered through an API is cut off, an automated process might take over and replace those missing data values; this could alter the way the model was intended to operate, potentially producing faulty decisions based on the wrong information. If an automated system obscures the fact that a data feed is broken, and automatically fills data gaps, the data scientist said, “You are just blinded by that veil of no-coding from the uglier stuff. That can affect performance of your model.”
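The failure mode the data scientist describes can be sketched in a few lines. This is a purely hypothetical toy, not any vendor's pipeline: the "API" is simulated, and the function names are invented for illustration.

```python
# Hypothetical sketch of a pipeline that silently substitutes a default
# value when its data feed breaks, so the model keeps scoring on numbers
# it was never meant to see. All names here are illustrative.

def fetch_feature_from_api():
    """Stand-in for a live data feed; returns None to simulate an outage."""
    return None

def get_feature(default=0.0, log=None):
    value = fetch_feature_from_api()
    if value is None:
        # An automated fallback fills the gap. Unless the outage is logged
        # and surfaced, nobody learns the feed is broken -- the "veil of
        # no-coding" hides the substitution from the model's users.
        if log is not None:
            log.append("data feed missing; default substituted")
        return default
    return value

outage_log = []
feature = get_feature(log=outage_log)
print(feature, outage_log)
```

The point of the `log` parameter is the safeguard the anonymous data scientist says no-code tools can omit: the substitution itself is often harmless, but doing it invisibly is what degrades the model.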

Trained AI practitioners also argue that low- and no-code AI tools produce models that are not adequately transparent about how they make decisions. “My worry with low-code and no-code platforms is that they hide all the details about model building from the practitioner and will most likely generate black-box AI systems,” said Krishna Gade, founder and CEO of Fiddler, which provides an AI monitoring platform.

The model transparency argument

Not so, say makers of low- and no-code AI software, many of whom contend that these tools actually produce AI models that are more transparent than the ones built manually by experienced AI engineers. Some systems automatically generate and archive the corresponding code that’s producing what non-coders see, for instance creating code that represents every click in a drop-down menu visualized in a user interface.
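As a rough illustration of that click-to-code archiving, a tool might record each drop-down selection and emit the equivalent training script as text. This sketch is entirely hypothetical (no vendor's actual mechanism); the option names and the generated scikit-learn snippet are invented for the example.

```python
# Illustrative sketch: a no-code tool records UI selections and generates
# the equivalent training code as an auditable artifact. Hypothetical only.
selections = {"algorithm": "random_forest", "target": "churned", "max_depth": 5}

def generate_training_code(opts):
    """Emit a training script (as a string) matching the user's UI clicks."""
    return (
        "from sklearn.ensemble import RandomForestClassifier\n"
        f"model = RandomForestClassifier(max_depth={opts['max_depth']})\n"
        f"model.fit(X, y)  # y = df['{opts['target']}']\n"
    )

print(generate_training_code(selections))
```

The generated script is never executed here; the idea is that archiving it gives reviewers a record of exactly what each click produced.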

The automated machine-learning models built using Databricks software “are generating exactly the training code that a data scientist would have written to get the model,” said Uhlenhuth, who added that the code produced in digital environments called “notebooks” shows the steps taken to produce results, and includes information showing how important features are to models when making decisions.

AutoML software from Databricks also alerts developers when the system detects “class imbalance,” where skewed training data might create discriminatory harms or negatively affect model accuracy. “Eventually we might be adding some kind of knob that says, ‘Hey, only use model types that are more explainable, and then it will restrict the set of machine-learning algorithms that are run,’” Uhlenhuth said.
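A class-imbalance check of the kind Uhlenhuth describes can be as simple as comparing label counts. This is an illustration of the general technique, not Databricks' implementation; the threshold and data are invented.

```python
# Illustrative class-imbalance check: compare minority to majority counts.
from collections import Counter

def class_imbalance_ratio(labels):
    counts = Counter(labels)
    # 1.0 means perfectly balanced; values near 0 mean severe imbalance.
    return min(counts.values()) / max(counts.values())

# Example: fraud labels are rare relative to legitimate transactions.
labels = ["no_fraud"] * 980 + ["fraud"] * 20
ratio = class_imbalance_ratio(labels)
if ratio < 0.1:  # threshold chosen arbitrarily for the example
    print(f"Warning: class imbalance detected (minority/majority = {ratio:.3f})")
```

An automated tool would surface this warning before training, since a model trained on such data can reach high accuracy simply by never predicting the rare class.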

Ed Abbo, president and chief technology officer at C3 AI, said things have changed since older low- and no-code tools produced “black box” models. C3 AI’s tools provide information that shows why a machine-learning model makes a particular prediction, such as when a model predicts that a piece of equipment is likely to fail. The system provides metrics to help users interpret and understand machine-learning results, notifying them if they’re using invalid data and, like Databricks, showing which features carry the most weight when the model makes predictions.

Ed Abbo, president and chief technology officer at C3 AI. Photo: C3 AI

In some ways, the code automatically generated by low- and no-code AI might actually provide more illuminating information about how models were built than what data scientists typically create, said the data scientist who asked to remain anonymous. Often people building models from scratch do not show their work, they said, adding that typically, “You put your model up in the cloud without documenting the training parameters.”

Still, simply showing the code does not explain how models work, the data scientist said: “I would be careful with the model transparency argument.”

“Remember, the training code of the model is not the model code. It will only tell the parameters like the number of layers in a neural network, feature engineering, etc. The model itself still remains a black box,” said Gade in an email. “It is hard to know how the model will make a prediction and that creates mistrust in how to use it and how to assure customers the AI products are making the right decisions.”

What happens to a model after it is deployed also requires special attention, said eBay’s Mekel-Bobrov. “As we allow teams across the company to use no-code or low-code, and any kind of AI development, we need to have the right requirements and processes in place for ongoing monitoring,” he said.

Setting parameters

Google Cloud’s AppSheet, a no-code platform for building applications that can help to automate business processes like automatically generating invoices or sending customer service emails, provides information about model accuracy but does not generate code showing how machine-learning models are built using the system, said Peter Dykstra, a Google product manager.

While explaining how no- and low-code models work is important, AppSheet does allow users to define who can access models, apps or specific data flowing through them. “If a solution is created from some particular training data with some type of [personally-identifiable information] in it, then they can ensure only certain users can access it,” Dykstra said.
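A minimal sketch of the per-dataset access rule Dykstra describes might tag each dataset with whether it contains PII and which roles may touch it. The dataset names, roles and dictionary layout below are hypothetical, not AppSheet's API.

```python
# Hypothetical per-dataset access control: datasets tagged as containing
# PII are restricted to a narrower set of roles. Names are illustrative.
datasets = {
    "invoices":  {"contains_pii": False, "allowed_roles": {"analyst", "admin"}},
    "customers": {"contains_pii": True,  "allowed_roles": {"admin"}},
}

def can_access(role, dataset_name):
    """Return True if the given role may read the named dataset."""
    return role in datasets[dataset_name]["allowed_roles"]

print(can_access("analyst", "invoices"))   # the analyst can see invoices
print(can_access("analyst", "customers"))  # but not the PII-tagged data
```

The same idea extends to models built on restricted data: a model trained on the "customers" table would inherit its narrower role list.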

C3 AI’s system also lets users set parameters for data access. “As I log in as a citizen data scientist, there are objects and services that I can use and see, and there are others that I can’t because I shouldn’t,” said Abbo.

Despite these precautionary measures, Fiddler’s Gade said low- and no-code AI tools in the wrong hands might lead to misuse. “If the practitioners are knowledgeable, they could take the models produced by the no-code platforms and stress test them thoroughly and monitor them to make sure they are working well. But given the easiness of these platforms where people can upload a CSV and generate a model with 90% accuracy, it might give this superpower to less knowledgeable folks who could misuse it accidentally,” Gade said.

C3 AI aims to educate so-called citizen data scientists to alleviate those concerns. The company has published training materials and offers a 30-day training and certification program for its no-code AI software. “It still requires education on the concept of what AI and machine learning are, and what you can do with it and what you can’t do with it,” said Abbo.

Andrew Ng, a well-known machine-learning researcher whose startup Landing AI helps manufacturers train customized AI models using its no- and low-code tools, recognizes the risks of handing people the keys to AI without education. As might be expected, he warned against preventing non-coders from enjoying the benefits of AI. “Letting more people use AI to democratize access, that seems like a great thing,” he said, but added, “It’s critical that empowering comes with appropriate guidance and norms.”

