“No code. No joke.”
This is the promise made by enterprise AI company C3 AI in splashy web ads for its Ex Machina software. Its competitor Dataiku says its own low-code and no-code software “elevates” business experts to use AI. DataRobot calls customers using its no-code software to make AI-based apps “AI heroes.”
They’re among a growing group of tech companies declaring that the days of elitist AI are over. They say that with software requiring little to no coding, even the lowly marketing associate — now the “citizen data scientist” — has the power to create and use data-fueled machine-learning algorithms. This, they say, is “democratizing AI.”
Low- and no-code AI tools rely on visual interfaces with drag-and-drop functions and drop-down menus for building machine-learning models. They can serve a variety of everyday business needs by reducing time spent performing repetitive, manual data-input tasks, generating invoices, predicting inventory demand or watching out for equipment failure. Business executives in analytics, operations or marketing teams at banks, retailers or energy companies use low- and no-code AI software to determine the likelihood of credit card fraud or reduce the number of customers switching to another service. Sometimes these AI tools simply automate processes that in the past would have required more manual labor using spreadsheets.
But even people who see the value in these tools worry that handing amateurs Easy-Bake AI Ovens is a recipe for risk.
The same things that make low- and no-code AI so appealing can pose problems, said Anthony Seraphim, vice president of Data Governance at Texas Mutual, who oversees data use inside the workers' compensation insurance company, including ensuring colleagues use the most appropriate data to produce accurate analytics reports.
“The good thing is, it creates a lot of flexibility and speed, but the bad thing is, it creates a lot of flexibility and speed,” he said. Business users “need some form of guardrails without slowing them down.”
IT and data security teams also should be aware of who’s using these technologies and how, said Michael Bargury, chief technology officer and co-founder of Zenity, a company that helps IT teams monitor use of applications by business users that might create data security risks. He said data privacy breaches can occur if non-experts connect data sets that should not be linked and use them to train models without safeguards in place.
“The business side wants to accelerate low-code/no-code, and IT and security feel like they’re losing control,” Bargury said.
Some AI practitioners themselves are leery of an onslaught of AI made by people who lack knowledge of standard processes for debugging, quality-control testing and reliability. They worry that people who aren’t trained in the nuances of how machine learning works could unintentionally unleash AI that makes discriminatory decisions. And some argue that low- and no-code AI tools do not produce the level of detail necessary to explain how those models make decisions in the first place.
Nitzan Mekel-Bobrov, eBay’s chief artificial intelligence officer, already has begun to plan for these new dangers. For now, he told Protocol, eBay allows the use of low- and no-code AI tools for what he calls “low-risk” purposes, “where there’s not really an opportunity for bias or privacy issues, etc., no fraud or cyber issues.” But the company will proceed with caution.
“We have to be very careful as we do this because we need to understand what’s being put into production in front of our customers, and as you scale that up, you need all of the instrumentation in place to be able to continuously monitor,” he said.
New AI users, new data worries
At a time when data scientists are difficult to find, companies providing low- and no-code AI say their systems fill a gap, allowing businesspeople to take advantage of AI without the need to hire highly sought-after and expensive data scientists.
“The low-code/no-code trend makes sense,” in part because “there is a talent shortage,” said Kasey Uhlenhuth, senior product manager at Databricks, where she helps build its AutoML tools for training machine-learning models. Databricks bought no-code machine learning company 8080 Labs in October with a plan to integrate its capabilities with its existing AutoML tools so “anyone with just a very basic understanding of data science can train advanced models on their datasets,” according to a company statement.
Making AI more user-friendly also widens the pool of customers AI tech providers can serve. SparkBeyond, which provides a platform for building machine-learning models and finding patterns in data, has found that “about 50%” of its customers using its tools in the past five years did not have data science-related roles, said Ed Janvrin, general manager of the company’s Discovery Platform business unit. The company wanted to create software that helped people who do not know typical machine-learning coding languages like Python or R use and build machine-learning models. “We wanted to expand our user base,” he said.
Many low- and no-code AI tools provide pre-made models that people can train and feed with whatever data sources they choose. That worries Matt Tarascio, the senior vice president leading Booz Allen Hamilton’s analytics and AI business in support of the U.S. Department of Defense.
“If you’re using low-code, no-code, you don’t really have a good sense of the quality of the ingredients coming in, and you don’t have a sense of the quality of the output either,” he said. While low- and no-code software have value for use in training or experimentation, “I just wouldn’t apply it in subject areas where the accuracy is paramount,” he said.
Because well-performing and accurate AI models depend on high-quality data, Seraphim said he wants to help ensure that when businesspeople at Texas Mutual use low- and no-code tools to create machine-learning models to help inform decisions, they do so with the appropriate data.
However, data restrictions are not always in place, or business teams might circumvent IT teams that are intended to protect against inappropriate data use, Bargury said. “They’re connecting two sources of data with AI in the middle, which makes it extremely difficult for a security professional to understand what’s going on,” he said, noting that business teams might not want IT involved at all when they procure or use low-code AI tools. “It’s not something that spins out of IT, and people don’t want to bring in somebody that they assume will make it slow.”
AI tools that don’t require code can also obscure important information about the data that feeds models once they’re in use, said a data scientist who requested anonymity because they did not have their employer's permission to speak on the record.
For example, if a data supply delivered through an API is cut off, an automated process might take over and replace those missing data values; this could alter the way the model was intended to operate, potentially producing faulty decisions based on the wrong information. If an automated system obscures the fact that a data feed is broken, and automatically fills data gaps, the data scientist said, “You are just blinded by that veil of no-coding from the uglier stuff. That can affect performance of your model.”
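A minimal sketch of that failure mode, assuming a generic pandas and scikit-learn pipeline (the feature names, the fraud-style labels and the mean-fill rule here are all invented for illustration):

```python
# Illustrative only: how silent imputation can mask a dead data feed.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Train a simple fraud-style model on two features.
train = pd.DataFrame({
    "amount": rng.normal(100, 30, 1000),
    "risk_score": rng.normal(0.5, 0.2, 1000),
})
labels = (train["amount"] * train["risk_score"] > 60).astype(int)
model = LogisticRegression().fit(train, labels)

# In production, the API supplying risk_score silently goes down.
live = pd.DataFrame({
    "amount": rng.normal(100, 30, 5),
    "risk_score": [np.nan] * 5,   # every value is now missing
})

# An automated pipeline quietly fills the gap with the training mean.
live_filled = live.fillna({"risk_score": 0.5})

# The model still returns confident predictions; nothing signals that
# one of its two inputs is no longer real data.
print(model.predict_proba(live_filled))
```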
Trained AI practitioners also argue that low- and no-code AI tools produce models that are not adequately transparent about how they make decisions. “My worry with low-code and no-code platforms is that they hide all the details about model building from the practitioner and will most likely generate black-box AI systems,” said Krishna Gade, founder and CEO of Fiddler, which provides an AI monitoring platform.
The model transparency argument
Not so, say makers of low- and no-code AI software, many of whom contend that these tools actually produce AI models that are more transparent than ones built manually by experienced AI engineers. Some systems automatically generate and archive the code behind what non-coders see, for instance producing the code that corresponds to each click in a drop-down menu in the user interface.
The automated machine-learning models built using Databricks software “are generating exactly the training code that a data scientist would have written to get the model,” said Uhlenhuth, who added that the code produced in digital environments called “notebooks” shows the steps taken to produce results, and includes information showing how important features are to models when making decisions.
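For illustration, auto-generated training code of the kind Uhlenhuth describes might resemble the short scikit-learn script below. This is a generic sketch, not Databricks’s actual notebook output, and the synthetic dataset stands in for whatever data a user selects through the interface.

```python
# A generic sketch of auto-generated AutoML training code (not vendor output).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for the dataset a business user would pick in the UI.
X, y = make_classification(n_samples=500, n_features=4, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Generated notebooks typically report holdout performance and per-feature
# importance, the transparency signals Uhlenhuth mentions.
print("holdout accuracy:", model.score(X_test, y_test))
print("feature importances:", model.feature_importances_)
```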
AutoML software from Databricks also alerts developers when the system detects “class imbalance,” a skew in the training data that might create discriminatory harms or degrade model accuracy. “Eventually we might be adding some kind of knob that says, ‘Hey, only use model types that are more explainable,’ and then it will restrict the set of machine-learning algorithms that are run,” Uhlenhuth said.
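The check behind such an alert is straightforward; a minimal sketch of one (an assumption about the general technique, not Databricks’s implementation) might look like this:

```python
# Flag any label class that is badly underrepresented in the training data.
import pandas as pd

def warn_on_class_imbalance(labels: pd.Series, threshold: float = 0.1) -> None:
    for cls, share in labels.value_counts(normalize=True).items():
        if share < threshold:
            print(f"WARNING: class {cls!r} is only {share:.1%} of the data; "
                  "the model may underperform on it or learn a skewed rule.")

# 'fraud' makes up 5% of these labels, so the warning fires.
warn_on_class_imbalance(pd.Series(["ok"] * 950 + ["fraud"] * 50))
```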
Ed Abbo, president and chief technology officer at C3 AI, said things have changed since older low- and no-code tools produced “black box” models. C3 AI’s tools provide information that shows why a machine-learning model makes a particular prediction, such as when a model predicts that a piece of equipment is likely to fail. The system provides metrics to help users interpret and understand machine-learning results, notifying them if they’re using invalid data and, like Databricks, showing which features carry the most weight when the model makes predictions.
In some ways, the code automatically generated by low- and no-code AI might actually provide more illuminating information about how models were built than what data scientists typically create, said the data scientist who asked to remain anonymous. Often people building models from scratch do not show their work, they said, adding that typically, “You put your model up in the cloud without documenting the training parameters.”
Still, simply showing the code does not explain how models work, the data scientist said: “I would be careful with the model transparency argument.”
“Remember, the training code of the model is not the model code. It will only tell the parameters like the number of layers in a neural network, feature engineering, etc. The model itself still remains a black box,” said Gade in an email. “It is hard to know how the model will make a prediction and that creates mistrust in how to use it and how to assure customers the AI products are making the right decisions.”
What happens to a model after it is deployed also requires special attention, said eBay’s Mekel-Bobrov. “As we allow teams across the company to use no-code or low-code, and any kind of AI development, we need to have the right requirements and processes in place for ongoing monitoring,” he said.
Setting parameters
Google Cloud’s AppSheet, a no-code platform for building applications that automate business processes like generating invoices or sending customer service emails, provides information about model accuracy but does not generate code showing how machine-learning models are built with the system, said Peter Dykstra, a Google product manager.
While explaining how no- and low-code models work is important, AppSheet does let users define who can access models, apps or specific data flowing through them. “If a solution is created from some particular training data with some type of [personally-identifiable information] in it, then they can ensure only certain users can access it,” Dykstra said.
C3 AI’s system also lets users set parameters for data access. “As I log in as a citizen data scientist, there are objects and services that I can use and see, and there are others that I can’t because I shouldn’t,” said Abbo.
Despite these precautionary measures, Fiddler’s Gade said low- and no-code AI tools in the wrong hands might lead to misuse. “If the practitioners are knowledgeable, they could take the models produced by the no-code platforms and stress test them thoroughly and monitor them to make sure they are working well. But given the easiness of these platforms where people can upload a CSV and generate a model with 90% accuracy, it might give this superpower to less knowledgeable folks who could misuse it accidentally,” Gade said.
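Gade’s “90% accuracy” trap is easy to demonstrate. In this toy sketch (invented for illustration, not any vendor’s output), a model that never predicts fraud still scores 90% accuracy on imbalanced data while catching zero fraud cases:

```python
# Why headline accuracy can mislead on imbalanced data.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

X = np.zeros((1000, 3))               # features are irrelevant to this model
y = np.array([0] * 900 + [1] * 100)   # 10% positive (e.g., fraud) cases

clf = DummyClassifier(strategy="most_frequent").fit(X, y)
preds = clf.predict(X)

print("accuracy:", accuracy_score(y, preds))  # 0.9, looks impressive
print("recall:", recall_score(y, preds))      # 0.0, misses every fraud case
```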
C3 AI aims to educate so-called citizen data scientists to alleviate those concerns. The company has published training materials and offers a 30-day training and certification program for its no-code AI software. “It still requires education on the concept of what AI and machine learning are, and what you can do with it and what you can’t do with it,” said Abbo.
Andrew Ng, a well-known machine-learning researcher whose startup Landing AI helps manufacturers train customized AI models using its no- and low-code tools, recognizes the risks of handing people the keys to AI without education. Still, he warned against shutting non-coders out of AI’s benefits. “Letting more people use AI to democratize access, that seems like a great thing,” he said, but added, “It’s critical that empowering comes with appropriate guidance and norms.”