To the uninitiated, a tiny stain on several yards of car seat upholstery or a minuscule gas bubble on the surface of an industrial oil pipe might seem like an insignificant imperfection. But factory inspectors are always on the lookout for these sorts of defects, because they can create serious slowdowns in time-sensitive manufacturing production schedules.
Cameras and computer vision software have been used to spot product flaws in manufacturing facilities for decades, but today companies including Landing AI and Mariner are helping take defect detection to the next level with AI software. Rather than offering off-the-shelf AI, these companies are betting that manufacturers want highly customized algorithmic models to monitor for product defects.
And they have another selling point, one that flies in the face of what we know about most big, data-hungry AI systems: Their models work using very small datasets.
Manufacturers like Stanley Black & Decker, which makes industrial equipment for construction and infrastructure such as forklift attachments and oil pipes, use Landing AI’s system to perform inspection tasks that can be difficult for humans to do well. To identify defects such as imperfect welding surfaces inside a giant pipe, for instance, the company has relied on humans scrutinizing X-ray images on darkroom monitors, said Amir Kashani, director of AI and digital products for the company’s industrial division.
“You’re looking at tea leaves of cloudy shapes,” Kashani said. AI can spot those sorts of obscured defects more readily, he said.
IBM, Microsoft and AWS all offer AI-based tech to help manufacturers detect product flaws. Ford Motor Company, for example, said in December that it planned to expand its use of IBM’s Maximo Visual Inspection platform, which the automaker has used to find defects at multiple inspection stations across several manufacturing plants.
There are many other players offering AI-centric defect detection technologies for manufacturers and industrial clients, too, including Aetina, Kitov.ai, Omron, Vinsa and Cognex — a 40-year-old provider of factory cameras and machine vision technologies. Some sell AI-based visual inspection systems tailored to specific sectors; Uster, for example, sells cameras and automated defect detection software to the textile manufacturing market.
But companies like Landing AI and Mariner argue that the pre-built machine-learning models offered by some of those companies don’t provide the level of control or customization manufacturers want. Landing AI CEO Andrew Ng, a machine-learning guru who founded Google Brain’s deep-learning engineering team in 2011, co-founded online education provider Coursera and led Baidu’s AI group from 2014 to 2017, said he expects the manufacturing industry will need “tens of thousands” of unique AI models, one for each distinct type of product and imperfection.
That’s going to take a lot of work. Landing AI raised $57 million in Series A funding in November, and the company is set to announce Wednesday that it has hired David Dechow as its vice president of Outreach and Vision Technology to strengthen its machine vision capabilities.
Machine-learning models that aren’t built for a specific company’s needs can cause new problems, said David Dewhirst, vice president of marketing at Mariner, which sells visual inspection hardware and image-processing software that attaches to factory camera systems.
He said he’s seen systems built on models that weren’t trained with company-specific visual data mistakenly flag a harmless speck of lint, for example, while looking for real defects like stains. False positives can trigger unnecessary production-line slowdowns, which cost time and money. Through its partnership with Microsoft Azure, Mariner provides its Spyglass Visual Inspection product to vehicle fabric maker Sage Automotive, which uses the software to build models trained on its own company-specific image data, according to Dewhirst.
IBM does include pre-trained models with its Maximo inspection product, but also lets customers fine-tune their accuracy by ingesting and learning from new data associated with specific use cases, said Rishi Vaish, chief technology officer of IBM’s AI Applications division. “Detecting whether one of five bolts is on a wheel assembly is a very specific scenario, and you want to customize a model for that,” he said.
The problem with big data AI for flaw detection
When the manufacturing process is going smoothly, product anomalies don’t happen often, and that actually creates a dilemma for manufacturers that want to use AI to spot them.
The deep-learning algorithms that have come to dominate many of the technologies consumers and businesspeople interact with today are trained and improved by ingesting huge quantities of data. But because product defects show up so rarely, most manufacturers don’t have millions, thousands or even hundreds of examples of a particular type of flaw they need to watch out for. In some cases, they might only have 20 or 30 photos of a windshield chip or small pipe fracture, for example.
This is why companies such as Mariner and Landing AI provide software that can produce defect-detecting models from relatively small datasets. For manufacturers or industrial customers working with international IT systems integrator Capgemini Technologies, that might mean as few as 40 images.
Providing thousands of images to train a model to detect a break in a piece of equipment or a surface-coating imperfection is “not feasible,” said Marcos Fernandez, director of Engineering Consulting at Capgemini. The company recently began using Landing AI’s LandingLens software to build custom defect-detection models for clients including Portuguese electric utility EDP, which is using it to detect broken parts on electric tower equipment. “It works with a few images and that’s very important for our clients,” said Fernandez.
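Neither Landing AI nor Mariner has published its training pipeline, but a common way to get a usable model from a few dozen photos is transfer learning: take a network pretrained on a large general-purpose image dataset and retrain only its final classification layer on the customer’s own images. Here is a minimal PyTorch sketch of that idea; the folder layout, class names and hyperparameters are illustrative, not any vendor’s actual setup.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained ResNet-18 on a
# small "good" vs. "defect" image set. Paths and hyperparameters are
# illustrative only, not any vendor's actual pipeline.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: weld_images/good/*.jpg, weld_images/defect/*.jpg
dataset = datasets.ImageFolder("weld_images", transform=transform)
loader = DataLoader(dataset, batch_size=8, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():               # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new two-class head: good/defect

optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                        # a few epochs is often enough
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the pretrained backbone is what makes a few dozen images workable: only the small final layer has to be learned from the customer’s own photos.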
“Data is food for AI,” said Ng, who emphasizes data quality over quantity when it comes to training deep-learning models for Landing AI’s customers, which also include electric vehicle battery maker QuantumScape.
“It’s not just about massive numbers of calories; it’s about good food,” he said. As an influential AI engineer, Ng recognizes that he has turned 180 degrees from where he stood a decade ago, when he evangelized pumping giant datasets into neural networks. Now that the foundational coding work of neural networks is done, he said, “the thing that took us a while to figure out is how to engineer the data to feed to that code.” The secret, he said, “is making sure that there’s 50 really good images.”
Amazon says companies using its automated Lookout for Vision machine-learning service for quality inspection “can get started with as few as 30 images,” provided lighting, image zoom, focal point and alignment stay constant. However, its product documentation states, “If you already have a high number of labeled images, we recommend training a model with as many images as you have available.”
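For teams already on AWS, a trained Lookout for Vision model is queried through the standard SDK. The sketch below scores a single image with boto3’s DetectAnomalies call; the project name, model version and file path are placeholders, and the model version is assumed to have been trained and started beforehand.

```python
# Score one image against a trained Amazon Lookout for Vision model.
# Project name, model version and image path are placeholders; the model
# version must already be trained and started (hosted) before this call.
import boto3

client = boto3.client("lookoutvision", region_name="us-east-1")

with open("part_under_test.jpg", "rb") as image_file:
    response = client.detect_anomalies(
        ProjectName="weld-inspection",   # hypothetical project name
        ModelVersion="1",
        ContentType="image/jpeg",
        Body=image_file.read(),
    )

result = response["DetectAnomalyResult"]
print("Anomalous:", result["IsAnomalous"], "confidence:", result["Confidence"])
```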
Finding ground truth in ambiguity
For AI, data quality goes hand in hand with data labeling, or annotation. When the data consists of product defect photos, part of the annotation process involves drawing a box, known in the AI world as a “bounding box,” around the defect in the image and then categorizing it. In keeping with the low-code AI software trend, companies like IBM, Landing AI and Mariner all provide tools that let quality managers or inspectors do that labeling work themselves, rather than requiring engineers or software developers to do it.
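Whatever the tool, a bounding-box label usually reduces to a small record: the pixel coordinates of the box plus the category the inspector assigned. An illustrative example in Python follows; the field names and defect category are hypothetical, not IBM’s, Landing AI’s or Mariner’s schema.

```python
# Illustrative bounding-box annotation for one defect in one image.
# Field names and the category label are hypothetical, not any vendor's schema.
annotation = {
    "image": "pipe_weld_0042.png",
    "defects": [
        {
            "category": "slag_inclusion",   # what the inspector called it
            "bbox": [312, 540, 64, 48],     # [x, y, width, height] in pixels
            "labeled_by": "inspector_a",
        }
    ],
}
```

Labeling tools generate records along these lines behind the scenes as an inspector draws boxes on screen.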
But while one subject-matter expert in one country might refer to a steel welding defect as “slag,” a colleague in a different locale, or even just at another company, might call it something else. And even if they agree on what to call a certain sort of imperfection, they might argue over how to define it, or over whether a defect is present at all.
“There’s no ground truth,” said Kashani. “Two experts might not agree.”
Because labeling inconsistencies can trip up deep-learning models, Landing AI aims to alleviate the confusion. The company’s software has features that help isolate inconsistencies and assist teams of inspectors in coming to agreement on taxonomy.
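Landing AI doesn’t detail how those consistency checks work, but the underlying idea can be illustrated simply: compare two inspectors’ labels for the same image and flag it for review when their boxes barely overlap, their categories differ, or one inspector marked a defect the other missed. Below is a sketch using intersection-over-union, a standard measure of box overlap; the threshold and function names are illustrative, not Landing AI’s implementation.

```python
# Flag images where two inspectors' labels for the same image disagree.
# Boxes are [x, y, width, height]; the 0.5 threshold is illustrative, and the
# zip() pairing assumes both inspectors listed defects in the same order.
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    x1, y1 = max(ax, bx), max(ay, by)                      # overlap rectangle
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def needs_review(labels_a, labels_b, threshold=0.5):
    """True if two inspectors' labels for one image disagree."""
    if len(labels_a) != len(labels_b):
        return True  # one inspector saw a defect the other didn't
    return any(
        iou(a["bbox"], b["bbox"]) < threshold or a["category"] != b["category"]
        for a, b in zip(labels_a, labels_b)
    )
```

Flagged images could then go back to the inspection team to hash out a shared definition before the model is trained on the labels.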
“The inconsistencies in labels are pervasive,” said Ng. “A lot of these problems are fundamentally ambiguous.”