One of the most impactful, but under-discussed, uses of AI will be in creating an efficient, just-in-time, transparent supply chain. Are you asleep yet? I hope not.
We can imagine the most obvious benefits to businesses and consumers: if demand is accurately predicted, business owners of every size — from locally owned companies to big-box stores — can understand where their products are and how to get them to us cheaper and faster. However, there are other, more broadly beneficial, implications of an AI-enabled supply chain that you likely haven't thought about. Here are two:
Human traffickers often take advantage of organizational silos and inefficiencies in tracking goods and financial records as a way to hide their illegal business. A positive impact of an AI-enabled transparent supply chain is in identifying human trafficking via financial fraud detection in bills of sale, anomaly detection in shipping and transport, or by maintaining a system to track and store data from RFID tags on goods.
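The anomaly detection mentioned above can be sketched with off-the-shelf tools. Here is a minimal, hypothetical example using scikit-learn's IsolationForest to flag shipping records whose weight, transit time, or declared value looks implausible; the feature names and all the data are invented for illustration, not drawn from any real system.

```python
# Minimal sketch: flagging anomalous shipping records with an isolation forest.
# All features (weight, transit days, declared value) and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" shipments: weight (kg), transit time (days), declared value (USD).
normal = rng.normal(loc=[500, 7, 20_000], scale=[50, 1, 2_000], size=(200, 3))
# Two suspicious records: an implausible value and an implausible weight/route.
suspicious = np.array([[500, 7, 500_000], [5, 60, 90_000]])
records = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.02, random_state=0).fit(records)
labels = model.predict(records)  # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print("flagged record indices:", flagged)
```

In practice the flagged records would go to a human investigator rather than trigger automatic action — the model only surfaces candidates for review.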
One of the most discussed issues of our time is climate change. Did you know that nearly 1.3 billion tons of food simply rot on store shelves due to poor infrastructure, and that the U.N. identifies this waste as one of the top sustainability problems worldwide? Better predictive models to determine demand, combined with tracking technologies, can reduce that waste.
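The demand-prediction idea can be made concrete with even the simplest model. Below is a hedged sketch: a linear trend fit (via NumPy) over a store's past weekly sales, used to forecast next week's demand so orders match what will actually sell. The sales figures are made up for illustration; real forecasting models are far richer, but the principle is the same.

```python
# Minimal sketch: forecast next week's demand with a least-squares trend line.
# The weekly sales figures are invented illustrative data.
import numpy as np

weeks = np.arange(10)  # the past 10 weeks
units_sold = np.array([120, 118, 125, 130, 128, 135, 140, 138, 145, 150])

slope, intercept = np.polyfit(weeks, units_sold, 1)  # fit a straight line
next_week = 10
forecast = slope * next_week + intercept

print(f"forecast for week {next_week}: {forecast:.0f} units")
# → forecast for week 10: 152 units
```

Ordering roughly the forecast amount, rather than a padded guess, is what keeps surplus stock from rotting on the shelf.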
While likely one of the least headline-grabbing use cases for AI/ML, predicting demand and understanding where goods and raw materials are, who is handling them, and when, will create a beneficial ripple effect.
Former Chief Technology Officer at Airbus
Much of the buzz today is around AI/ML in product applications, but the most profound impact will be in the use of AI/ML to usher in a new scientific revolution — a total transformation in the way scientific discovery, technological invention, and engineering design are done. New discoveries do not just spring from the genius mind of the inventor. They are, rather, the product of a laborious cycle of guessing and testing, and guessing and testing. The genius comes in steering the guessing so it isn't just random, but likely to find something new and useful before funding runs out. A colleague once memorably called it the "graduate students with pipettes" paradigm of science, dating back to the 17th century.
AI/ML is poised to change this in two big ways. Since modern AI/ML techniques are fundamentally pattern matching at a massive scale, they appear to be quite good — seemingly better than humans — at adaptively directing the iterative guessing and testing, learning as they go. Also, if the testing bit can be replaced with computer simulation, each cycle can be faster and cheaper. The problem is that detailed models of many physical phenomena are prohibitively computationally intensive. Here, AI/ML techniques can help create simplified "surrogate" models to substitute for the detailed physics, which in turn can substitute for lab testing. We are already starting to see promising results from these uses of AI/ML in the discovery of new materials and drugs, and in the design of aircraft engines. This is just the beginning. Thanks to AI/ML, I think we are in the very early days of the most significant shift in the scientific method since the Renaissance.
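The surrogate-model idea described above can be illustrated in a few lines. This is a deliberately toy sketch: `expensive_sim` stands in for a costly physics simulation, a handful of real runs train a cheap polynomial surrogate, and the guess-and-test search then runs against the surrogate instead of the simulator. Everything here — the function, the design range, the sample count — is invented for illustration.

```python
# Toy sketch of surrogate modeling: fit a cheap stand-in to a few runs of an
# "expensive" simulation, then search the surrogate instead of the simulator.
import numpy as np

def expensive_sim(x):
    """Stand-in for a costly physics simulation of a design parameter x."""
    return (x - 2.0) ** 2 + 0.5 * np.sin(3 * x)

# Run the real simulation only a handful of times.
design_points = np.linspace(0, 4, 8)
results = expensive_sim(design_points)

# Cheap surrogate: a cubic polynomial fit to those few samples.
surrogate = np.poly1d(np.polyfit(design_points, results, 3))

# Search the surrogate densely -- this costs almost nothing.
candidates = np.linspace(0, 4, 1000)
best = candidates[np.argmin(surrogate(candidates))]

# Confirm the surrogate's pick with a single real simulation run.
print(f"surrogate suggests x ≈ {best:.2f}, true cost there = {expensive_sim(best):.3f}")
```

The payoff is the economics: thousands of surrogate evaluations cost less than one real run, so each guess-and-test cycle gets dramatically cheaper — which is exactly the shift described above for materials, drugs, and engine design.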
Dr. Seth Dobrin
Chief Data Officer at IBM
AI to enable safe and secure AI, reducing bias and increasing transparency.
Co-founder and CTO at Spacemaker
Most "pure AI" applications struggle to live up to the hype, largely because of the difficulty of delivering on the promise of completely eliminating the human from the loop and covering every edge case.
Intelligence augmentation, however — using AI to give humans superpowers rather than replacing them — can seem less sexy and disruptive, but has a clear path toward deep impact on the nature of work. Avoiding the need to perfectly emulate the truly "human" skills, an augmenting approach keeps the human in the loop, leveraging their creativity and implicit knowledge in the decision process.
From back-office claims handling to complex design work, there is huge potential to incrementally "fill the gaps" and improve speed and quality of work. In many ways, this is the natural evolution of the type of value "digitization" has been delivering for several decades — just taken to the next level.
Senior Analyst at Altimeter
It's less an application than it is a mindset. There's just a lot of hype around AI, and it doesn't serve anyone in the long run. Partly that's because a lot of the narrative around AI tends to imbue it with the kind of mystical capabilities learned from watching "The Matrix" or "Terminator," rather than a grounding in the actual technologies as they exist today.
What's important to remember is that machine learning is only as good as the data that teaches it (and the data model that organizes it). So if you don't have enough or the right kind of data, AI will fail, sometimes spectacularly (see the work of Janelle Shane for excellent and hilarious examples). And AI has no sense of context, so the more concrete the problem, the better it will perform. Finally, AI can't actually "predict" anything — it's not magic — but it can reveal patterns that may indicate what will happen in the future. It's a nuance, but an important one.
Where AI/ML does shine is in use cases that are clearly defined, rich in data, and marked by enough characteristics and patterns that insight is possible. So, for example, forecasting the spread of a virus, the extent of likely damage in a natural disaster, and the likelihood that a sales lead will convert are all solid use cases for AI. The key is to find the patterns that suggest a course of action. The sweet spot lies somewhere between confirming the obvious (the pitfall of many early AI projects) and trying to solve for events of such low probability that there isn't enough data to suggest a pattern. TL;DR: If it sounds too good to be true, it probably is.
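The lead-conversion example named above is a textbook classification problem. Here is a hedged sketch using scikit-learn's logistic regression: the features (emails opened, demo requested, company size) and every data point are hypothetical, generated purely to show the shape of such a model — it reveals a probability pattern, not a prediction of fate, exactly as cautioned above.

```python
# Minimal sketch: scoring sales-lead conversion likelihood with logistic
# regression. All features and training data are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300

# Hypothetical features per lead: emails opened, demo requested (0/1), log company size.
X = np.column_stack([
    rng.poisson(3, n),
    rng.integers(0, 2, n),
    rng.normal(4, 1, n),
])
# Synthetic ground truth: engagement and demo requests drive conversion.
logits = 0.3 * X[:, 0] + 0.8 * X[:, 1] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Score a fresh lead: opened 5 emails, requested a demo, mid-size company.
lead = np.array([[5, 1, 4.0]])
prob = model.predict_proba(lead)[0, 1]
print(f"estimated conversion probability: {prob:.2f}")
```

The output is a probability to rank leads by, not a verdict — which is the practical difference between revealing a pattern and claiming to predict the future.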
See who's who in Protocol's Braintrust (updated Feb. 26, 2020).
Kevin McAllister (@k__mcallister) is an associate editor at Protocol, leading the development of Braintrust. Prior to joining the team, he was a rankings data reporter at The Wall Street Journal, where he oversaw structured data projects for the Journal's strategy team.