Training data scale, black boxes and sensor calibration are among the factors that can derail AI in IIoT environments, members of Protocol's Braintrust say.
Chief Product Officer at Samsara
From construction companies adopting just-in-time logistics, to trucking companies reducing accidents, to city governments lowering air pollution, we've seen our customers adopt data and AI to transform their operations. It's an exciting time to be working with AI! But deploying AI into production IoT environments is still complex. These are a handful of the top challenges any technology company must address when doing so:
- First, do you have access to real-world training data, at scale? Samsara is able to leverage our million-plus connected devices to improve models, which dramatically improves what can be done.
- Second, do your models need to run at the edge? If so, design from the get-go for the constraints of embedded systems. This can be radically different from designing AI that runs in the cloud.
- Third, budget accordingly. If you're training models that can be used by tens of thousands of customers, you can take a different approach than if you're designing a system to use yourself.
- And, finally: privacy and ethics. Depending on your use case, you may need a rigorous privacy and ethics review process. AI is a new frontier with incredible benefits when used correctly. It's up to all of us to design technology ethically and with a privacy-first mindset.
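The edge-constraint point above can be illustrated with a toy sketch. One common embedded constraint is memory: quantizing float32 weights to int8 cuts model storage roughly 4x. This is a simplified, hypothetical illustration, not any specific vendor's quantization toolchain.

```python
# Symmetric linear quantization of float weights to int8, a common
# trick for fitting models into edge-device memory budgets.

def quantize_int8(weights):
    """Map floats to int8 codes plus a scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# int8 storage costs 1 byte per weight vs. 4 bytes for float32,
# at the price of a small round-trip error bounded by the scale.
```

The same budgeting mindset applies to compute and power, which is why edge models are designed under these constraints from the start rather than ported from the cloud as an afterthought.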
AI can bring incredible benefits to society, from increased safety to efficiency and sustainability. We are still in the early days, and are excited to see what comes next.
CTO at PTC
The biggest hurdles to using AI in IIoT are similar to the general challenges of deploying AI in any enterprise, and come down to two main causes. The first is a poor understanding of the use case, which leads to an inadequately framed analytics question that can't be answered by an AI-driven solution in production. Ultimately, this is indicative of AI experimentation being misaligned with business value. The second is that even well-defined use cases often suffer from misalignment with supporting data: a lack of data, or limited creativity in finding the data that best describes the overall problem.
Manufacturing leaders who are successfully applying AI in IIoT are able to articulate a transformation vision and effectively use that vision to drive a wide array of stakeholders including technical, data and domain experts through multiple iterative cycles with a relentless focus on business value. Leaders are also able to easily empower teams to be flexible with the way they frame problems for AI to solve, and do so with greater and greater precision as they iterate.
On the whole, we have found that an inability to coordinate resources around a clear transformation vision results in a few common patterns in failed AI projects: an inability to integrate data, unclear success criteria, ML models that can't be deployed, technology silos, or an inability to devise a process that can exploit AI-driven insights. All of this leads to stalled projects, or projects that fail to live up to unrealistic ROI expectations. These common failure points are really symptoms of a more fundamental misunderstanding of how AI works and what it takes to make it successful. Thus AI fluency as an overall corporate priority remains incredibly important.
Director of Strategic Partnerships & Planning, Honeywell Connected Enterprise
AI-driven efficiencies have become essential, not just optional, which is why I'm particularly excited about "AIoT." There are three areas in IIoT architecture that can serve as either hurdles or opportunities when planning for "AIoT": talent, change management and trust.
- Talent: Having the right teams in place can make or break transformation. Industrial enterprises wish to modernize and adopt SaaS, IoT, ML, AI and more, only to face an ongoing struggle across the organization. Resource constraints, missed timelines and quality issues stem from a gap in specific AI engineering expertise or focused roles. Hire for unique skillsets (rather than "IT generalists"), enable additional training and development, and create an AI-specific organizational structure.
- Strategic planning in change management: Implementing and scaling AI requires a multi-year strategy that can be cost-intensive initially with high upside for ROI over time. To employ such a strategy, existing IoT or structural constraints — like bandwidth or edge requirements — must be well understood by the organization. For factors like talent, knowledge utilization and scale, strategic partnerships can pave the go-to-market path while reinforcing a unified message that addresses unknowns in a way that helps retain talent during periods of change.
- Trust: Whether you're building, buying or partnering to achieve AIoT objectives, understanding legal elements and data ownership, exposure and access is paramount. Once successfully deployed, AIoT can enhance privacy and security and ultimately mitigate organizational risk, but understanding the process during deployment is crucial to lasting success.
"AIoT" is highly complicated given "big data" requirements that lead to a powerhouse effect of efficiency, continuously improving its capabilities and decision-making skills.
VP and Sr. Principal Engineer, Internet of Things Group at Intel
Complexity and cost remain among the biggest pain points hindering AI adoption in IIoT.
Many industrial companies do not have in-house AI capabilities or extensive training datasets, and are looking for turnkey, customizable solutions that deploy innovative models or algorithms at the application level and speak the same language as the existing infrastructure on the factory floor. In order to generate lasting value, the insights from algorithms and the corresponding actions must exist in a closed loop, so all the machines in the factory can benefit from real-time feedback. To achieve this, the output of the AI model must be able to communicate with the varied infrastructure on a factory floor, with processing occurring at the edge.
AI will need to demonstrate tangible benefits in order to justify its ROI. If the AI algorithm identifies a defect and texts the factory manager about the issue, that does little to help drive efficiencies and cut costs. However, if the output of the AI algorithm communicates directly with the machine and instructs it to adjust the right machine parameters so that defect no longer occurs, it can provide true and lasting value.
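The defect example above can be sketched in a few lines. The machine interface and parameter names here are hypothetical; a real plant would carry the adjustment over an industrial protocol such as OPC UA or MQTT rather than a Python object.

```python
# Toy sketch of a closed loop: a model's defect signal adjusts a
# machine parameter directly instead of only alerting a human.

class Machine:
    """Stand-in for a factory machine; real systems would expose
    parameters over an industrial protocol."""
    def __init__(self, temperature=210.0):
        self.temperature = temperature  # hypothetical process setting

    def apply(self, parameter, delta):
        setattr(self, parameter, getattr(self, parameter) + delta)

def closed_loop_step(machine, defect_score, threshold=0.8, step=-5.0):
    """If the defect score crosses the threshold, nudge the parameter
    believed to cause the defect; otherwise take no action."""
    if defect_score >= threshold:
        machine.apply("temperature", step)
        return True   # corrective action taken
    return False      # within tolerance

m = Machine()
acted = closed_loop_step(m, defect_score=0.93)
# acted is True and m.temperature has been lowered by one step
```

The contrast with the text message is the point: the loop closes on the machine, not on a human inbox.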
At Intel, we know adoption of AI will increase only when it is made more accessible and the barriers to entry are lowered. Intel's OpenVINO Toolkit combined with Intel's Edge Insights Software are great examples of such open and scalable infrastructure; they use some of the best open source ingredients combined with unique approaches to solving the most challenging technical complexities to help make AI-based applications deployable in a scalable way.
Senior Vice President and Chief Technology Officer at GE Digital
The implementation of Industrial AI can transform a company's assets and systems by capturing data more accurately, using data from different sources to increase business insights and strengthen predictive capabilities. But, there are barriers and risks that come with expanding and using digital touchpoints.
The first hurdle is understanding the business process(es) that could be improved using data. This involves determining how that data is used in the current process and how that process would be modified to get the improved business results. Understanding the process would tell you the data you need, where you could get it from, the AI models that you would use to make the decision of whether or not to change the process, and how you would justify that decision.
Another key hurdle is acceptance of the AI's output. Many AI systems are black boxes whose "logic" cannot be readily understood by a layperson. There is a need to construct AI models with a clear correspondence to physical models, so that their outputs can be explained and trusted.
It's important to keep in mind that scale and implementation may not be easy at first, but as more industrial assets are digitized, the opportunity for us to use data to improve business processes and business value grows exponentially.
Chief Product and Marketing Officer at Pelion
The value of AI is built on trusted data. Industrial enterprises struggle to feed their AI-IoT with secure data at scale due to challenges around latency, reliability and bandwidth constraints. But by 2025, Gartner expects that 75% of all data will be processed at the edge, as organizations look to mitigate these issues.
Of course, performing data processing and running AI at the device or gateway level presents its own challenges. The fast pace of software and AI evolution changes the game for industrial IoT deployments. Looking ahead, we must assume that no hardware components are static; rather, they need to be built on a resilient and updatable foundation that will be able to accommodate the rapid evolution of software. Connecting, managing and updating these components will be key in producing trustworthy data, for training and generating value from AI models.
Pre-IoT, when a manufacturing facility was a closed system, the only software came from the OEM who had no concerns about interoperability with any external frameworks! Now, as companies incorporate more machine learning and AI, they'd be wise not to tie themselves to one application or ML application processing framework that could limit the AI models and application tooling they can accommodate. A diverse ecosystem is best served by a neutral foundation.
Senior Data Science Manager at World Wide Technology
Over the last few years, we have developed and deployed IoT-based solutions across industries. These solutions range from video analytics at the edge to sensor-based preventative maintenance schedules, but common themes emerge as challenges in all implementations.
- Hardware Installs – Harsh conditions in an industrial setting can cause problems that might otherwise never surface. A valuable sensor placed at the wrong location can be the difference between a great and a mediocre predictive model. One example is our experience installing smart cameras at a surface mine in Arizona: dusty lenses, extreme heat, vibrations in the mounting and sunlight flares at certain times of the day were some of the challenges we had to overcome to arrive at a robust AI solution.
- Continuous Calibration – Ensuring accurate values and identifying the need for sensor calibration is important so that the AI models which consume these values can produce reliable results. Sensor drift or changes in operational conditions need to be detected and accounted for in the models. With so much focus on the production value AI brings, the time and complexity of maintaining the underlying IoT devices often go ignored.
- Change Management – The non-tangible hurdle when introducing AI in any setting is its interaction with humans. This is amplified in an industrial setting where changes to processes and production lines are uncommon. It requires a collaborative development approach with SMEs, operators and end users to leverage their decades of operational knowledge to validate predictive models and build trust.
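The continuous-calibration point above can be made concrete with a minimal sketch: compare a rolling window of readings against the calibration baseline and flag drift when the deviation exceeds a tolerance. The window size and threshold here are illustrative, not from any specific deployment.

```python
# Minimal sensor-drift detector: flag when the rolling mean of
# recent readings wanders too far from the calibration baseline.
from collections import deque
from statistics import mean

class DriftDetector:
    def __init__(self, baseline_mean, window=20, tolerance=2.0):
        self.baseline = baseline_mean
        self.readings = deque(maxlen=window)
        self.tolerance = tolerance  # allowed deviation, in sensor units

    def update(self, value):
        """Return True once the rolling mean drifts past tolerance."""
        self.readings.append(value)
        if len(self.readings) < self.readings.maxlen:
            return False  # not enough data yet
        return abs(mean(self.readings) - self.baseline) > self.tolerance

det = DriftDetector(baseline_mean=100.0, window=5, tolerance=2.0)
for v in [100.1, 99.8, 100.3, 100.0, 99.9]:
    det.update(v)   # healthy sensor, no flag
for v in [103.5, 103.8, 104.0, 103.6, 103.9]:
    flagged = det.update(v)
# flagged becomes True once the window fills with shifted readings
```

In production, a flag like this would trigger a calibration work order or exclude the sensor's data from model input until it is serviced.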
Kevin McAllister (@k__mcallister) is a Research Editor at Protocol, leading the development of Braintrust. Prior to joining the team, he was a rankings data reporter at The Wall Street Journal, where he oversaw structured data projects for the Journal's strategy team.