Organizational unreadiness, lack of investment in trust and poor data quality can hinder even the best-laid plans, members of Protocol's Braintrust say.
Good afternoon! Sometimes even good plans come up short, so with today's Braintrust question, we asked the experts to clue us in on what often stands in the way of implementing an AI strategy that does exactly what you need it to. Questions or comments? Send us a note at firstname.lastname@example.org
CIO at Oracle
A lot of people would be surprised at the extent to which AI is inherently domain-specific — at times even organization-specific — and definitions of terms can vary greatly. For example, the word “short” is a positive attribute if you’re talking about the checkout queue in your retail store, but negative if you’re talking about the battery life of your new mobile phone — and entirely different in the context of financial trading. While many organizations may try out the latest and greatest general-purpose algorithms available, even the best AI will fail without the proper context and data tuning for your specific scenarios.
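The point about domain-specific meaning can be made concrete with a toy sketch. This is a hypothetical illustration (the lexicons, weights and function names are assumptions for demonstration, not any vendor's actual approach): the same word, "short," scores as positive in one domain's lexicon and negative in another's, which is why a single general-purpose model needs domain tuning.

```python
# Hypothetical illustration: the same word carries opposite sentiment
# depending on the business domain, so one general-purpose lexicon
# cannot score it correctly for every scenario.
DOMAIN_LEXICONS = {
    "retail_checkout": {"short": +1.0, "long": -1.0},  # short queue = good
    "mobile_battery": {"short": -1.0, "long": +1.0},   # short battery life = bad
}

def score(text: str, domain: str) -> float:
    """Sum sentiment weights for known words, using the given domain's lexicon."""
    lexicon = DOMAIN_LEXICONS[domain]
    return sum(lexicon.get(word, 0.0) for word in text.lower().split())

print(score("short wait at checkout", "retail_checkout"))  # 1.0
print(score("short battery life", "mobile_battery"))       # -1.0
```

Identical input text, opposite scores: the domain context, not the algorithm, determines the right answer.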
The other challenge with AI is that it needs to be deployed and embedded into your existing business processes. We call this “the last mile of AI” problem, and it requires significant buy-in from your stakeholders and alignment on the outcome(s) to be successful. You don’t just want random AI models sitting somewhere; you want them embedded into your existing business applications. Today, AI is typically created and used by a select few of the most advanced users. It needs to be accessible and consumable by everyone in your organization, integrated into their daily business application experience. This integration is what allows AI to improve your operations and drive effective change in your organization’s processes. The first step is ensuring your stakeholders agree on this approach and welcome it into their daily business.
Global commercial head of Data, Analytics and AI at Levi Strauss & Co.
Even the best AI strategy will fail without a focus on human intelligence across the organization. Why do organizations deploy AI? To reach their strategic vision and reach their potential. So, AI strategy needs to place both human intelligence and technology at the center. This combination will drive desired outcomes, incorporate the necessary oversight and feedback loops and get the most out of the organization’s experience, data and intellectual capital. Conversely, organizations that deploy AI solely with a focus on technology risk alienating employees, impeding adoption and creating biased algorithms, among other issues.
Instead, a successful AI strategy will start with incorporating human intelligence — not just data scientists and engineers, but the whole organization. At Levi Strauss & Co., we’re running machine-learning bootcamps to help with this. So far, 100 employees from all functions and geographies have participated in an eight-week intensive full-time program on machine learning, and we are upskilling 120 more people this year. The majority return to their roles to help implement AI — a bridge between AI and decades of human experience in the fashion industry.
Additionally, achieving a return on AI requires collaboration between AI and business functions to integrate AI among people, processes and systems of the organization. As one example, merchandise planners have decades of experience that can help them produce accurate forecasts. By combining AI models with planner judgment, we improve the overall accuracy of our demand predictions. Working together, data scientists and planners have created a revolutionary approach to planning that trains and deploys AI models, incorporates planner feedback and enables continuous learning for both humans and machines, to produce the best result for the company.
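One simple way to picture combining model output with planner judgment is a weighted blend. This is a minimal sketch under assumed weights, not Levi Strauss & Co.'s actual planning method:

```python
# Minimal sketch (assumed weights, not the company's actual method):
# blend a statistical model's demand forecast with a planner's judgment.
def blended_forecast(model_forecast: float,
                     planner_forecast: float,
                     model_weight: float = 0.7) -> float:
    """Weighted average of model output and human planner judgment."""
    return model_weight * model_forecast + (1 - model_weight) * planner_forecast

# e.g. the model predicts 1,000 units, but the planner expects 1,200
# because of an upcoming promotion the model has never seen.
print(blended_forecast(1000, 1200))  # roughly 1060
```

In practice the weight itself can be learned: tracking when planner overrides beat the model (and vice versa) creates the continuous feedback loop described above.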
Senior managing director and global lead at Accenture Applied Intelligence
There is a surprisingly straightforward reason so many companies struggle with AI-driven transformation: Few CEOs deeply understand what AI is and how their companies can make the most of it. Too many CEOs lead their data and AI initiatives with excessive timidity — what we call “pilot-itis,” a fixation on pilot projects at the expense of scaling AI effectively. Four actions help CEOs make their AI strategy a success:
1. They should immerse themselves and their senior leaders in the details of how AI works. Broad conversations about its potential value to their organizations aren’t enough.
2. CEOs must align their data and AI strategy with their core business goals and the details of their operational strategies. This requires effective strategies to capture, store and process the mountains of data that fuel AI.
3. Because legacy platforms often hinder companies in maximizing the value of their data and AI, CEOs need to invest intensively in advanced data and AI platforms. The most successful firms build future-ready and operational data and AI platforms that tap into companies’ technology and data ecosystems. These “AI engines” balance experimentation and execution and are managed by a team of diverse AI and data experts.
4. The purpose of AI-driven transformation is to help companies better achieve their goals, not to cede fundamental control of companies over to machines. To keep humans in charge of deciding who decides, CEOs must strive for an AI-enabled decision architecture that strikes the right balance between computer-led and human-led decision-making.
Chief AI officer at IBM
When I combed through the results of IBM's Global AI Adoption Index 2022, our annual survey on global AI adoption trends, two trends stood out to me for the wrong reasons. Both are contributing to the failure of AI projects, and both highlight the need for holistic AI strategies.
First, a lack of tools or platforms to develop models, project complexity and data complexity are problems for one in four businesses. Smart AI strategies will fail if they do not have the right data management infrastructure. AI is only useful if it's being powered by meaningful, up-to-date data, which requires an architecture like a data fabric that can make the right data available to everyone who needs it, when they need it.
Second, while organizations have embraced AI, few have made tangible investments in ways to ensure trust by addressing bias, monitoring for drift or making decisions explainable. Organizations also need to consider consumer trust. While 85% of IT professionals agree that consumers are more likely to choose a company that is transparent about how its AI models are built, managed and used, only about half have comprehensive strategies for their approach to AI ethics. If you're designing an AI strategy and you don't have a methodical approach to AI governance, it's not set up for long-term success.
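Monitoring for drift, one of the trust practices mentioned above, can be sketched very simply. This is an illustrative example only (the threshold rule and function are assumptions, not IBM's tooling): flag drift when a live feature's distribution moves far from the training baseline.

```python
# Illustrative sketch (not IBM's tooling): flag input drift by comparing
# a live feature's values against the model's training baseline.
from statistics import mean, stdev

def drift_detected(baseline: list[float], live: list[float],
                   threshold: float = 3.0) -> bool:
    """Flag drift when the live mean sits more than `threshold` baseline
    standard deviations away from the baseline mean."""
    shift = abs(mean(live) - mean(baseline))
    return shift > threshold * stdev(baseline)

training_values = [10.0, 11.0, 9.0, 10.5, 9.5]
print(drift_detected(training_values, [10.2, 9.8, 10.1]))   # False
print(drift_detected(training_values, [25.0, 26.0, 24.5]))  # True: retrain or review
```

Production systems use richer statistics than a mean shift, but even a check this basic is more governance than many deployed models get today.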
Our data shows that today, almost four in 10 companies around the world are developing their AI strategies. Those that take the time to put in place holistic, human-centered data and AI strategies will reap bigger rewards long term.
Founder and CEO at Credo AI
Even the most well-thought-out AI strategy is a risky one if there isn’t an effort by enterprises to create a sustainable, long-term approach to implementing responsible AI.
The last several years have shined a spotlight on major AI threats impacting businesses and consumers. For example, social algorithms optimizing for engagement can divide a nation; unfair algorithms can be the reason you don’t get a job or admission to college. Moreover, a new wave of upcoming global, federal and state AI regulations requires organizations to prepare and build out a robust approach to responsible AI governance. The more autonomy we give our tools, the wider the range of unintended consequences. Extraordinary scale generates extraordinary impact, but not always the impact we intend.
Enterprises that invest in AI governance to help them build technology responsibly and commit to responsible use of data will be able to drive trust across stakeholders: customers, consumers, investors, employees and others. Building accountability and trust in AI is a dynamic process. Organizations must align on enterprise values, codify those values, build them into their organizational infrastructure, observe impact and repeat the process with diverse voices providing input at every stage.
Senior vice president and general manager of Digital and AI at Genesys
As consumers, we’ve all been on the receiving end of AI failures. In the customer experience industry, some of the most prevalent, yet avoidable, are in the areas of self-service. Here are a few common examples:
The goal of self-service experiences is to satisfy customer needs by providing automated resources or bots that resolve inquiries without routing them to a person. This can go awry when businesses haven’t determined the right outcomes or don’t have the correct content available for effective deflection. Often the culprit is poor intent scoring or insufficient responses, which leads to confused bots and unsuccessful self-service experiences.
Another common pitfall is delivering responses in a format that is unusable or poorly suited to the user. One example is a business pushing a customer on a smartphone to a web page instead of a mobile app, which breaks the digital experience.
We also see AI that isn’t connected to the right business processes to handle graceful, empathetic escalation to human-based support. To avoid this, businesses must pass along the customer’s full history and context, including why they are engaging, so human agents can meet customers where they are without friction.
Another common issue for businesses when deploying an AI strategy is not fully leveraging or connecting their customer data across systems. This often forces them to re-authenticate to get support or completely fractures the experience, causing customer frustration to mount.
CIO at Freshworks
The digital transformation process has taken over companies, governments and other organizations, but for a variety of reasons, the actual implementation of these disruptive technologies can be a challenge. One example is artificial intelligence, which impacts everything from organizational success to customer success. A lack of critical talent and poor or incomplete data can cause even the best AI strategies to fail. Organizations can tackle these issues head-on by identifying and upskilling talent within their workforce, and by determining the right parameters when collecting data so that its quality meets the standard needed to build a successful AI model. Testing the data is also important to make sure you are collecting what you need.
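Testing data before building a model can start with something as simple as a completeness report. This is an illustrative sketch (the field names and records are invented for demonstration, not any company's actual pipeline):

```python
# Illustrative sketch: basic data-quality check run before model training.
# Field names and records here are invented for demonstration purposes.
def quality_report(records: list[dict], required_fields: list[str]) -> dict:
    """Return the fraction of records with a non-empty value per required field."""
    total = len(records)
    completeness = {}
    for field in required_fields:
        present = sum(1 for r in records if r.get(field) not in (None, ""))
        completeness[field] = present / total if total else 0.0
    return completeness

records = [
    {"customer_id": "a1", "ticket_text": "login issue", "channel": "email"},
    {"customer_id": "a2", "ticket_text": "", "channel": "chat"},
    {"customer_id": "a3", "ticket_text": "billing question", "channel": None},
]
print(quality_report(records, ["customer_id", "ticket_text", "channel"]))
# customer_id is fully populated; ticket_text and channel each have gaps
```

A report like this makes the "is our data up to standard?" question measurable before any model is trained on it.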
See who's who in the Protocol Braintrust and browse every previous edition by category here (Updated May 12, 2022).
Kevin McAllister (@k__mcallister) is a Research Editor at Protocol, leading the development of Braintrust. Prior to joining the team, he was a rankings data reporter at The Wall Street Journal, where he oversaw structured data projects for the Journal's strategy team.