Abhishek Gupta is the founder and principal researcher at the Montreal AI Ethics Institute and senior Responsible AI leader and expert at Boston Consulting Group; Steven Mills is the Global GAMMA Chief AI Ethics Officer at Boston Consulting Group.
The Responsible AI (RAI) domain is at an inflection point: We are moving decidedly from principles to practice. As organizations mature in their understanding, they are feeling pressure to act, driven by customer demands and impending regulatory requirements.
RAI means developing and operating artificial intelligence systems that align with organizational values and widely accepted standards of right and wrong while achieving transformative business impact.
But successfully operationalizing RAI requires a leader with the right mix of knowledge, skills, abilities and experience, and RAI remains a nascent field. The pool of experienced RAI leaders is extremely limited. As a result, many organizations have relied on legal and public policy experts.
This is invaluable expertise, particularly when faced with the emerging AI regulatory environment. But there is also an acute need for up-to-date technical understanding of AI systems, including their capabilities and limitations, to avoid leaving vulnerabilities and risks unaddressed.
For example, a new class of image-generation techniques called diffusion models is advancing at an incredible pace: We went from being shocked by the capabilities of DALL-E 2 last April to even newer systems, like Midjourney and Stable Diffusion, providing richer and more varied outputs only a few months later. While this technology is truly astounding, it raises an important set of emerging ethical issues that RAI leaders will need to navigate.
The novel ethical issues arising in these cases can certainly be navigated with expertise in law, public policy and philosophy. In fact, this expertise will be critical. However, addressing issues with modern AI systems, especially commercially viable production-grade systems, requires an up-to-date technical understanding. This spans the mechanics of system development and operations as well as the surrounding technical infrastructure.
And this goes beyond simply having technical expertise and experience — having recent experience with cutting-edge tech is equally important. For those whose latest technical experiences date back 10 years or more, this can present a challenge.
Let’s start with the fact that the recent wave of technical progress in deep learning really only started in 2012; techniques as groundbreaking as the Transformer architecture only date back to 2017. The pace of research and development is unrelenting, and that becomes an issue when the time comes to incorporate technical controls and measures to address and mitigate ethical issues. An RAI leader with recent technical experience is more likely to have a scientifically grounded, and therefore more accurate, sense of the likely arcs of R&D in the field (for example, the progression of capabilities of large language models). That understanding helps them craft strategic and tactical investments that are “future-proof,” i.e., robust to major technological advancements.
None of this is to say that a purely technical approach is the solution. It is well documented that AI systems are sociotechnical in nature and hence require sociotechnical solutions, but computing plays a role in building the foundation to meet these challenges head on.
Traits to look for in an RAI leader
- Solid technical communication skills. From an organizational standpoint, an RAI leader must interact with and coordinate among technical stakeholders in addition to other functions like privacy, legal, risk and compliance, marketing and sales. The RAI leader’s success depends not only on the degree of coordination they can orchestrate between all these functions, but also on their ability to provide clear guidance with a forward-looking approach that accounts for the evolving capabilities and limitations of emerging AI systems.
Understanding the technical functioning of both the AI and the software infrastructure supporting the system, without needing tutorials from the team, rapidly builds trust, accelerates interactions between the various functions and provides more clarity to plan for and mitigate risks. It also helps the leader relate to the needs and ways of working of practitioners, making proposed plans much more realistic to implement.
- Experience with production-grade AI. Another key consideration is exposure to production-grade AI systems. Such systems present challenges that are not encountered in more academic, sandbox AI systems tested on benchmark data sets. These include failure modes that arise when the system is deployed in a distributed computing setting, as well as novel attack surfaces, from a machine-learning security perspective, that can emerge when you plug an AI subsystem into your broader software infrastructure, among other risks. Industry experience also equips the RAI leader to select mitigation strategies that are likely to work at scale and over time rather than theoretical solutions that might not survive contact with the real world.
- Openness to tech training. The RAI leader can be trained to understand how AI products are developed by shadowing technical counterparts and embedding themselves more deeply in the technical workflows of the organization to supplement their prior knowledge if they come from a non-industry background. In general, the executive team should prioritize time and resource allocation toward equipping them to be successful in their role.
- Knowing how to balance theoretical and practical ethical goals. An RAI leader with recent technical experience and exposure to production-grade AI systems brings a degree of confidence to the RAI program, enabling the organization to deftly tackle emerging risks while innovating responsibly. It helps minimize cases where the approaches to addressing ethical issues are too theoretical, impractical or limited in scope to achieve the goals stated in RAI principles. A leader who lacks this background can be partially supported by the right technical advisers, but that arrangement makes implementation harder and sustained success less likely.
Navigating arising ethical challenges through a technical lens
When you have an RAI leader who has experience with the challenges outlined above, they can bring a degree of nuance to solutions that can’t be obtained through consultations with a technical team alone. Something is always lost in translation when a technical team has to distill its insights and experience for someone without that background. A leader who embodies that knowledge and experience can integrate it directly into the broader strategic directions they have in mind, directions that they, in turn, might not be able to fully (or sometimes at all) share with the technical team.
If you are building out your RAI program and looking to hire an RAI leader to kickstart that journey, keeping in mind the recency of their technical experience or how you will help them build that expertise will be a critical factor in whether you’re able to achieve sustained success. As AI systems continue to evolve in what they can do, both in their breadth and depth, having an RAI leader who can effectively navigate that through a technical lens will be the differentiator between good and great program implementation.
Even if they don’t yet have this in their repertoire, planning for and investing in equipping them with that knowledge and experience will help you get significantly more out of your overall investments in designing, developing and deploying AI systems responsibly.
This story was updated to correct the spelling of Abishek Gupta's name.