The world needs an AI code of ethics

The AI revolution has already started, and business, academia and government need to act together now to prevent harm.

Bishop Garrison is vice president of government affairs and public policy at Paravision.

It is an iron law of progress that any innovation that benefits society also has the potential for harm. We saw it with the train and the automobile. We can already see it with genetic engineering. And now we are seeing it with artificial intelligence.

Every day brings a new report of how artificial intelligence is opening up new opportunities to detect disease and eliminate hunger, to understand the nature of the universe or to combat climate change. Yet darker uses are also emerging, including deepfakes, disinformation and autonomous weapons systems capable of using lethal force without human intervention.

We find ourselves on the doorstep of the next great societal challenge: harnessing the benefits of artificial intelligence while also ensuring it is used ethically and responsibly. It is our responsibility to establish processes and policies now to determine whether AI will be helpful or harmful in the future, and how we will protect against illicit or dangerous use. The problem is manifold: How do we ensure the private sector develops this technology ethically? What do AI ethics even entail? How do we keep social biases from being embedded in and amplified by AI?

These are not rhetorical questions. They represent issues of generational concern that require both great debate and an enormous amount of collaboration between the public and private sectors. Business leaders, academics and public servants must trust one another to smartly and thoughtfully devise these solutions together. No one group will have the answer, and no singular entity will know what is in the best interest of society regarding a technology with potential that rivals — and perhaps exceeds — any yet developed in human history. We must establish these tools of trust.

The stakes posed by this dilemma — our need to get this right — make it a formidable but not impossible task. During my time in government, I participated in initiatives that navigated a host of intricately detailed, politically charged and complex problems. We rethought the future of passenger screening at the Department of Homeland Security. We supported efforts to combat sexual assault and sexual harassment within the ranks at the Department of Defense. And I led a group of dedicated career civil employees and uniformed service members to devise recommendations on countering extremist activity within the U.S. military. I learned that tackling these problems takes an investment in people as well as relationships built on trust.

These were all difficult tasks, on a professional and personal level. They were politically sensitive, divisive and required a whole-of-community approach to develop solutions that addressed the root causes of each problem. The challenges posed by AI are no less complicated, and no less urgent. Given the projected long-term capabilities of AI, ensuring its ethical use is paramount to the safety and security of the global community.

If we do not come together now to address these challenges, the consequences will be devastating for our nation and societies worldwide. This will take governments at all levels working hand-in-hand with industry, academic, community and policy leaders in a lasting public-private partnership. Determining the proper oversight, regulatory framework and resourcing will be critical to create standards that result in the least harm and the most good while simultaneously ensuring that innovation and creativity in this arena do not become stifled or suffer a chilling effect.

Critics may object to the entire project on the grounds that ethical standards can only make U.S. AI companies less competitive. They may agree with former Google CEO Eric Schmidt, who once argued of face recognition, “there is no U.S.-China contest; the United States has essentially conceded the race because of concerns over the average individual’s privacy, and deep reservations about how this technology could be deployed.”

While I greatly respect his experience and acumen, I disagree. The performance of U.S. face recognition companies in recent government benchmarks suggests that principled development is no obstacle to world-class AI performance. Perhaps more to the point, we have never been a country to run from the difficulties associated with solving hard problems or making generational breakthroughs. This is a time of challenge, a time of controversy, and it is our time to be measured.

Make no mistake: Devising a globally accepted set of AI principles and convincing private and public institutions to adopt it will be a heavy lift. But ignoring the challenge is not an option. If we want a future consonant with our values, we need to address this problem now, together, with every asset at our disposal.