August 27, 2020
Questions on ethics, geopolitical implications and deepfakes are top of mind for members of Protocol's Braintrust.
Dr. Rumman Chowdhury
Global Lead, Responsible AI at Accenture
In a time of significant shifts and change, my question for the presidential candidates would be: How will AI be governed in the public sector so that agencies can achieve their missions while also providing protections for individuals?
As AI blurs the line between public and private use, how do we create safeguards to ensure all people benefit? Public use of technology differs from commercial use; while a private company can create an app or a product that addresses the needs of a specific subset of the market, technology that is created in public service must address the needs of all citizens. What are the kinds of governance practices, guidelines, assessments, standards and assurances needed in the design, development and deployment of AI to foster responsible and trustworthy use of AI?
Executive Director at the Partnership on AI
"AI policy" encompasses a wide universe of policy issues, given that "AI" is not just one technology. It spans so many diverse domains, each of which has its own unique effects to consider.
When I worked in government during the Obama administration, AI policy constituted a list of fairly abstract issues and opportunities faced by the global public sector at the time, which proved hard to get government agencies to focus on. In less than five years, this space has completely transformed: Salient, urgent questions have been articulated related to AI and geopolitics, national and international security, social and racial justice, safety, research and development, labor and the economy, scientific discovery, platform regulation and content moderation, and more. The stakes are very high in many domains, given the scalability of AI technologies and the sensitivities of some contexts in which they are being used (as in criminal justice and law enforcement). And there is a widespread, important debate about how government can best ensure that the incentives of institutions like corporations are balanced against the needs of individuals and society.
Fundamentally, it is the responsibility of government to ensure that the rights and liberties of citizens are protected, and that society can responsibly leverage science and technology to solve the most pressing challenges facing us. Related policies formulated in the next five years will likely shape norms and opportunities that will affect generations. Most governments around the world are woefully unprepared to meet these needs, given their limited capacity to understand the technical and societal implications of AI and to horizon-spot the next big questions that will need answering. This makes the work of organizations like the Partnership on AI, and so many others in our network, vital.
So, broad but important: In what ways will your government be prepared to meet the needs of the American citizenry when it comes to AI development and deployment?
President and CEO at Consumer Technology Association
The U.S. and our fellow freedom-loving Western democracies are in a global race with China — a country with far different views about the role of tech, privacy and freedom — to lead in artificial intelligence. The stakes are high; leadership in AI will largely determine a nation's military readiness, economic competitiveness and ability to fight future pandemics.
In 2017, President Trump became the first U.S. president to make AI a research and development priority. According to the Consumer Technology Association's International Innovation Scorecard, a ranking of countries based on innovation-friendly policies, the U.S. scores well in the categories of self-driving vehicles, drones and R&D investment. Now, we need leadership on AI from the highest level of government as China continues to pump billions of dollars into its bid to surpass the U.S. and become the world's AI leader by 2030.
As president, what will you do to ensure that America is the unquestioned global leader in AI research, innovation and implementation?
AI Ethicist, Criminologist, Criminal Psychologist and Columbia University Community Scholar
It is important to know what role Big Tech is playing in both campaigns and how much it has contributed to them, in dollars, talent and promises. Both campaigns need to explain how their political structures will restrain data monopolies and encourage real-time antitrust enforcement. What protections can we expect, and how rigorous and robust will those policies be?
When it comes to AI, given its power, pervasiveness and penetration into every aspect of our existence and its potential to upturn the global balance of power, we need to know what policies will be in place to mitigate the risks and threats of AI. Given that AI is a major geopolitical force, what policies or programs will be in place to respond to China's promise of AI supremacy by 2030?
Data protection is a critical theme running through some of the most challenging and chilling questions of AI. Government's use of our data is a major concern that must be addressed within the context of government's use of algorithms and the importance of AI ethics, accountability, transparency and explainability. It is also about balancing the algorithmic scales of justice against a backdrop of racial justice, social justice, economic justice and environmental justice.
There is an algorithmic power imbalance that needs to be recalibrated to address ethical harms such as bias, discrimination and systemic racism, and to ensure fairness. There is a lot of work to be done, and little time, so we need to know whether either campaign has the political will to call Big Tech to account, to atone and to replace the vocabulary of apology with actionable solutions that promote diversity, equity and inclusion in AI.
CEO at Truepic
In recent years, online fraud and disinformation have challenged businesses, consumers and societies around the world. Visual deception is a growing threat that exacerbates these issues. Even simple modifications can be potent: a digitally slowed video of Speaker Pelosi, altered to make her appear intoxicated, went viral on social media almost instantly. Fake videos generated using artificial intelligence, more commonly known as deepfakes, pose an even greater threat given how realistic these fabricated videos appear to the public. With these innovations in synthetic media, bad actors can leverage social media to sway public opinion and deceive millions almost instantaneously.
What is your administration going to do to ensure that the rise of fake content and the lack of accountability online, largely afforded through Section 230 of the Communications Decency Act, will not create a complete collapse of trust across our nation?
See who's who in Protocol's Braintrust (updated Aug. 27, 2020).
Questions, comments or suggestions? Email firstname.lastname@example.org
Kevin McAllister (@k__mcallister) is a Research Editor at Protocol, leading the development of Braintrust. Prior to joining the team, he was a rankings data reporter at The Wall Street Journal, where he oversaw structured data projects for the Journal's strategy team.