It was a year in the making, but people eagerly anticipating the White House Bill of Rights for AI will have to continue waiting for concrete recommendations for future AI policy or restrictions.
Instead, the document unveiled today by the White House Office of Science and Technology Policy is legally non-binding and intended to be used as a handbook and a “guide for society” that could someday inform government AI legislation or regulations.
The Blueprint for an AI Bill of Rights features five guidelines for protecting people in relation to AI use:
- People should be protected from unsafe or ineffective automated systems.
- They should not face discrimination enabled by algorithmic systems based on their race, color, ethnicity, or sex.
- They should be protected from abusive data practices and unchecked use of surveillance technologies.
- They should be notified when an AI system is in use and understand how it makes decisions affecting them.
- They should be able to opt out of AI system use and, where appropriate, have access to a person, including when it comes to AI used in sensitive areas such as criminal justice, employment, education, and health.
What’s not in the AI Bill of Rights
While the document provides extensive suggestions for how to incorporate AI rights in technical design, it does not include any recommendations for restrictions on the use of controversial forms of AI such as systems that identify people in real time using facial images or other biometric data, or for use of lethal autonomous weapons.
In fact, the document begins with a detailed disclaimer noting that the principles therein are “not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities.”
Alondra Nelson, the OSTP’s deputy director for science and society, pushed back on suggestions that the document could disappoint human rights and AI watchdogs who had hoped for a document recommending more concrete rules for AI.
“I categorically reject that kind of framing of it,” Nelson told Protocol. “The document moves as the title says from principles to practice. Upwards of 80% of the document is about precise prescriptive things that different stakeholders can do to ensure that people’s rights are protected in the design and use of technologies,” she said, adding, “Our job at OSTP is to offer technical advice and scientific advice to the president.”
A year ago, Nelson and former OSTP Director Eric Lander co-authored a splashy Wired opinion piece announcing the agency’s plans to produce an AI Bill of Rights that might help alleviate problems with AI systems that had been unleashed by industry for use with no federal regulatory guidelines.
Nelson and Lander mentioned AI systems that reinforce discriminatory patterns in hiring and health care as well as faulty policing software using inaccurate facial recognition that has led to wrongful arrests of Black people. And, linking to an article about surveillance tech used in China to track and control the Muslim minority Uyghur population there, they alluded to use of AI by autocracies “as a tool of state-sponsored oppression, division, and discrimination.”
Soon after the announcement, OSTP held several public listening sessions in November 2021 on AI-enabled biometric technologies, consumer and “smart city” products, and AI used for employment, education, housing, health care, social welfare, financial services, and in the criminal justice system.
While some advocacy groups have indicated frustration with the slow process for publishing the AI Bill of Rights, Nelson said by one measure — the Biden-Harris administration’s Summit for Democracy held in December 2021 — it is actually early.
“We had committed by December to finish this, and we are completing it with a little bit of time to spare,” Nelson said.
Scandal has plagued OSTP this year. Former OSTP Director Lander resigned in February amid accusations that he created “an atmosphere of intimidation at OSTP through flagrant verbal abuse.” In March, POLITICO revealed that Lander had helped enable an organization led by former Google CEO Eric Schmidt to pay the salaries of some OSTP staff.
Lawmakers have proposed legislation that would take ethical commitments made by the government out of the realm of theory and into practice. Legislation introduced in February, for example, would require companies to assess the impact of AI and automated systems they use to make decisions affecting people’s employment, finances, and housing, and to submit annual reports about those assessments to the FTC.
Despite a lack of federal AI laws or regulations, the U.S. has agreed to uphold international principles established in 2019 by the Organization for Economic Cooperation and Development that call on makers and users of AI systems to be held accountable for them and to ensure they respect human rights and democratic values, including privacy, non-discrimination, fairness, and labor rights. Those principles also call on AI builders and users to make sure that the systems are transparent, provide understandable and traceable explanations for their decisions, and are safe and secure.
In conjunction with the publication of the AI Bill of Rights, other federal agencies are expected to signal commitment to take actions reflecting its tenets. For example, the Department of Health and Human Services plans to release recommendations for reducing algorithmic discrimination in health care AI, and the Department of Education plans to release recommendations on the use of AI for teaching and learning by early 2023.