Caitriona Fitzgerald is EPIC’s deputy director and Ben Winters is EPIC counsel.
The White House Office of Science and Technology Policy last week released a “Blueprint” for an “AI Bill of Rights.” While the principles set out in the blueprint do not have the force of law, there are several actions the White House can take to put them into practice within the federal government while simultaneously pushing for new legal protections. The Biden Administration should lead by example.
The major principles set out in the AI Bill of Rights are that AI systems must be safe and effective, free of discrimination, and respectful of data privacy; that their use must be disclosed; and that they must be subject to meaningful human oversight.
Some have praised the blueprint laid out by OSTP, while others lament that it is toothless without laws or sufficient enforcement action. Both are right: The Office of Science and Technology Policy serves as an adviser to the president, so by setting out a strong set of principles, it is doing the most it can within its authority. But the White House and other agencies can work with OSTP, in a "whole of government" approach, to make policy changes based on the principles laid out in the AI Bill of Rights.
The blueprint provides clear endorsement of several key protections that are in pending legislation but not yet enacted at the federal level. These include data minimization, which stands for the simple principle that entities should only collect the data necessary to perform a function an individual has requested, as well as a requirement to conduct independent testing to evaluate effectiveness and possible discriminatory impacts of algorithms. The blueprint even states that certain tools should not be used at all if testing indicates they are unsafe or ineffective, and that “[c]ontinuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.” The White House can support legislation, direct agency action, and lead by example by implementing these principles when AI tools are used by government actors.
At the launch event, two cabinet members announced specific new initiatives that align with the blueprint. Secretary of Education Miguel Cardona revealed upcoming efforts to publish guidance on the use of education technology such as automated proctoring systems, which place students under constant surveillance and have led to false accusations of cheating. Secretary of Health and Human Services Xavier Becerra announced an industry-wide survey of algorithms used in health care. These are good examples of the kinds of actions executive agencies can take to move the AI Bill of Rights principles into practice.
The president also has the ability to take direct action through executive order to ensure that the federal government puts the blueprint into action for existing government uses of these systems. President Biden should update Executive Order 13859, originally issued by President Trump in 2019, which ordered federal agencies to publish information by May 2021 about how they planned to regulate AI in compliance with principles previously laid out by OSTP. Very few agencies have complied with the order thus far. President Biden should now update it to require agencies to comply with the principles laid out in the blueprint, and the administration should ensure compliance with the updated order.
President Biden should also renew the urgency for agencies to comply with Executive Order 13960, also issued by President Trump, in 2020, which requires agencies to publish information about all AI systems they use and directs agencies to complete algorithmic impact assessments. Without a proper accounting of the AI tools in use by federal agencies today, it will be very difficult to implement the AI Bill of Rights.
Several current uses of AI clearly violate the blueprint and should no longer be used. The president should also stop encouraging agencies to spend American Rescue Plan funds on ShotSpotter and other “gunshot detection” technologies, which change police behavior but have not been shown to decrease gun violence. These tools are in violation of the blueprint’s principles that AI tools must be safe, effective, nondiscriminatory, and transparent.
Similarly, the Department of Justice continues to provide millions of dollars in grants for police technology, including almost $4 million in 2021. Our organization, the Electronic Privacy Information Center, along with the NAACP Legal Defense Fund and several other groups, has called for an immediate stop to these grants and a review of the products the government has funded in order to determine whether they meet the standards of safe, effective, and equitable AI.
On the legislative front, the AI Bill of Rights principles are embodied in both the American Data Privacy and Protection Act and the Algorithmic Accountability Act of 2022, both of which the administration could put its support behind.
There has been substantial investment in the development and adoption of AI, but nowhere near as much money or energy put toward safeguards or protection. We should not repeat the same self-regulatory mistakes made with social media and online advertising that left us in the privacy crisis we are in today. The Blueprint for an AI Bill of Rights sets out the principles that must be followed in order to ensure that the use of AI is fair, equitable, and nondiscriminatory. It’s time to ensure those principles are followed in practice.
The authors are staff members at EPIC, the Electronic Privacy Information Center. EPIC is a nonprofit research center that advocates for privacy, civil liberties, and protection against algorithmic discrimination.