Tom Siegel is the CEO and co-founder of Trust Lab.
In the era of social media, AI algorithms have the power to decide everything from our playlists and the videos we watch to the news we consume and which shopping deals we are offered (and which are withheld). For all the good machine-learning technologies do to improve and personalize the online experience, they also present one of the biggest threats to online safety, with real-world consequences for the health and well-being of all internet users.
Used without enough care and in the absence of safety guardrails and programs, algorithms often harm the very users they are supposed to serve. Predictive algorithms may spur platforms to continually serve up content related to thinness and eating disorders, suggest ever more extremist videos or amplify polarizing beliefs. They can also radicalize people through filter bubbles, which selectively guess what a user wants to see based on their profile and past behavior, and echo chambers, which reinforce beliefs through communication and repetition.
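The feedback loop behind a filter bubble can be shown with a deliberately simplified toy model. This is an illustration only, not any platform's actual algorithm: the `recommend` function and its "repeat the last-clicked topic" rule are assumptions made for the sketch, standing in for engagement-maximizing ranking.

```python
import random

TOPICS = ["news", "sports", "music", "fitness", "politics"]

def recommend(history, k=5):
    """Naive engagement-maximizer: with no history, explore broadly;
    otherwise repeat the topic the user clicked last."""
    if not history:
        return random.sample(TOPICS, k)
    return [history[-1]] * k  # the whole feed collapses to one topic

def simulate(steps=10, seed=0):
    """Simulate a user who always clicks the top recommendation."""
    random.seed(seed)
    history = []
    for _ in range(steps):
        feed = recommend(history)
        history.append(feed[0])  # user engages with the first item
    return history

# After one exploratory step, every later recommendation is the same topic,
# so the user's entire history narrows to a single subject.
distinct_topics_seen = len(set(simulate()))
```

Even this crude model shows the dynamic the article describes: a ranking rule that only optimizes for the most recent engagement signal drives topic diversity to one after a single click.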
The effects of ML bias are similarly frightening. Bias negatively affects left and right, liberals and conservatives alike; no group is spared. And the effects of extremist, polarizing and unhealthy online content on real-world human behavior are well documented.
So how do we encourage the good that ML and AI can provide while restraining potentially harmful effects?
Ban algorithmic recommendations for children and make them an “opt-in” choice for adults. We still understand too little about how algorithms push harmful content and the damage that causes. A more cautious approach, especially for vulnerable groups, should be a safety priority.
Bake routine, systematic testing into the development process. Independently certify new algorithmic recommendation systems before launch to confirm that their intent and their actual impact match. Scientific proof that a technical system does what it is designed to do, without negative side effects on users, is a standard consumer-protection guardrail in almost every other industry.
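One concrete form such a pre-launch check could take is an automated guardrail test run against a candidate system's output. This is a minimal sketch under stated assumptions: the `max_share` threshold and the idea of scoring a feed by how much one topic dominates it are hypothetical choices for illustration, not an established certification standard.

```python
from collections import Counter

def topic_concentration(recommendations):
    """Fraction of a feed occupied by its single most frequent topic."""
    counts = Counter(recommendations)
    return max(counts.values()) / len(recommendations)

def passes_diversity_guardrail(recommendations, max_share=0.5):
    """Fail the launch check if any one topic fills more than
    max_share of the recommended feed (threshold is illustrative)."""
    return topic_concentration(recommendations) <= max_share

# A varied feed passes; a feed fixated on one topic fails the check.
healthy_feed = ["news", "sports", "music", "news", "fitness", "politics"]
narrow_feed = ["diet", "diet", "diet", "diet", "news", "diet"]
```

In practice a certification suite would cover many such metrics (exposure to flagged content, demographic skew, drift over time), but the pattern is the same: a measurable threshold that a system must meet before it reaches users.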
Be transparent about the state of these systems and the risks they pose to users. Recommendation systems affect very large numbers of users, so it is imperative that the general public understands how they behave. Measurements, test results, trade-offs, inputs and observed impact should be made available to users, regulators and academics for analysis.
Pass constructive regulation. The impact is too large and the risk of harm too severe for algorithmic recommendation to remain an unregulated space. Just as we test new drugs, cars and nuclear power plants, we need sensible regulation that protects the general public from harm.
ML/AI is the most important technology of our generation, and no single organization or entity can guide its responsible development and ensure monitoring and measurement on its own. It will require collaboration among private companies, public entities, advocates and policymakers.
If everyone works together and starts taking proactive steps, then we can all benefit from the positive outcomes AI and ML afford while mitigating potential harm.