In which industry would issues with AI ethics be felt most and why?

Financial services, health care and social media are among the industries that members of Protocol's Braintrust believe could be hurt most by unethical uses of AI.
Co-founder and Co-Chairman at Two Sigma
The data-rich health care field is ripe for AI-driven innovation, though important ethical questions about the balance between privacy and social good remain unresolved. Look at the coronavirus pandemic, for example: We know that a vaccine and therapeutic treatments are the long-term solution. But in the near term, robust data and powerful AI algorithms can help policymakers design effective strategies for reopening society that properly balance public health with economic needs.
Having the right data and AI methods would give us a range of capabilities to fight this virus; contact tracing at scale is often discussed, but the potential goes much deeper. We could also more rapidly answer such key questions as: Who is most susceptible to the virus and needs extra protection? Which social distancing strategies are most effective? Who should be tested, how often, and how can the results be used most effectively to suppress transmission? Answering these and other crucial questions is possible with the right data and AI. But doing so will require striking an ethical policy balance between people's desire to keep their data private and researchers' need for that data to improve outcomes. It's a shame that our policy framework wasn't ready for the coronavirus; had it been, the benefits to our fight against the virus, and to health care as a whole, would have been immense.
Global Lead, Responsible AI at Accenture
The financial services industry has been one of the first to think seriously about how to address the ethical concerns raised by AI/ML. This is unsurprising, given the history of model risk management and model audits mandated by regulatory bodies around the world.
Financial markets have lived through the impact of malicious or unintentional misuse of models. The industry is also quite aware of the potential for unintentional harm due to bias in data and inappropriate model use. In the U.S., the SR 11-7 guidelines, issued in 2011, read as if an AI ethics expert had written them today. At a global level, the Basel III guidelines introduced frameworks that serve as a starting point for an ethical culture.
However, AI raises new questions. The use of black-box models, for example, makes it difficult to determine why decisions are being made. And paradoxically, traditional limitations on collecting data about protected groups can hinder our ability to test models for bias.
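That paradox can be made concrete. Below is a minimal Python sketch, not from the author, of a first-pass fairness audit: comparing a model's approval rates across protected groups. The audit log, group labels and numbers are all hypothetical. The check itself is trivial to run, but only if the protected attribute is available, which is exactly what the traditional limitations described above can rule out.

```python
# Illustrative sketch: auditing a model's decisions for group disparity.
# The records below are made up; in practice you'd audit real model outputs.

def approval_rate(decisions):
    """Share of applicants who were approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

# Hypothetical audit data: (model_decision, protected_group) pairs.
audit_log = [
    (1, "group_a"), (1, "group_a"), (0, "group_a"), (1, "group_a"),
    (1, "group_b"), (0, "group_b"), (0, "group_b"), (0, "group_b"),
]

# Group the decisions by protected attribute -- the step that becomes
# impossible if rules bar collecting or accessing that attribute.
by_group = {}
for decision, group in audit_log:
    by_group.setdefault(group, []).append(decision)

rates = {g: approval_rate(d) for g, d in by_group.items()}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Demographic parity difference: a common first-pass fairness metric.
gap = max(rates.values()) - min(rates.values())
print(f"approval-rate gap: {gap:.2f}")  # 0.50 -- a disparity to investigate
```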
While there is still room to improve, there is also a lot to learn from the financial services industry. Ethical issues with AI will confront all industries, but in waves, paced by the maturity of their AI, the riskiness of their applications, the probability of harm and the level of impact if an ethical breach occurs.
Senior Analyst at Altimeter
Think about all the ways you've interacted with technology today. You've probably checked email, Twitter and Instagram. Maybe you've searched on Google or bought something online or used a mobile app to deposit a check or swiped right on a dating site. Maybe you're wearing a fitness tracker or asked a smart speaker to play your favorite music. Maybe you applied for a loan. For a job.
Few of these experiences would have existed in this form 40 — even 20 — years ago. And all are driven in large part by AI. Today, what we call AI often determines whether you can unlock your phone or buy something securely, what you see online and what you don't. If you end up in the criminal justice system, it "predicts" your chances of re-offending and decides how long your sentence is or if you get parole. It requires a dizzying amount of data about you, and it "learns" over time.
Intelligent technologies have tremendous potential for business and society, but they're far from perfect. AI doesn't understand human context. Errors amplify bias and disproportionately affect vulnerable groups. Smart speakers may not understand accents, dialects, women. Face recognition is much less accurate for darker-skinned people. It's difficult if not impossible (at least today) to interpret why an algorithm came to a conclusion. There's often minimal transparency (if any) about where and how it is used. There are few if any legal protections. So when we think about the extent to which AI is already part of our lives and how that will only accelerate over time, there is no single industry to point to. The answer is: all of it.
Director of Engineering and Services at Excella
The health care industry is enormous: U.S. health care costs exceeded $3.65 trillion in 2018, more than 17% of GDP. That scale gives AI vast potential to improve the quality of life for millions of people.
We have already seen AI beginning to be implemented across many corners of this gigantic sector. The FDA has approved dozens of algorithms for medical imaging. AI is being used to optimize supply chains, manufacturing processes and back-office operations. And in February 2020, halicin became the first antibiotic discovered using AI techniques.
As AI helps develop new treatment protocols, medicines and medical devices, patients' needs will be a primary consideration. Consumer surveys on AI products show that people remain skeptical, even suspicious, of AI. Survey responses about self-driving cars, for example, make clear that consumers are especially hesitant to let AI make life-or-death decisions on their behalf. Informed consent is a central tenet of medical ethics, and communicating AI's role in a medical intervention will be a new and inevitable challenge for our health care system. Processes will need to be put in place to help patients understand the risks and benefits of these emerging technologies, and non-AI treatment options should still be offered as an alternative wherever possible.
Founder at Susan Liautaud & Associates Limited; Founder at The Ethics Incubator
AI's impact will be felt across virtually all industries. Stanford professor Andrew Ng (to cite just one of his many current and former titles) has described it as today's electricity. I am particularly focused on global public health and health care diagnostics, but AI will underpin both industry and government, so it's important not to limit the debate to industry, or to separate industry from government. It is already affecting consumer goods and the tech industry across the board.
One way to consider the question is to ask which issues have the greatest impact on society. By that measure, I would point to social media (from targeting to the spread of fake information) and health care (given the life-and-death stakes).
Kevin McAllister (@k__mcallister) is a Research Editor at Protocol, leading the development of Braintrust. Prior to joining the team, he was a rankings data reporter at The Wall Street Journal, where he oversaw structured data projects for the Journal's strategy team.