Not my job: AI researchers building surveillance tech and deepfakes resist ethical concerns

The computer vision research community is behind on AI ethics, but it’s not just a research problem. Practitioners say the ethics disconnect persists as young computer vision scientists make their way into the ranks of corporate AI.

Computer vision forms the foundation for AI-based technology products, offering immense potential for helping spot disease symptoms or ensuring that autonomous vehicles accurately recognize objects on the road. But the same techniques also form the underpinnings of tech with immense potential for personal harm and societal damage — from discriminatory facial recognition-fueled surveillance and disinformation-spreading deepfakes to controversial tech used to detect people’s emotional states.

The possible negative impacts of AI that computer vision researchers help bring to life are getting more attention, prompting AI businesses to emphasize the importance of ethical considerations in guiding how their products are built. Yet over the last several years, the computer vision community has been reluctant to connect the research advancements and cool math problem-solving achievements celebrated at one of its most prestigious annual conferences with the possible uses for that tech once it is baked into apps and software products.

This year, that began to change, albeit slowly.

For the first time, the Computer Vision and Pattern Recognition Conference — a global event that attracted companies including Amazon, Google, Microsoft and Tesla to recruit new AI talent this year — “strongly encouraged” researchers whose papers were accepted to the conference to include a discussion of potential negative societal impacts of their research in their submission forms.

“Because of the much more real impact that computer vision is playing in people's lives, we instituted this process for the authors to discuss both the limitations of their papers [and] also potential social limits,” said Dimitris Samaras, a program chair of this year’s CVPR conference, and a professor and director of the Computer Vision Lab at Stony Brook University.

“It’s mostly so that people – authors – are forced to think and frame their work in a way that impacts are recognized as early as possible, and if necessary, [mitigated],” Samaras told Protocol.

Not my job

The policy shift ruffled some feathers. Academics are “super aware” of the potential impact of their research on the real world, said one conference attendee who asked not to be named. But researchers cherish their academic freedom, he said, and asking them to predict applications for research that could be in very early stages and years away from viability in products restricts that independence.

“They are not good at telling you what the applications of their research are. It’s not their job,” he said.

“That is exactly what pisses me off,” said Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute, and a researcher with a Ph.D. in computer vision. “[Computer vision researchers] have convinced themselves that it’s not their job.”

While presenting a workshop on fairness, accountability, transparency and ethics in computer vision at CVPR in 2020, Gebru said she experienced what she considered a general disregard for ethical considerations and the human rights impacts of computer vision-based technologies used for border surveillance, autonomous and drone warfare and law enforcement.

Gebru told Protocol she is now “done” with CVPR and has soured on the computer vision field because of the “inability for them to be introspective.”

“We personally believe it is the researcher’s job,” Samaras said, regarding consideration of computer vision’s ethical implications.

This isn’t just a research problem though. Some AI practitioners say that the ethics disconnect continues past the research phase as people like the young computer scientists vying for tech jobs at CVPR make their way into the ranks of corporate AI. There, dismissive attitudes toward ethical considerations can hinder business goals to operationalize ethics principles promised in splashy mission statements and press releases.

“I think that was one of my frustration points in my tech career,” said Navrina Singh, a computer engineer and founder and CEO of Credo AI, which sells software for keeping track of data governance and audit reviews in the machine learning development process.

“As technologists, we were incentivized to build the highest-performing systems and put them out on the market quickly to get business outcomes,” said Singh. “And anytime we would talk about compliance and governance, the technologists were like, ‘Oh, this is not my problem. That's not my space. That's not my incentive structure.’”

Avoiding radical change

CVPR attendance has doubled over the past five years; this year’s show attracted around 10,000 attendees, over half of whom participated in person, according to conference organizers.

The 2022 CVPR conference was held at the convention center in New Orleans, where a growing number of surveillance cameras installed throughout the city are plugged into a real-time law enforcement crime center. The city is currently considering lifting a ban on facial recognition and other surveillance tech established just two years ago.

In its new ethics guidelines, CVPR organizers listed some examples of negative impacts of computer vision. “Could it be used to collect or analyze bulk surveillance data to predict immigration status or other protected categories, or be used in any kind of criminal profiling?” they asked. “Could it be used to impersonate public figures to influence political processes, or as a tool of hate speech or abuse?”

Left: Computer vision researchers attend a workshop held at CVPR in New Orleans. Right: Amazon Science recruited interns at the CVPR conference, where the company held workshops on its Amazon Go computer vision tech. Photos: Kate Kaye/Protocol

Some researchers who presented their work at the conference acknowledged the possible downsides. In a paper about high-resolution face-swapping via latent semantics, researchers wrote, “Although not the purpose of this work, realistic face swapping can potentially be misused for deepfakes-related applications.” To limit the deepfake potential of their research, the authors proposed restricting how the model is released for use and developing deepfake-detection techniques.

However, because CVPR merely encouraged researchers to include an impact assessment, and did not require that the information appear in the published papers available for viewing outside the conference review process, many papers make no mention of the ethical implications of the work. For example, another publicly available research paper accepted at this year’s conference, detailing region-aware face-swapping — a technique that can be used to enable deepfakes — includes no social impact statement.

In fact, researchers were only asked to tell reviewers whether or not their work might have a social impact. “You could say that it's a pure math paper [so] there isn't social impact. If reviewers agree with you, there's nothing to say,” Samaras said.

Some researchers bristle at the increased concern around ethics, in part because they are producing incremental work that could have many future applications, just like any tool might.

“It’s not the techniques that are bad; it’s the way you use it. Fire could be bad or good depending on what you are doing with it,” said François Brémond, a cognitive and computer vision researcher and research director at Inria, the French national research institute for digital science and technology, in an interview at the CVPR conference.

Brémond suggested there is too much focus on potentially negative uses of some computer vision research, particularly when it is designed to help people. His current work involves the use of computer vision to detect key points on faces to gauge subtle changes in expressions of autistic individuals or people with Alzheimer’s. The early-stage research could help decipher signs of internal changes or symptoms and help health care workers better understand their patients, he said.
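
To make the technique concrete: the keypoint detection underlying this kind of work is widely available in open-source form. The sketch below, a rough illustration rather than Brémond’s actual pipeline, uses Google’s MediaPipe library to extract facial landmarks from a single frame; tracking how those points shift across video frames is what lets downstream models infer subtle expression changes. The input file name is a placeholder.

```python
import cv2
import mediapipe as mp

# MediaPipe FaceMesh estimates 468 3D landmarks (keypoints) per detected face.
mp_face_mesh = mp.solutions.face_mesh

with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as face_mesh:
    image = cv2.imread("patient_frame.jpg")  # placeholder input frame
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        # Each landmark has normalized x, y coordinates and a relative depth z;
        # expression analysis compares these positions over time.
        print(f"Detected {len(landmarks)} facial keypoints")
```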

Controversy over facial expression detection and analysis software led Microsoft to pull it from general use, but retain it in an app used to help people with vision impairment.

Brémond said he saw no reason to include a social impact section in a paper he presented at CVPR because it addressed generalized video action-detection research rather than something directly related to a specific use. The research had no “direct, obvious link to a negative social impact,” Brémond wrote in an email last week. He explained that he is already required to provide information to Inria’s administration regarding the ethical issues associated with his research.

It’s no wonder CVPR program chairs — including Samaras and Stefan Roth, a computer science professor in the Visual Inference Lab at Germany’s Technical University of Darmstadt — aren’t pushing too hard.

“Our decision to make that gradual was a conscious decision,” said Roth. “The community as a whole is not at this point yet. If we make a very radical change, then the reviewers will not really know how to basically take that into account in the review process,” he said, referencing those who review papers submitted to the conference.

“We were trying to break a little bit of ground in that direction. And it's certainly not going to be the last version of that for CVPR,” Roth said.

Changing hearts and minds may come, but slowly, said Olga Russakovsky, an assistant professor in Princeton University’s department of computer science, during an interview at the conference where she gave a presentation on fairness in visual recognition.

“Most folks here are trained as computer scientists, and computer science training does not have an ethics component,” she said. “It evokes this visceral reaction of, ‘Oh, I don't know ethics. And I don't know what that means.’”

A tale of two conferences

The vast majority of tutorials, workshops and research papers presented at CVPR made little or no mention of ethical considerations. Instead, trending subjects included neural rendering and the use of multimodal data — data that comes in a variety of modes, such as text, images and videos — to train large machine learning models.

One particularly hot topic this year: CLIP, or contrastive language-image pre-training, a neural network from OpenAI that learns visual concepts from natural language supervision.

“It's getting much more on the radar of a lot of people,” said Samaras, noting that he counted 20 papers presented at CVPR that incorporated CLIP.
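
CLIP’s weights and inference code are open source, which helps explain how quickly it spread through the research community. Below is a minimal sketch of the zero-shot image-text matching the model performs, based on OpenAI’s published Python package; the image file and candidate captions are illustrative placeholders, not anything drawn from the conference papers.

```python
import torch
import clip  # OpenAI's open-source package: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Score one image against free-form text labels, with no task-specific training.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder image
text = clip.tokenize(["a photo of a dog", "a photo of a cat"]).to(device)  # placeholder captions

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(probs)  # relative match between the image and each caption
```

Because the captions are free-form text matched against associations learned from web-scraped, largely English-language training data, whatever biases that data carries can surface directly in the scores — which is the concern researchers elsewhere were raising that same week.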

CLIP happened to be a topic of conversation at another AI conference, in Seoul, during the same week in late June when CVPR was held. But in this case, CLIP was not celebrated.

“CLIP is an English-language model trained on internet content gathered based on data from an American website (Wikipedia), and our results indicate that CLIP reflects the biases of the language and society which produced the data on which it was trained,” researchers wrote in a paper they presented at FAccT. The growing global conference is dedicated to research focused on fairness, accountability and transparency in sociotechnical systems such as AI.

While FAccT surely reached its endemic audience of AI ethics researchers, more than 2,000 people from the computer vision community who may have learned from that ethics-focused conference — including 460 from South Korea — were thousands of miles away in New Orleans at CVPR, advancing their craft with relatively minimal concern for the societal implications of their work. If anything, the physical separation of the simultaneous events symbolized the disconnect between the computer scientists pushing computer vision ahead and the researchers hoping to infuse it with ethical considerations.

But FAccT organizers hope to spread their message beyond the ethics choir, said Alice Xiang, a general co-chair of this year’s FAccT conference and head of Sony Group’s AI ethics office. “One of the goals we had as organizers of that is to try to make it as much of a big tent as possible. And that is something that we do sometimes worry about: whether practitioners who actually develop AI technologies might feel that this is just a conversation for ethicists.”

But cross-pollination could be a long time coming, Xiang said.

“We're still at a point in AI ethics where it's very hard for us to properly assess and mitigate ethics issues without the partnership of folks who are intimately involved in developing this technology,” she said. “We still have a lot of work to do in that intersection in terms of bringing folks along and making them realize some of these issues.”
