Protocol | Policy

Transparency can help fix social media — if anyone can define it

The latest buzzword in tech policy promises to give users more insight into and power over social media services, but mainly signals just how much more we need to figure out.


Social media companies, lawmakers and tech skeptics all say they want more visibility into how the sites work.

Image: dole777/Unsplash; Protocol

It's the one and only thing nearly everyone in tech and tech policy can agree on. Facebook and Twitter want it, as does the Facebook Oversight Board. Whistleblower Frances Haugen suggested it to Congress, and several lawmakers who heard her testimony agreed. Even the FTC is on board.

The vogue in tech policy is "transparency," the latest buzzword for addressing concerns about social media's reach, breadth and social effects. Companies, academics, regulators and lawmakers on both sides of the aisle all embrace transparency as a cure-all, or at least a necessary first step. But that agreement obscures a deeper problem: The various camps have widely differing notions of both what the vague term actually means and what the public should do with any insights increased transparency might yield.

The idea that we should have more visibility into how companies such as Facebook and Google make their decisions has gone through periods of popularity before, especially after the Snowden revelations about government surveillance became public in 2013. Haugen's testimony that Facebook suppressed internal findings about its effect on a range of real-world problems, however, has put the term back in the spotlight — even as Congress continues its years-long struggle to agree on more comprehensive regulation in areas such as privacy.

The case for transparency basically goes like this: Powerful institutions, from countries to giant companies, should be held accountable as they deal with citizens and customers, particularly around individuals' ability to express themselves. Transparency seems like it would provide that accountability, as well as create a path toward redress when these mega-actors do something untoward. After all, social media services can face legal consequences from government regulators like the FTC or SEC for misleading consumers or investors. While enforcement has often been uneven, many tech skeptics say the possibility of such consequences is particularly vital as opaque algorithms drive more of our digital lives. Increased transparency could also help clarify which online problems are most urgent and how they can be fixed.

"Transparency becomes a building block under which you enable people to understand what's happening, to build trust," Nick Pickles, Twitter's senior director for global public policy strategy, development and partnerships, told Protocol. "You give people an understanding of what's happening on a service so that they can make decisions about appropriate policies."

Twitter recently issued what it called "guiding principles for regulation," which addressed issues like competition but also said users should be able to understand platforms' rules and that lawmakers should guide social media by providing "suitable flexibility for valuable disclosures," including to researchers.

Several major companies have already enacted reforms meant to increase transparency, after a fashion. Facebook discloses aggregate figures about takedowns of some harmful content and bot networks. It also has called for the reform of website liability shields, saying such reform would incentivize transparency. Facebook, Google and other sites allow users to see why they were served a particular ad. Twitter's Transparency Center puts out an array of data, including aggregate information on COVID-19 misinformation. And all major social media companies publish terms of service, often including separate community standards documents.

Yet Facebook's Oversight Board, which the company set up but which operates independently, complained about Facebook's "opaque rules" in the wake of Haugen's disclosures. In a blog post, the board said users need access to more information about which specific rules they may have broken and about takedown requests from governments around the world. It added that the general public should have access to terms of service accurately translated into more languages, as well as insight into the "whitelisting" of prominent and newsworthy accounts that exempts them from certain content moderation decisions.

The board titled its post: "To treat users fairly, Facebook must commit to transparency."

Nate Persily, co-director of the Stanford Cyber Policy Center, said being able to check the platforms' work would mean they can't lie to the public, and could prompt them to pull back from actions they felt the need to hide.

"If you force the platforms to open themselves to outside review, it will change their behavior," said Persily, who has proposed legislation to let scholars access information companies such as Facebook hold. "They will know they're being watched."

Persily himself resigned last year from leading an independent effort to get Facebook to open up more data to researchers; the company eventually did share data, but what it provided was flawed.

Sara Collins, policy counsel at tech policy advocacy group Public Knowledge, argued, however, that while offering the public a look into how Facebook, Twitter and other companies function can be useful for studying the sites, it may do little to address individual users' concerns over issues like extremism online.

"I don't know that that meaningfully changes behaviors," Collins said. "I don't know that it reduces harm in any significant way, and it sure doesn't incentivize the companies to change anything about what they're doing."

Collins compared transparency measures to privacy notices, which users rarely read and even more rarely understand. With few choices, users usually just click whatever they need to in order to install an app or use a service. Collins noted, however, that "transparency" goes beyond how social media companies handle individuals' posts.

Lawmakers, researchers and advocates often push for deeper information about the advertising that generates revenue and the algorithms that structure everything from users' feeds to copyright checks. One bill from earlier this year, for instance, would require online platforms "to maintain detailed records describing their algorithmic process for review" by the FTC. Another would force large platforms to give researchers and the FTC access to more detailed ad libraries than companies currently put out, including a description of the target audience and information about how many people interacted with each ad.

Democratic Sen. Richard Blumenthal, who led the hearings where Haugen testified, echoed her suggestion that policymakers should have access to the kinds of internal research she disclosed, which looked into the effects of Instagram on young users, among other issues. At the hearing, Blumenthal said he planned to pursue "compelling recommendations about requiring disclosures of research and independent reviews of these platforms' algorithms."

All the buzz about transparency reflects several concerns bubbling up at once, with some as small as a single post taken down for hate speech and others as momentous as how algorithms might drive political polarization and violence around the world.

"All of these conversations are kind of happening in parallel to each other," said Caitlin Vogus, deputy director of the Free Expression Project at the Center for Democracy & Technology. "They're all different strains of transparency, but the devil is in the details."

Many of the definitions of transparency could require companies to hand over vast amounts of data, some of it proprietary. Twitter allows researchers to gather huge datasets and has considered a shift to open-source ranking algorithms, but it has said privacy safeguards are necessary in any transparency offering and that burdensome disclosure mandates could hurt small businesses.

In some cases, social media sites have already balked at disclosing more, particularly when the information ends up in the hands of people with little incentive to portray the companies in a positive light.

Over the summer, for instance, Facebook suspended the accounts of New York University researchers who had been studying disinformation and political ads on the platform. The move prompted accusations that the company was trying to quash unflattering conclusions. Facebook claimed its $5 billion settlement with the FTC, reached over privacy violations in the wake of the Cambridge Analytica scandal, required it to block the research.

The FTC eventually weighed in, blasting Facebook's rationale and siding with the academics in favor of more transparency.

"The consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest," Samuel Levine, the acting director of the bureau of consumer protection, wrote in a letter to Mark Zuckerberg. "Indeed, the FTC supports efforts to shed light on opaque business practices, especially around surveillance-based advertising."

As the FTC suggested, researcher access to companies' inner workings has become the version of transparency many tech skeptics hope for. They say that groups of specially vetted academics — or even a new U.S. government regulator — could bring expertise to examining massive, complex algorithms or to working on cross-platform problems like the spread of disinformation. Limiting access to researchers or the government could also ease concerns about the privacy risks of so much data and analysis circulating in the world.

Yet even when companies offer more data to researchers, visibility can only go so far toward "solving" social media's problems. Advocates for transparency say the clarity can help hold companies to account, but it doesn't replace further action, such as a federal privacy law.

"We need this transparency so lawmakers actually know what's going on," Vogus said.

Issie Lapowsky contributed reporting.
