It's the one and only thing nearly everyone in tech and tech policy can agree on. Facebook and Twitter want it, as does the Facebook Oversight Board. Whistleblower Frances Haugen suggested it to Congress, and several lawmakers who heard her testimony agreed. Even the FTC is on board.
The vogue in tech policy is "transparency," the latest buzzword for addressing concerns about social media's reach, breadth and social effects. Companies, academics, regulators and lawmakers on both sides of the aisle all embrace transparency as a cure-all, or at least a necessary first step. But that agreement obscures a deeper problem: The various camps have widely differing notions of what the vague term actually means, and of what the public should do with any insights increased transparency might yield.
The idea that we should have more visibility into how companies such as Facebook and Google make their decisions has gone through periods of popularity before, especially after the Snowden revelations about government surveillance became public in 2013. Haugen's testimony that Facebook suppressed conclusions about its effect on a range of real-world problems, however, has put the term back in the spotlight, even as Congress continues its years-long struggle to agree on more comprehensive regulation, such as privacy legislation.
The case for transparency basically goes like this: Powerful institutions, from countries to giant companies, should be held accountable as they deal with citizens and customers, particularly around individuals' ability to express themselves. Transparency seems likely to further that goal, and to create a path toward redress when these mega-actors do something untoward. After all, social media services can face legal consequences from government regulators like the FTC or SEC for misleading consumers or investors. While enforcement has often been uneven, many tech skeptics say the possibility of these consequences is particularly vital as opaque algorithms drive more of our digital lives. Increased transparency potentially could clarify which online problems are most urgent and how they can be fixed.
"Transparency becomes a building block under which you enable people to understand what's happening, to build trust," Nick Pickles, Twitter's senior director for global public policy strategy, development and partnerships, told Protocol. "You give people an understanding of what's happening on a service so that they can make decisions about appropriate policies."
Twitter recently issued what it called "guiding principles for regulation," which addressed issues like competition but said users should be able to understand platforms' rules and lawmakers should guide social media by providing "suitable flexibility for valuable disclosures," including to researchers.
Several major companies have already enacted reforms meant to increase transparency, after a fashion. Facebook discloses aggregate figures about takedowns of some harmful content and bot networks. It also has called for the reform of website liability shields, saying such reform would incentivize transparency. Facebook, Google and other sites allow users to see why they were served a particular ad. Twitter's Transparency Center puts out an array of data, including aggregate information on COVID-19 misinformation. And all major social media companies publish terms of service, often including separate community standards documents.
Yet Facebook's Oversight Board, which the company set up but which operates independently, complained about Facebook's "opaque rules" in the wake of Haugen's disclosures. The board in a blog post discussed users' need to access more information about what specific rules they may have broken and takedown requests by international governments. The board also said the general public should have access to terms of service accurately translated into more languages, as well as insight into the "whitelisting" of prominent and newsworthy accounts that exempts them from certain content moderation decisions.
The board titled its post: "To treat users fairly, Facebook must commit to transparency."
Nate Persily, co-director of the Stanford Cyber Policy Center, said being able to check the platforms' work would mean they can't lie to the public, and could prompt them to pull back from actions they felt the need to hide.
"If you force the platforms to open themselves to outside review, it will change their behavior," said Persily, who has proposed legislation to let scholars access information companies such as Facebook hold. "They will know they're being watched."
Persily resigned last year from leading an independent effort to get Facebook to open up more data to researchers. The company eventually released data, but it proved flawed.
Sara Collins, policy counsel at tech policy advocacy group Public Knowledge, argued, however, that while offering the public a look into how Facebook, Twitter and other companies function can be useful for studying the sites, it may do little to combat individual users' concerns over issues like extremism online.
"I don't know that that meaningfully changes behaviors," Collins said. "I don't know that it reduces harm in any significant way, and it sure doesn't incentivize the companies to change anything about what they're doing."
Collins compared transparency measures to privacy notices, which users rarely read and even more rarely understand. With few choices, users usually just click whatever they need to in order to install an app or use a service. Collins noted, however, "transparency" goes beyond how social media companies handle individuals' posts.
Lawmakers, researchers and advocates often push for deeper information about the advertising that generates revenue and the algorithms that structure everything from users' feeds to checking for copyrighted material. One bill from earlier this year, for instance, would require online platforms "to maintain detailed records describing their algorithmic process for review" by the FTC. Another bill would force large platforms to give researchers and the FTC access to more detailed ad libraries than companies currently put out, including a description of the target audience and information about how many people interacted with the ad.
Democratic Sen. Richard Blumenthal, who led the hearings where Haugen testified, echoed her suggestion that policymakers should have access to the kinds of internal research she disclosed, which looked into the effects of Instagram on young users, among other issues. At the hearing, Blumenthal said he planned to pursue "compelling recommendations about requiring disclosures of research and independent reviews of these platforms' algorithms."
All the buzz about transparency reflects several concerns bubbling up at once, with some as small as a single post taken down for hate speech and others as momentous as how algorithms might drive political polarization and violence around the world.
"All of these conversations are kind of happening in parallel to each other," said Caitlin Vogus, deputy director of the Free Expression Project at the Center for Democracy & Technology. "They're all different strains of transparency, but the devil is in the details."
Many of the definitions of transparency could require companies to hand over vast amounts of data, some of it proprietary. Twitter allows researchers to gather huge datasets and has considered a shift to open-source ranking algorithms, but said privacy safeguards are necessary in any transparency offering and burdensome disclosure mandates could hurt small businesses.
In some cases, social media sites have already balked at disclosing more, particularly when the information ends up in the hands of people with little incentive to portray the companies in a positive light.
Over the summer, for instance, Facebook suspended the accounts of New York University researchers who had been studying disinformation and political ads on the platform. The move prompted accusations that the company was trying to quash unflattering conclusions. Facebook claimed its $5 billion settlement with the FTC, reached over privacy violations in the wake of the Cambridge Analytica scandal, required it to block the research.
The FTC eventually weighed in, blasting Facebook's rationale and siding with the academics in favor of more transparency.
"The consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest," Samuel Levine, the acting director of the bureau of consumer protection, wrote in a letter to Mark Zuckerberg. "Indeed, the FTC supports efforts to shed light on opaque business practices, especially around surveillance-based advertising."
As the FTC suggested, researcher access to the inner workings of companies has become the version of transparency that many tech skeptics hope for. They say that groups of specially vetted academics, or even a new U.S. government regulator, could bring expertise to examining massive, complex algorithms or working on cross-platform problems like the spread of disinformation. Limiting access to researchers or the government could also lessen privacy concerns about so much data and analysis circulating publicly.
Yet even when companies offer more data to researchers, penetrating visibility can only go so far toward "solving" social media's problems. Rather, advocates say transparency can help hold companies to account, but it doesn't replace further action, such as a federal privacy law.
"We need this transparency so lawmakers actually know what's going on," Vogus said.
Issie Lapowsky contributed reporting.