Policy

Time-ordered feeds won't fix Facebook because we're the problem

Whistleblower Frances Haugen advocated for returning to chronological feeds as a way to escape extreme and divisive content on social media. Yet researchers say the change might not be up to the task of pushing back against the kinds of content users seek.


Chronological feeds are among Frances Haugen's proposals.

Photo illustration: Igor Golovniov/Getty Images

Combating misinformation, division and abuse on Facebook and Instagram may be as simple as going back to the services' roots and showing posts in the order they were made. At least, that's what whistleblower Frances Haugen suggested in congressional testimony last week.

The former Facebook product manager made a splash by using the company's own internal research to lend support to the longtime criticism that, in order to hold and monetize user attention, the services' algorithms prioritize shocking and extreme content. Among her suggested fixes was doing away with this method of ordering content, and instead presenting chronologically whatever has popped up most recently in users' networks.

"We don't want computers deciding what we focus on," Haugen said.

Researchers who study how controversial content spreads on social media say such a move could lessen harm without needing to wait for a divided Congress to find consensus on a regulatory approach. Yet the experts also caution that chronological feeds come with their own problems and likely don't go nearly far enough to tackle issues that are endemic to Facebook and other services.

"We shouldn't kid ourselves that it solves everything," said Callum Hood, head of research at the Center for Countering Digital Hate, which studies hate and disinformation online.

Looking back

Tweaking the programming behind how Facebook and Instagram curate the posts that appear in user feeds at first seems to be an elegant solution to a widespread problem. Essentially, the proposal would add a little bit of friction to the interactions that social media has until now constantly worked to speed up — leaving Facebook still profitable, if not quite "ludicrously" so, as Haugen put it.
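
At its simplest, the contrast Haugen is drawing can be sketched in a few lines of code. The snippet below is only an illustration, not Facebook's actual ranking logic; the post fields, weights and account names are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    posted_at: float   # Unix timestamp
    reactions: int
    comments: int
    shares: int

# Hypothetical weights: engagement ranking favors posts that provoke
# reactions, comments and shares, regardless of when they were posted.
def engagement_score(post: Post) -> float:
    return post.reactions + 2 * post.comments + 3 * post.shares

def engagement_ranked(feed: list[Post]) -> list[Post]:
    return sorted(feed, key=engagement_score, reverse=True)

# A chronological feed ignores engagement entirely: newest first.
def chronological(feed: list[Post]) -> list[Post]:
    return sorted(feed, key=lambda post: post.posted_at, reverse=True)

feed = [
    Post("calm_friend",  posted_at=1_000_300, reactions=4,   comments=1,   shares=0),
    Post("outrage_page", posted_at=1_000_100, reactions=900, comments=350, shares=200),
    Post("local_club",   posted_at=1_000_200, reactions=12,  comments=3,   shares=1),
]

print([p.author for p in engagement_ranked(feed)])  # outrage_page rises to the top
print([p.author for p in chronological(feed)])      # calm_friend, posted last, comes first
```

The friction comes from that single change of sort key: a post can no longer buy placement with reactions, only with recency.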

Unfortunately, a time-ordered feed might not actually tamp down on all of the problems Haugen's testimony zeroed in on, experts caution, especially misinformation or the kinds of content that threaten the mental health of some young Instagram users.

Users looking at chronological feeds could still see plenty of falsehoods about vaccines and elections, as well as the influencer posts that can shred young users' self-esteem. In fact, health and civic misinformation appear to have spread widely even though, as Haugen contended, Facebook at times last year made that kind of content ineligible for engagement-based promotion.

"If you went and followed one anti-vaccine page on Facebook, it would recommend more under its Related Pages feature," said Hood, whose organization studied the spread of misinformation about COVID-19 and vaccines. "We see the Instagram app does similar."

Facebook, Hood added, would need to make "pretty wide-ranging changes" to stop feeding users problematic content, particularly when they've already expressed interest in it. Facebook users might also still see misinformation or divisive posts arising organically from friends and relatives, or bouncing across various services like Twitter and TikTok.

"The toxicity is now embedded in the network structure itself," said Kate Starbird, a professor at the University of Washington's Department of Human Centered Design & Engineering.

Time-ordered feeds have their own drawbacks, too: They tend to reward people and brands that do little more than post frequently, even with the spam controls Haugen suggested in place.

"In a purely chronological [feed], to get your stuff to the top, you just put out more and more crap," Starbird said. "It really is going to reward volume."

Facebook launched with a chronological feed but abandoned it in steps over the years. The company specifically cited a desire to provide more relevant content to users as it stamped out most time-ordered curation in 2014. The changes prompted some user anger at the time, and some users still try to filter Facebook by "most recent," but for the most part algorithmic curation has become the industry standard.

The biggest exception among major platforms is Twitter, which moved toward an engagement-based feed before making chronological feeds an easy option again. On Tuesday, Twitter said it's testing a feature that allows users to swipe more easily between the two modes. A company spokesman wouldn't say which option is more popular, but some of the researchers said they suspect users prefer the engagement-based default.

'Missing piece'

The limited benefit of a chronological feed is one reason that social media experts, Haugen and some lawmakers have called for additional transparency into how certain kinds of content perform on Facebook and how they affect users.

The company does disclose aggregate figures about takedowns of some harmful content and bot networks. And users can see for themselves how many reactions, shares and comments a particular post gets within their network. What's missing, researchers say, is information about what kinds of misinformation persuade those who read them rather than just prompting outrage, what content or recommendations lead users to extreme groups, which low-quality accounts amplify the most problematic content, and how widely most content gets seen (despite Facebook's attempt to share this information about top posts).

For instance, Renee DiResta, technical research manager at the Stanford Internet Observatory, said that when she helped the Senate Intelligence Committee prepare its report on Russia's attempts to sway the 2016 election, she could see how many comments a post aimed at suppressing votes by Black people received.

She said, "What I had no visibility into was, were those people saying, 'Right on, we're not going to vote'? Or were they saying, 'Screw off, of course we're going to vote'? And that is such a key missing piece of the puzzle."

Even transparency is a fraught topic. There's ongoing debate about what kinds of information outsiders should see, if researchers or the media should get access to more restricted information, and how to protect users' privacy.

Over the summer, for instance, Facebook suspended the accounts of New York University researchers who had been studying disinformation and political ads on Facebook. The move prompted accusations that the company was trying to squash unflattering conclusions, while Facebook said its massive settlement with the U.S. Federal Trade Commission for privacy violations required the decision. The FTC eventually weighed in too, blasting the rationale and siding with the researchers.

Beyond Facebook

Facebook is touting yet another set of changes as it deals with the latest firestorm, although the updates offer little to those who want to see more transparency or feeds that move away from engaging-but-iffy content. The company's vice president of global affairs, Nick Clegg, confirmed over the weekend that the company will put in place optional controls for parents, try to "nudge" teens away from harmful content they keep returning to and prompt young users to take breaks.

Clegg also said that his company's algorithms "should be held to account, if necessary by regulation, so that people can match what our systems say they're supposed to do from what actually happens." He reiterated that the company backs certain changes to Section 230, which shields websites from legal liability over user posts.

Clegg's list does echo some of Haugen's other calls. She suggested the creation of a new regulatory agency to police digital businesses, as well as changes to Section 230 that would make social media companies take more legal responsibility for content they boost algorithmically.

The effects of those changes would go far beyond Facebook, although lawmakers and Facebook critics might not mind. Services such as YouTube, TikTok and Reddit also focus primarily on engagement, even though it sometimes results in extreme politics or disinformation. Democratic Sen. Richard Blumenthal, who led Haugen's hearing, called for some of these companies to appear for future hearings.

Experts said that similar technologies, and incentives, drive content across social media services because the companies ultimately are delivering what users want — and no amount of changing an algorithm can address the human side of the problem.

"Even if we're making it harder to find things, there is still demand," DiResta said.
