Politics

Facebook is preparing for election week chaos

The company is planning for a drawn-out process with plenty of people declaring premature victory.


On Wednesday, Facebook explained how it plans to approach what is bound to be a historically chaotic election in the United States.

Photo: Drew Angerer/Getty Images

Facebook has often been accused of failing to anticipate problems of grave international importance until it's too late. When it comes to election night 2020, it's at least trying not to repeat that mistake.

On Wednesday, the company explained how it plans to approach what is bound to be a historically chaotic election in the United States. A deluge of mail-in ballots is expected to delay the announcement of a winner, and the president of the United States is already indicating he may not accept the results anyway. Facebook's plans to address this include adding notifications at the top of users' feeds and imposing a quiet period on political ads after the election.

"This is a very unique election," Guy Rosen, Facebook's vice president of integrity, said on a call with reporters. "It's really important for us to make sure that we're putting all eyes on this."

To surface reliable information around the election, Facebook will rely heavily on its Voting Information Center, a relatively new section of the platform that includes information about the election results. When polls close, the company said it will run notifications at the top of both Instagram and Facebook, directing people to the center. If any presidential candidate declares victory before "major media outlets" do, however, Facebook plans to label their posts, specifying that votes are still being counted.

If the opposite happens, and media outlets declare a victory, but the outcome is contested by one of the candidates, the company plans to run notifications at the top of Facebook and Instagram with the winning candidate's name and a link to the Voting Information Center. That label will also be applied to posts from presidential candidates.

To surface reliable information around the election, Facebook will rely heavily on its Voting Information Center. Image: Facebook

In addition to these warnings, the company announced it will ban all ads related to social issues, politics and elections after polls close on Nov. 3. Facebook anticipates the ban will last about a week, though that timeline is subject to change. According to Rosen, the goal of that change is to "reduce opportunities for any confusion or abuse."

Previously, Facebook said it would prohibit any new political or issue ads in the week leading up to the election, a tweak some viewed as too minor to matter. The wholesale ban after Election Day will pack more of a punch, but it could also end up privileging candidates who have a bigger organic following, like President Trump does.

Though it's not wont to admit it, it's clear that much of Facebook's planning has to do with concerns over President Trump's behavior both before and after the election. During last week's debate, the president called for the white supremacist group the Proud Boys to "stand by," which the group took as a call to arms. Now, Facebook says that in addition to its existing policies prohibiting calls to bring weapons to polling places or to interfere with voting, it will also remove posts that call for poll watching in militarized language, or in a way that would intimidate or attempt to display power over election officials and voters.

"Recently we've seen speech that has been more implicit in a number of different areas with our policies," Monika Bickert, Facebook's developer of policy enforcement, said in response to a question about President Trump's comments. "The civil rights auditors and the civil rights community members that we talk to on a regular basis have really helped us track some of these trends. It's a very adversarial space, of course, and we anticipate that, as we have updated these lines, those who are seeking to get around them will try to use new language. So that's something we try to stay on top of."

Bickert said this policy will not be retroactive, which means it won't be applied to a recent post by the Trump campaign, in which Donald Trump Jr. calls for an "ARMY FOR TRUMP's election security operation!" If the campaign were to post a similar message going forward, however, Bickert said it would be removed.

Facebook's awareness of the role it plays in elections has evolved dramatically since the 2016 race, when the company's primary goal was to sell as many political ads as it could. Since then, it's curbed political advertising somewhat by forcing would-be advertisers to go through a verification process and created a library of ads that the company says more than 2 million people visit every month. Now, rather than touting the amount of money it's made from political ads, Facebook is taking a victory lap for how many ads it's blocked — 2.2 million — for failing to go through the verification process.

The company now meets often with its counterparts in tech, voting rights experts and government officials to game out potential threats to elections. It monitors viral posts that risk violating its policies, even if they haven't been reported by a user or flagged by its automated systems. And it regularly finds and removes the kind of coordinated campaigns that Russian trolls used to interfere with the 2016 election — an issue that was scarcely on the company's radar at the time. This, the company's head of cybersecurity policy, Nathaniel Gleicher, said on the call, is "one of the biggest differences between the 2016 election and today."

"In 2016, Russian actors' deception campaigns were exposed in the months after the election," Gleicher said. "Today they're getting caught and taken down months, and in some cases, more than a year in advance." So far, Facebook has removed more than 100 such campaigns.

Facebook has also undertaken a voter registration drive that the company says has helped some 2.5 million people register to vote so far this year.

But in so many other ways, critics say the company is still doing too little, too late. The labels it applies to posts questioning the legitimacy of voting and mail-in ballots have been panned as vague and, at times, even confusing. (Recently, the company added a new label to one of President Trump's posts about mail-in voting, more directly contradicting his claims.) It's refused to fact-check politicians' false statements, except on a narrow set of issues. And it's slow-walked its ban on dangerous communities like QAnon, a conspiracy theory group that has gained traction on social media and is now working its way into mainstream politics. In August, the company announced it would ban violent content associated with the group. But it wasn't until this week that Facebook announced it would ban QAnon content altogether.

Even as it addresses issues it's seen pop up in the past, it's facing brand new challenges that may prove just as complex to solve. Gleicher, for one, warned that while the company has gotten better at detecting coordinated inauthentic activity, those same bad actors are now creating their own media outlets, hiring contributors and attempting to feed their stories to "unwitting news organizations," a technique he referred to as "perception hacking."

"As it gets harder and harder to run large scale social media operation campaigns because they're getting caught, they're trying instead to play on our fears," Gleicher said. "Why run a large campaign that will get you caught when you can try and trick people into thinking such a campaign is happening?"

What seems clear is that no matter how hard Facebook tries to prepare and correct the record around the election, this year, the company will be up against a sitting president with a giant microphone and a propensity for spouting misinformation. President Trump and his fellow conservatives have already spent years convincing their supporters that Big Tech is biased against them. Slapping a warning label on his potential declaration of victory hardly seems likely to convince them otherwise.

Enterprise

Why foundation models in AI need to be released responsibly

Foundation models like GPT-3 and DALL-E are changing AI forever. We urgently need to develop community norms that guarantee research access and help guide the future of AI responsibly.

Releasing new foundation models doesn’t have to be an all-or-nothing proposition.

Illustration: sorbetto/DigitalVision Vectors


Humans are not very good at forecasting the future, especially when it comes to technology.

Percy Liang
Percy Liang is Director of the Center for Research on Foundation Models, a Faculty Affiliate at the Stanford Institute for Human-Centered AI, and an Associate Professor of Computer Science at Stanford University.

Every day, millions of us press the “order” button on our favorite coffee store's mobile application: Our chosen brew will be on the counter when we arrive. It’s a personalized, seamless experience that we have all come to expect. What we don’t know is what’s happening behind the scenes. The mobile application is sourcing data from a database that stores information about each customer and what their favorite coffee drinks are. It is also leveraging event-streaming data in real time to ensure the ingredients for your personal coffee are in supply at your local store.

Applications like this power our daily lives, and if they can’t access massive amounts of data stored in a database as well as stream data “in motion” instantaneously, you — and millions of customers — won’t have these in-the-moment experiences.
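The pattern described here, pairing stored records ("data at rest") with streamed events ("data in motion"), can be illustrated with a minimal, hypothetical Python sketch. None of the names below come from the article or any particular vendor's API; they are assumptions for illustration only.

    # Hypothetical sketch: combine a stored customer profile (data at rest)
    # with live inventory events (data in motion) to decide whether a
    # mobile order can be fulfilled at the customer's home store.
    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class CustomerProfile:
        customer_id: str
        favorite_drink: str
        home_store: str

    # "Database": profiles stored at rest.
    profiles = {"c-123": CustomerProfile("c-123", "oat-milk latte", "store-42")}

    # "Event stream": inventory updates arriving in real time.
    inventory_events: Queue = Queue()
    inventory_events.put({"store": "store-42", "item": "oat milk", "in_stock": True})

    def can_fulfill_order(customer_id: str) -> bool:
        # Look up the stored profile, then scan streamed inventory events
        # for the customer's home store.
        profile = profiles[customer_id]
        while not inventory_events.empty():
            event = inventory_events.get()
            if event["store"] == profile.home_store and event["in_stock"]:
                return True
        return False

    print(can_fulfill_order("c-123"))  # True when the stream shows stock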

Jennifer Goforth Gregory
Jennifer Goforth Gregory has worked in the B2B technology industry for over 20 years. As a freelance writer, she writes for top technology brands, including IBM, HPE, Adobe, AT&T, Verizon, Epson, Oracle, Intel and Square. She specializes in a wide range of technology, such as AI, IoT, cloud, cybersecurity and CX. Jennifer also wrote a bestselling book, The Freelance Content Marketing Writer, to help other writers launch a high-earning freelance business.
Climate

The West’s drought could bring about a data center reckoning

When it comes to water use, data centers are the tech industry’s secret water hogs — and they could soon come under increased scrutiny.

Lake Mead, North America's largest artificial reservoir, has dropped to about 1,052 feet above sea level, the lowest it's been since being filled in 1937.

Photo: Mario Tama/Getty Images

The West is parched, and getting more so by the day. Lake Mead — the country’s largest reservoir — is nearing “dead pool” levels, meaning it may soon be too low to flow downstream. The entirety of the Four Corners plus California is mired in megadrought.

Amid this desiccation, hundreds of the country’s data centers use vast amounts of water to hum along. Dozens cluster around major metro centers, including those with mandatory or voluntary water restrictions in place to curtail residential and agricultural use.

Lisa Martine Jenkins

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter (@l_m_j_) or reach out via email (ljenkins@protocol.com).

Workplace

Indeed is hiring 4,000 workers despite industry layoffs

Indeed’s new CPO, Priscilla Koranteng, spoke to Protocol about her first 100 days in the role and the changing nature of HR.

"[Y]ou are serving the people. And everything that's happening around us in the world is … impacting their professional lives."

Image: Protocol

Priscilla Koranteng's plans are ambitious. Koranteng, who was appointed chief people officer of Indeed in June, has already enhanced the company’s abortion travel policies and reinforced its goal to hire 4,000 people in 2022.

She joined the HR tech company at a time when many other tech companies are enacting layoffs and cutbacks, but she said she sees this precarious moment as an opportunity for growth companies to get ahead. Koranteng, who comes from an HR and diversity VP role at Kellogg, is working to embed her hybrid expertise in her new role at Indeed.

Amber Burton

Amber Burton (@amberbburton) is a reporter at Protocol. Previously, she covered personal finance and diversity in business at The Wall Street Journal. She earned an M.S. in Strategic Communications from Columbia University and B.A. in English and Journalism from Wake Forest University. She lives in North Carolina.

Climate

New Jersey could become an ocean energy hub

A first-in-the-nation bill would support wave and tidal energy as a way to meet the Garden State's climate goals.

Technological challenges mean wave and tidal power generally remain more expensive than other renewables. But government support could help spur innovation that brings down costs.

Photo: Jeremy Bishop via Unsplash

Move over, solar and wind. There’s a new kid on the renewable energy block: waves and tides.

Harnessing the ocean’s power is still in its early stages, but the industry is poised for a big legislative boost, with the potential for real investment down the line.

Lisa Martine Jenkins

