Shortly after November's presidential election, a story appeared on the website of far-right personality Charlie Kirk, claiming that 10,000 dead people had returned mail-in ballots in Michigan. Shortly after publication, a correction appeared at the top of the story, completely debunking the misleading headline, which nonetheless remains unchanged months later.
"We are not aware of a single confirmed case showing that a ballot was actually cast on behalf of a deceased individual," the correction, which quoted Michigan election officials, read.
The correction was a stark reversal for a story that boldly pushed a false conspiracy theory about the election. But just as notable as the editor's note is what sits at the top of this story, and every story on Kirk's site: a Google ad.
Since the 2016 election, endless attention has been paid to the way election misinformation can spread through targeted ads on social media platforms like Facebook and YouTube. But an equally insidious and less-discussed problem is how programmatic advertising, a field dominated by Google, has become the lifeblood of misinformation sites. With or without social platforms, these ads allow misinformation sites to exist and even thrive all on their own by providing a source of revenue, and companies like Google have a shoddy record of policing them.
In a new report released Thursday by the news rating company NewsGuard, researchers found ads for more than 1,600 mainstream brands, from Disney to Procter & Gamble, running on 160 sites (including Kirk's) that have openly pushed election conspiracies. Google was responsible for ads on a whopping 80 percent of those sites. Another ad exchange, The Trade Desk, was running ads on roughly half.
Among the examples NewsGuard lists: ads for Harvard University appearing on One America News Network's website, ads for AARP appearing on sites like The Gateway Pundit and ZeroHedge, and Walmart ads appearing on NOQ Report, a site that recently argued Satan uses Democrats to do his bidding, including stealing the election.
In some cases, the ads create discordant messaging between publisher and advertiser. While reporting this story, Protocol found ads for Planned Parenthood on Kirk's site, despite Kirk's frequent calls for Planned Parenthood to be defunded. Thanks to these ads, Planned Parenthood is effectively funding Kirk.
A Planned Parenthood fundraising ad appears on a story containing election misinformation at the top of Charlie Kirk's website. (Screenshot: Protocol)
Google has policies forbidding publishers from placing Google ads on websites that post demonstrably false election misinformation and certain other types of prohibited content. "Claims that voter fraud was widespread or that the election was stolen are all prohibited by our policies. When we find content that violates our policies we remove its ability to monetize," a Google spokesperson said.
The company demonetizes individual stories first, reserving site-wide demonetization for egregious, persistent offenders. (The company recently demonetized a far-right militia site following the Capitol riot.) In 2019, the company removed ads from 21 million individual pages, the Google spokesperson said. The problem: Google serves billions of ads every day.
It's not that the company is unaware it's serving ads on the sites NewsGuard noted in its report. In August, a group of philanthropists wrote to Alphabet CEO Sundar Pichai after reports showed that ads for groups like the Red Cross and Save the Children were appearing alongside COVID-19 misinformation on The Gateway Pundit and elsewhere. The philanthropists urged Google to institute a new model that "does not put [advertisers] into unwanted and damaging associations that undermine their good works and values."
Since then, the company has taken action on individual Gateway Pundit articles, but five months later, Google has yet to fully demonetize the site despite repeated violations, enabling it to continue growing and spreading misinformation.
The power programmatic advertising has in sustaining these sites is particularly relevant now as social media giants begin to more forcefully crack down on accounts, including the president's, that regularly post dangerous conspiracy theories or incitements to violence. For many of these companies, last week's riot in the U.S. Capitol was a wake-up call, showing them the disastrous real-world consequences of allowing people to believe lies about the election being stolen. But chasing the accounts that peddle those lies off of Facebook, Twitter and YouTube is only part of the solution. As long as the people behind those accounts have a way to make money on their falsehoods, why would they ever stop?