In January of this year, an Instagram account dedicated to British music posted a 21-second clip of a music video by a U.K. drill rapper named Chinx (OS). Within two days, Instagram took it down.
The lyrics of the song, “Secrets Not Safe,” reference a real-world gang shooting, and Instagram removed the video under its policy against inciting violence. The original poster appealed the decision, and the clip was restored, but eight days later, it was removed again.
It’s the kind of thing that happens millions of times a year on Instagram, not to mention on the rest of the web. But what makes this particular post notable is how it got on Instagram’s radar in the first place — not by way of automated detection or user reports, but via a referral from the police.
In recent years, the London Metropolitan Police and other so-called Internet Referral Units have pummeled platforms including Facebook, Instagram, and, most notably, YouTube with notifications about content that supposedly violates those companies’ terms of service but that isn’t necessarily illegal. The Met’s IRU has placed a particular emphasis on music videos. Last year alone, the unit reportedly recommended YouTube take down 510 music videos, and YouTube complied nearly 97% of the time.
Critics have argued the IRU operates in a dangerous legal no man’s land, where law enforcement agencies use platforms’ own terms to circumvent the judicial system in an affront to free speech. Even so, the prevalence of these units has only grown, with similar divisions popping up in Israel and across Europe. Even where formal IRUs don’t exist, some platforms, including Facebook, have dedicated channels for government agencies like the U.S. Department of Homeland Security to flag content.
But the Chinx (OS) video stands to be a turning point in this relationship between platforms and police. Earlier this year, Meta referred the case to its Oversight Board, which will soon decide the fate of the post in question. The board’s accompanying recommendations could also help answer a much bigger question facing Meta and other tech giants: How can they most effectively push back against growing pressure from law enforcement without sacrificing public safety?
“What the Oversight Board does will have consequences not just for Facebook, but for governments,” said Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center.
The rise of IRUs is a relatively recent phenomenon: the Met’s Operation Domain initiative, which focuses on “gang-related content,” launched in 2015. That same year, following the Charlie Hebdo attacks in France, Europol stood up an IRU of its own.
But the biggest test of what IRUs could get away with — and how tech platforms were enabling them — came in a 2019 case out of Israel, in which two human rights groups asked the Israeli Supreme Court to shut down the country’s so-called Cyber Unit, arguing that this “alternative enforcement” mechanism violated people’s constitutional rights. The court ultimately rejected the petition last year, in part because Facebook never told users it was removing posts in response to Cyber Unit referrals. Without that information, the plaintiffs couldn’t prove the Cyber Unit was responsible for the alleged censorship. Besides, the court reasoned, Facebook removed the posts voluntarily under its own terms.
To civil liberties experts, the case illustrated the role tech companies’ decisions play in shielding IRUs from accountability. “If law enforcement can hide behind this veil of, ‘It’s just company action,’ they can do things, including systematically target dissent, while completely severing the ability for people to hold them accountable in court,” said Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology, which is funded in part by Meta and the Chan Zuckerberg Initiative.
The Chinx (OS) case presents another test, and a chance for Meta to do things differently. Though the board didn’t specify which police agency requested the removal, according to its summary, the U.K. police warned Meta that the video in question “could contribute to a risk of offline harm.” The company appeared to agree and removed the video not once, but twice, after it was restored on appeal. But the decision clearly didn’t sit well with Meta. It referred the case to the board because, the company wrote in a blog post, “we found it significant and difficult because it creates tension between our values of voice and safety.”
The board has since asked for comments on the cultural significance of drill music in the U.K., and also on how social media platforms in general should handle law enforcement requests about lawful speech.
The case has drawn attention from leading civil liberties groups and internet governance experts, including CDT and the ACLU, which have both submitted comments to the board urging it to recommend stronger safeguards against the growing creep of IRUs. “Government-initiated removals — especially those that rely entirely on private content policies to take down lawful content — are a danger to free expression,” reads one comment by the ACLU and Keller.
There is, of course, good reason for the government and tech platforms to communicate. Law enforcement and government agencies often have better insights into emerging threats than platforms do, and platforms increasingly rely on those agencies for guidance.
The Met, for its part, has said it “works only to identify and remove content which incites or encourages violence” and that it “does not seek to suppress freedom of expression through any kind of music.” The agency also touted the effectiveness of the program in comments to the U.K.’s Information Commissioner’s Office, writing, “The project to date has brought to light threats and risk that would otherwise not have been identified through other policing methods.”
But these systems are also ripe for abuse, Llansó said. For one thing, companies may not always feel empowered to reject law enforcement referrals. “There can be a sense within a company that it’s better to be seen as a constructive and collaborative player, rather than one that’s always rejecting requests,” she said.
The fact that these requests are happening out of public view also prevents users from understanding how government agencies are seeking to censor them, and provides no recourse for them to challenge it. “From a company-centric perspective there’s potentially a lot of benefit to having people with expertise, including law enforcement, know about material you want to take down from your service,” said Llansó. “From a user-centric perspective it’s an entirely different story.”
In the U.K., where the Met’s IRU has focused on drill music specifically, these referrals may also disproportionately target Black communities engaged in entirely legal speech. “It's so subjective,” said Paige Collings, a senior speech and privacy activist at EFF, who recently wrote about the fraught relationship between the London Metropolitan Police and YouTube. “It's really racially oriented, racially driven.” Collings points to the widespread use of rap lyrics in court as part of a “much wider structural issue” of police attempting to treat songs as evidence. “It's not a testimony or evidence of crimes,” Collings said. “[Songs] are artistic expressions.”
Both CDT and the ACLU are calling on the board to urge Meta to notify users when their content is removed in response to a law enforcement request, and to publish detailed reports about these takedowns. Collings also believes platforms should publish examples of the content being removed and list the formal and informal relationships they have with law enforcement units.
The ACLU and Keller also recommended that Meta be more discerning about which law enforcement agencies it trusts and refuse fast-track reporting channels to IRUs that make bad faith or inaccurate referrals. The Internet Archive has, in the past, called out IRUs for making faulty referrals, including the French IRU, which the Internet Archive said improperly flagged over 500 URLs as terrorist propaganda in 2019. “Governments should have, and maybe do have, an obligation to not be so sloppy about this,” Keller said.
While the board’s recommendations aren’t binding, the fact that Meta referred this case to the board at all suggests that the company is looking for help — or at least, backup — as it decides how to handle such requests in the future. And the volume of requests could soon increase. Under Europe’s Digital Services Act, platforms must have “trusted flagger” programs, like the one YouTube already runs, which allow law enforcement agencies and other public and private entities to refer content for removal.
For Meta and other companies operating in Europe, figuring out how to deal with this potential uptick in referrals without stifling users’ ability to speak freely is becoming increasingly urgent, Llansó said. The board’s recommendations stand to give Meta cover for changes it may have wanted to make anyway. “This case could be a way for [Meta] to get the fact that this is happening out on the record,” said Llansó. “If Facebook does want to roll out more transparency, they could use some political backing for that.”