Protocol | Policy

The UK finally got Big Tech to boost teens’ privacy

Google, Facebook and TikTok have announced new teen-protection measures in the U.S. ahead of obligations they'll face under the U.K.'s new digital design code for kids and teens.


The U.K.'s new Age Appropriate Design Code is setting standards for the U.S.

Photo: Solen Feyissa/Unsplash

Social media platforms are suddenly all increasing privacy standards for teen users. Why now, after all these years? It looks like new U.K. privacy rules got the ball rolling.

TikTok on Thursday announced a series of measures to protect teen users, including limiting direct messaging for users under 16 by default, requiring those users to actively choose who can see their first video and limiting nighttime push alerts. The company's new policies follow similar recent moves to protect teens by YouTube, Instagram and their parent companies, Google and Facebook.

In many cases, the services have said their decisions grew out of preparations for the U.K.'s new Age Appropriate Design Code. The new standards, which include 15 rules on issues such as data minimization and connected devices, go into effect Sept. 2.

The AADC applies to services in the U.K. and outlines robust protections for young children that can ease as users age into their late teens. It has also prompted some U.S. lawmakers to lament the lack of online protections for teens in this country. The 1998 Children's Online Privacy Protection Act, for instance, establishes some rules for children ages 12 and under, such as a parental consent requirement for data collection. Teens, though, are more or less on their own, even as social media sites are eager to lock in young users while they're still setting the latest trends.

Kids' advocates have long expressed concerns about a range of potential harms children can encounter online. These range from the likelihood that children come across violence and adult-only products such as alcohol, to the permanent collection of data from children who can't meaningfully or legally consent. That data, in turn, often powers marketing that may try to take advantage of young users' limited experience. Privacy advocates also have a long list of concerns about ads shown to users of all ages that, for instance, offer educational opportunities only to people of a certain race, or that discriminate in health care, housing or finance.

"Children and teens are a uniquely vulnerable population no matter where they live, and companies have an obligation to ensure that their online services put the welfare of young users first," Democratic Sen. Ed Markey and Reps. Kathy Castor and Lori Trahan wrote in June letters to the heads of Facebook, Google, TikTok and others. The lawmakers urged the companies to bring their U.K. protections for teens over to the U.S.

Most of the companies' responses in recent weeks stopped short of saying they'd import the protections wholesale to the U.S., but the sites did say they were considering what parts to implement worldwide.

"The privacy, safety, and wellbeing of young people on our platforms are essential and the Age Appropriate Design Code is one of the inputs that informs the expansive work we do every day to protect the safety and privacy of young people using our apps globally," Facebook said in its July 27 response.

The company's letter came out the same day that Facebook said it would limit the options for targeting ads to users under 18 on all its properties and that it would make private accounts the default for Instagram users under 16.

Although Google didn't tease details in its response to lawmakers, the company in August announced its own suite of changes. Those included defaulting to private upload on YouTube for those under 17, turning off autoplay on YouTube Kids and blocking "ad targeting based on the age, gender, or interests" of users under 18.

TikTok, which has been under pressure from lawmakers over its handling of data because it is so popular with teens, not only made recent changes but also highlighted prior actions. It said in January, for instance, that it would make accounts for users under 16 private by default.

Even Apple, which did not receive a letter from lawmakers about the U.K. code, recently announced child protection measures. Those changes focused on preventing sexual exploitation rather than data collection and content, but some policy experts still saw Apple's moves, which also relaunched debates about encryption, as a response to worldwide efforts to rein in controversial content and protect children.

Markey, who helped author the current U.S. rules for those under age 13, called the social media companies' recent changes "steps in the right direction to increase privacy protections for children and teens online," but said in a statement he wanted more.

"These voluntary policy changes are no substitute for legally enforceable rules that force websites and apps to stop putting corporate profits ahead of children's privacy," Markey said. To create those legally enforceable rules, Markey urged Congress to move forward his bipartisan bill that would create rules for companies' treatment of some teens online, as well as a ban on targeted advertising to children. A House proposal from Castor, meanwhile, would bring several proposed protections, including a ban on targeted ads, up to age 17.

Pressures on the companies to boost teen protections aren't just coming from London and Capitol Hill. Back in 2019 in the U.S., the Federal Trade Commission settled with YouTube and TikTok for alleged violations of the kids' privacy law. And the tech community largely views the current FTC as the most aggressive since the 1970s.

Since the European Union's 2018 General Data Protection Regulation took effect, some countries in the bloc have also required parental consent for data collection from users up to age 16, although individual nations can lower that threshold as long as they don't go below 13. The U.K. designed the AADC to build on the GDPR.

Jeff Chester, executive director of the Center for Digital Democracy, is a longtime advocate for children's online privacy. These and other factors, he said, are "forcing the industry to pay attention to the teens."

He added that the companies are trying to stave off more aggressive actions, whether by lawmakers or agencies, in an area that has long drawn bipartisan interest.

"This is a deliberate ploy to head off stronger regulation," he observed.

Protocol | Policy

Why Twitch’s 'hate raid' lawsuit isn’t just about Twitch

When is it OK for tech companies to unmask their anonymous users? And when should a violation of terms of service get someone sued?

The case Twitch is bringing against two hate raiders is hardly black and white.

Photo: Caspar Camille Rubin/Unsplash

It isn't hard to figure out who the bad guys are in Twitch's latest lawsuit against two of its users. On one side are two anonymous "hate raiders" who have been allegedly bombarding the gaming platform with abhorrent attacks on Black and LGBTQ+ users, using armies of bots to do it. On the other side is Twitch, a company that, for all the lumps it's taken for ignoring harassment on its platform, is finally standing up to protect its users against persistent violators whom it's been unable to stop any other way.

But the case Twitch is bringing against these hate raiders is hardly black and white. For starters, the plaintiff here isn't an aggrieved user suing another user for defamation on the platform. The plaintiff is the platform itself. Complicating matters more is the fact that, according to a spokesperson, at least part of Twitch's goal in the case is to "shed light on the identity of the individuals behind these attacks," raising complicated questions about when tech companies should be able to use the courts to unmask their own anonymous users and, just as critically, when they should be able to actually sue them for violating their speech policies.

Issie Lapowsky


While it's easy to get lost in the operational and technical side of a transaction, it's important to remember the third component of a payment: the human behind the screen.

Over the last two years, many retailers have seen the benefit of investing in new, flexible payment options that reflect the changing lifestyles of younger spenders, who are increasingly holding onto their cash despite reports to the contrary. This means it's more important than ever for merchants to take note of the latest payment innovations so they can tap into the savings of the COVID-19 generation.

Antoine Nougue, Checkout.com


Protocol | Fintech

When COVID rocked the insurance market, this startup saw opportunity

Ethos has outraised and outmarketed the competition in selling life insurance directly online — but there's still an $887 billion industry to transform.

Life insurance has been slow to change.

Image: courtneyk/Getty Images

Peter Colis cited a striking statistic that he said led him to launch a life insurance startup: One in twenty children will lose a parent before they turn 15.

"No one ever thinks that will happen to them, but that's the statistics," the co-CEO and co-founder of Ethos told Protocol. "If it's a breadwinning parent, the majority of those families will go bankrupt immediately, within three months. Life insurance elegantly solves this problem."

Benjamin Pimentel


Protocol | Workplace

Remote work is here to stay. Here are the cybersecurity risks.

Phishing and ransomware are on the rise. Is your remote workforce prepared?

Before your company institutes work-from-home-forever plans, you need to ensure that your workforce is prepared to face the cybersecurity implications of long-term remote work.

Photo: Stefan Wermuth/Bloomberg via Getty Images

The delta variant continues to dash or delay return-to-work plans, but before your company institutes work-from-home-forever plans, you need to ensure that your workforce is prepared to face the cybersecurity implications of long-term remote work.

So far in 2021, CrowdStrike has already observed over 1,400 "big game hunting" ransomware incidents and $180 million in ransom demands averaging over $5 million each. That's due in part to the "expanded attack surface that work-from-home creates," according to CTO Michael Sentonas.

Michelle Ma
Protocol | Enterprise

How GitHub COO Erica Brescia runs the coding gold mines

GitHub sits at the center of the world's software-development activity, which makes the Microsoft-owned code repository a major target for hackers and a trend-setter in open source software.

GitHub COO Erica Brescia

Photo: GitHub

An astonishing amount of the code that runs the world's software spends at least part of its life in GitHub. COO Erica Brescia is responsible for making sure that's not a disaster in the making.

Brescia joined GitHub after selling Bitnami, the open-source software deployment tool she co-founded, to VMware in 2019. She's responsible for all operational aspects of GitHub, which was acquired by Microsoft in 2018 for $7.5 billion in one of its largest deals to date.

Tom Krazit

