Bulletins

Apple delayed its child-protection features after they caused huge controversy

Now Apple says it will “collect input and make improvements.”

Photo: Justin Sullivan/Getty Images

After weeks of controversy, Apple announced that it will delay the rollout of several new child-protection features, including the new on-device scanning tech it designed to detect child sexual abuse material.

“Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of Child Sexual Abuse Material,” Apple said in a statement posted above its original press release announcing the features. “Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

Apple's announcement set off intense debate across the industry, with privacy advocates arguing that the moves amounted to Apple giving itself (and, by extension, others) the tools to snoop on users' devices. Apple maintained it was focused solely on child protection and a few specific use cases, and it was unusually public and forthcoming in defending the policies, but it still failed to persuade most of its critics.

The primary question was always one of unintended consequences. Even if Apple was building a specific tool for a specific and positive purpose, the possibilities for expansion and abuse struck many as not worth the risk. "Apple plans to erase the boundary dividing which devices work for you, and which devices work for them," Edward Snowden wrote. Apple will have to work out how to address those concerns before it tries to launch these features again.

Protocol | Enterprise

Startups are pouncing as SaaS giants struggle in the intelligence race

Companies like Salesforce and Workday spent the last two decades building walled gardens around their systems. Now, it's a mad dash to make those ecosystems more open.

Companies want to predict the future, and "systems of intelligence" might be their best bet.

Image: Yuichiro Chino / Getty Images

Take a look at any software vendor's marketing materials and you're sure to see some variation of the word "intelligence" splattered everywhere.

It's part of a tectonic shift happening within enterprise technology. Companies spent the last several years moving their systems to the internet and, along the way, rapidly adopting new applications.

Joe Williams

Joe Williams is a senior reporter at Protocol covering enterprise software, including industry giants like Salesforce, Microsoft, IBM and Oracle. He previously covered emerging technology for Business Insider. Joe can be reached at JWilliams@Protocol.com. To share information confidentially, he can also be contacted on a non-work device via Signal (+1-309-265-6120) or JPW53189@protonmail.com.


Protocol | Workplace

The hottest new perk in tech: A week off for burnout recovery

In an industry where long hours are a "badge of honor," a week of rest may be the best way to retain talent.

Tech companies are giving their employees a week to rest and recover from burnout.

Photo: Kinga Cichewicz/Unsplash

In early May, the founder of Lessonly, a company that makes training software, sent out a companywide email issuing a mandate to all employees. But it wasn't the sort of mandate employees around the world have been receiving related to vaccines and masks. This mandate required that every worker take an entire week off in July.

The announcement took Lessonly's staff by surprise. "We had employees reach out and share that they were emotional, just thankful that they had the opportunity to do this," said Megan Jarvis, who leads the company's talent team and worked on planning the week off.

Aisha Counts
Aisha J. Counts is a reporting fellow at Protocol, based out of Los Angeles. Previously, she worked for Ernst & Young, where she researched and wrote about the future of work, the gig economy and startups. She is a graduate of the University of Southern California, where she studied business and philosophy.
Power

Chip costs are rising. How will that affect gadget prices?

The global chip shortage is causing component costs to go up, so hardware makers are finding new ways to keep their prices low.

Chips are getting more expensive, but most consumer electronics companies have so far resisted price increases.

Photo: Chris Hondros/Getty Images

How do you get people to pay more for your products while avoiding sticker shock? That's a question consumer electronics companies are grappling with as worldwide chip shortages and component cost increases are squeezing their bottom lines.

One way to do it: Make more expensive and higher-margin products seem like a good deal to customers.

Janko Roettgers

Janko Roettgers (@jank0) is a senior reporter at Protocol, reporting on the shifting power dynamics between tech, media, and entertainment, including the impact of new technologies. Previously, Janko was Variety's first-ever technology writer in San Francisco, where he covered big tech and emerging technologies. He has reported for Gigaom, Frankfurter Rundschau, Berliner Zeitung, and ORF, among others. He has written three books on consumer cord-cutting and online music and co-edited an anthology on internet subcultures. He lives with his family in Oakland.

Protocol | Policy

Laws want humans to check biased AI. Research shows they can’t.

Policymakers want people to oversee — and override — biased AI. But research suggests there's no evidence to prove humans are up to the task.

The recent trend toward requiring human oversight of automated decision-making systems runs counter to mounting research about humans' inability to effectively override AI tools.

Photo: Jackal Pan/Getty Images

There was a time, not long ago, when a certain brand of technocrat could argue with a straight face that algorithms are less biased decision-makers than human beings — and not be laughed out of the room. That time has come and gone, as the perils of AI bias have entered mainstream awareness.

Awareness of bias hasn't stopped institutions from deploying algorithms to make life-altering decisions about, say, people's prison sentences or their health care coverage. But the fear of runaway AI has led to a spate of laws and policy guidance requiring or recommending that these systems have some sort of human oversight, so machines aren't making the final call all on their own. The problem is: These laws almost never stop to ask whether human beings are actually up to the job.

Issie Lapowsky

Issie Lapowsky (@issielapowsky) is Protocol's chief correspondent, covering the intersection of technology, politics, and national affairs. She also oversees Protocol's fellowship program. Previously, she was a senior writer at Wired, where she covered the 2016 election and the Facebook beat in its aftermath. Prior to that, Issie worked as a staff writer for Inc. magazine, writing about small business and entrepreneurship. She has also worked as an on-air contributor for CBS News and taught a graduate-level course at New York University's Center for Publishing on how tech giants have affected publishing.
