Enterprise

AI needs massive data sets to work. Meta is testing a way to do more with less.

Despite the constant deluge of content flowing into Facebook and Instagram, Meta has struggled to get enough data to train AI to spot harmful content, so it’s banking on an emerging approach.

A visualization of the few-shot learning AI process. Image: Meta

After a terrorist attack on a mosque in Christchurch, New Zealand, was livestreamed on Facebook in 2019, the platform’s parent company, now called Meta, outfitted London police officers with body cameras while they conducted terrorism training. At the time, Meta said there wasn’t enough video data to train its artificial intelligence systems to detect and remove violent content, so it hoped the body camera project would produce more of that scarce training data.

A year prior to that horrific incident, the company acknowledged that it failed to keep up with inflammatory posts from extremist groups in Myanmar. Again, it said the problem was a lack of data — there wasn’t enough content in Burmese to train algorithmic moderation systems to spot more of it.

The company wasn’t wrong: Despite the constant deluge of content flowing into Facebook and Instagram, traditional AI approaches used by Meta and other companies need enough examples of the bad stuff to recognize it when it shows up again. A dearth of training data can plague AI systems that need large amounts of human-labeled information in order to learn.

Enter few-shot learning, a concept that researchers across the globe have experimented with in recent years. Few-shot learning models can be trained from generic data supplemented with just a “few” pieces of labeled content.
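To make the idea concrete, here is a minimal sketch of how few-shot text classification is often done with open-source tools; it is not Meta’s production system, and the model name, example posts and labels are purely illustrative. A model pretrained on generic data (here, a Hugging Face checkpoint) is adapted with only a handful of labeled examples and then used to score new posts.

```python
# Illustrative sketch only, not Meta's system. Requires: pip install torch transformers
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Start from a model pretrained on large amounts of generic text.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# The "few" labeled examples (hypothetical policy-violation data).
texts = [
    "the vaccine secretly rewrites your DNA",            # violating
    "clinics are open for booster shots this week",      # benign
    "they are hiding what the shot does to your genes",  # violating
    "I got my second dose today, my arm is a bit sore",  # benign
]
labels = torch.tensor([1, 0, 1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# A few gradient steps on the tiny labeled set adapt the generic model.
model.train()
for _ in range(10):
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Score a new, unseen post.
model.eval()
with torch.no_grad():
    new_post = tokenizer(["this shot will change your DNA"], return_tensors="pt")
    probs = torch.softmax(model(**new_post).logits, dim=-1)
print(probs)  # [P(benign), P(violating)]
```

The point of the sketch is the data budget: four labeled posts stand in for the thousands a conventional classifier would need, with pretraining doing most of the work.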

Now, Meta plans to announce Wednesday that few-shot learning shows promise in its constant battle to weed out disinformation or other content that violates its policies on Facebook and Instagram, particularly when there isn’t enough AI training data, such as in the case of emerging subject areas or breaking news events.

Following early tests on Facebook and Instagram, the company told Protocol that the technique has helped reduce the prevalence of content such as hate speech. So far, it has only used the approach to tackle a few content areas such as “misleading or sensationalized information that likely discourages COVID-19 vaccinations, and hostile speech like bullying and harassment and violence and incitement,” said a Meta AI spokesperson. For instance, the company tested few-shot learning to identify content that promoted the debunked notion that COVID-19 vaccines change people’s DNA.

Meta said the few-shot process shortens the amount of time it takes to train up an AI system from several months to a few weeks. “Since it scales quickly, the time from policy framing to enforcement would shorten by orders of magnitude,” wrote Meta in a blog post published Wednesday. In addition to text and image content, the company said the technique also works for video by consuming audio transcripts, text that appears on screen and video embeddings.

The company aims to burnish its image amid relentless scrutiny from lawmakers and everyday people over its handling of abusive and false content on Facebook and Instagram. Later on Wednesday, Adam Mosseri, the head of Instagram, will answer questions from members of the Senate Commerce Committee’s consumer protection subcommittee about how Instagram’s algorithmic systems fuel content that harms kids.

Google, Baidu and others research few-shot approaches

Historically, artificial intelligence and machine learning algorithms have needed vast amounts of data to train them. Feed an algorithm lots of images of bananas or AK-47s labeled as such, and it will learn to recognize them — or at least that’s the goal.

Researchers from OpenAI, Google, Baidu and academic institutions across the globe have studied few-shot learning in recent years to circumvent the need for massive datasets, and not just for removing harmful social media content. Researchers have suggested few-shot learning could help discover molecular properties for drug development when data is restricted by privacy rules, or uncover tweets related to natural disasters in the hopes of disseminating important safety information.

“Because large, labeled datasets are often unavailable for tasks of interest, solving this problem would enable, for example, quick customization of models to individual user’s needs, democratizing the use of machine learning,” wrote Google AI researchers in 2020 in a company blog post about few-shot learning.

Meta has been working on this AI problem for some time. Four years ago, for example, the company revealed some details about how its AI systems tried to detect harmful content associated with terrorism.

“When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video,” said the company at the time. To automatically remove text-based content, the company said, “we’re currently experimenting with analyzing text that we’ve already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda. That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts. The machine learning algorithms work on a feedback loop and get better over time.”
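The matching described in that first sentence is generally done by comparing a fingerprint of the uploaded file against a database of previously confirmed violating media; Meta has separately open-sourced hashing algorithms such as PDQ for this purpose. Below is a rough sketch of the general pattern using the open-source imagehash library, with hypothetical file names; it is not the company’s actual pipeline.

```python
# Illustrative sketch only. Requires: pip install pillow imagehash
import imagehash
from PIL import Image

# Perceptual hashes of images previously confirmed to violate policy
# (hypothetical file names).
known_violating = [imagehash.phash(Image.open("confirmed_violation.jpg"))]

def matches_known_content(path: str, max_distance: int = 6) -> bool:
    """Return True if an uploaded image is a near-duplicate of known content."""
    candidate = imagehash.phash(Image.open(path))
    # ImageHash subtraction returns the Hamming distance between hashes.
    return any(candidate - known <= max_distance for known in known_violating)

print(matches_known_content("new_upload.jpg"))
```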

A track record of language failures

That was then. In November, Meta pointed to a series of technical milestones that led researchers to what it called “breakthrough” exploration of applying few-shot learning to content moderation. In a blog post last month, the company showed a timeline of advancements, including a technique called XLM-R that trains a model in one language and then applies it to content in other languages without the need for additional training data.

The company seems to be confident that emerging AI techniques like few-shot learning and XLM-R will help to improve how it patrols content in languages where it’s faltered before, such as “low-resource languages” like Burmese.
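As a rough illustration of that cross-lingual idea, here is a sketch built on the publicly released XLM-R checkpoint, not Meta’s internal stack. XLM-R maps text from roughly 100 languages into a shared representation space, which is what lets a classifier fine-tuned on labeled English posts be applied to posts in languages where labels are scarce. The posts below are made up, and Spanish stands in for any of the model’s other languages, including low-resource ones like Burmese.

```python
# Illustrative sketch only. Requires: pip install torch transformers
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")

posts = [
    "This rumor about the vaccine is false.",  # English
    "Ese rumor sobre la vacuna es falso.",     # Spanish
]

with torch.no_grad():
    batch = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")
    # Both sentences land in the same embedding space, so a classification
    # head trained only on English labels can also score the non-English post.
    embeddings = encoder(**batch).last_hidden_state[:, 0, :]

print(embeddings.shape)  # torch.Size([2, 768])
```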

Yet the recently leaked Facebook Papers revealed Meta’s struggles to remove harmful content in places where it hasn’t hired enough human moderators or built well-trained moderation algorithms. Meta itself has admitted publicly that its automated moderation technologies have not worked well to weed out unwanted Burmese content in Myanmar, for example. But the exposed documents also showed the company had not developed algorithms to detect hate speech in Hindi and Bengali, both among the 10 most-spoken languages in the world.

When asked by Protocol why it believes few-shot learning works in so many languages despite past failures, the Meta spokesperson said the system was trained on more than 100 languages and incorporates techniques like XLM-R. “The nuance and semantics of language is one of the reasons why we built this technology — to be able to more quickly address content in multiple languages,” said the spokesperson. “As these underlying language and text encoders improve, Meta AI FSL will also bring the improvements to additional languages too.”

Still, a lot of testing will be required to know if these emerging approaches can work at scale.

“We are early in the use of this technology,” said the Meta spokesperson. “As we continue to mature the tech and test it across various enforcement mechanisms and problems the goal is to further increase its use and continued accuracy.”
