China’s Cyberspace Administration released a draft rule that would place new oversight obligations on providers of deepfake technology. The regulation would cover “deep synthesis internet information services,” including any technology that generates text, images, audio, videos or virtual scenes based on deep learning. Popular AI tools like GPT-3 would be covered under the rule.
Under the regulation, deepfake providers would be required to verify the identity of each user and actively screen the results of their services for potential violations. Providers would also have to “insert marks that don’t interfere with user experience” to make all deepfake content identifiable and traceable. Any content created to mimic real human images or sounds would be held to higher standards.
Deepfakes are controversial globally for how they can be used to spread disinformation or create revenge porn. To regulate a technology that hasn’t been clearly defined or understood, China is taking the approach of holding technology providers accountable for consequences the state can’t yet predict. “One interesting piece of this is that pressure is placed on multiple actors to comply — app stores, developers, platforms all have obligations here,” Kendra Schaefer, partner at research organization Trivium China, wrote.
The deepfake rules follow a batch of new regulations announced in January that expand Beijing’s toolbox for controlling cutting-edge technologies. This particular rule seems more preventive than reactive: aside from a few face-changing apps that briefly went viral, China is not known for rampant deepfake use. Some startups have attempted to apply the technology in more benign ways, like generating product catalogs.
The draft is open for public comment until Feb. 28, and details are subject to change.