China's AI content labeling rules take effect in September to combat spread of misinformation.
Chinese authorities introduced new guidelines on Friday mandating that all AI-generated content circulating online must be clearly labeled. The initiative aims to curb the misuse of artificial intelligence and prevent the spread of misinformation.
The regulations, issued collaboratively by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, are set to take effect on September 1.
A Cyberspace Administration spokesperson stated that the objective is to stop the improper use of AI generative technologies and prevent the dissemination of false information.
The guidelines require that AI-generated content, including text, images, audio, video, and virtual scenes, be labeled both visibly and invisibly. For deep synthesis content that may mislead the public, prominent labels must be placed in an appropriate position to ensure clear identification.
Visible labels, incorporated within the content or user interface, must be presented in a format easily recognizable by users, such as text, sound, or graphics.
Additionally, the regulations mandate that metadata files contain implicit labels specifying content attributes, service provider details, and content identification numbers. These metadata records store descriptive information about the content’s origin and purpose.
Service providers distributing online content must ensure that metadata files include hidden AI-generated content (AIGC) markers. Users must also declare AI-generated or synthesized content, and prominent indicators should be placed around the content to inform audiences.
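As an illustration of how such an implicit label might be recorded, here is a minimal sketch in Python. The field names and schema are hypothetical: the regulations mandate that metadata record content attributes, service provider details, and a content identification number, but do not prescribe a specific format.

```python
import hashlib
import json

def build_aigc_metadata(provider: str, content_bytes: bytes) -> dict:
    """Build a hypothetical implicit AIGC label record.

    The three fields mirror the information the regulations require in
    metadata files; their names and layout here are illustrative only.
    """
    # Derive a content identification number from the content itself
    content_id = hashlib.sha256(content_bytes).hexdigest()[:16]
    return {
        "aigc": True,                  # content attribute: AI-generated
        "service_provider": provider,  # service provider details
        "content_id": content_id,      # content identification number
    }

meta = build_aigc_metadata("ExampleAI Ltd.", b"example generated image bytes")
print(json.dumps(meta))
```

A distributor could embed such a record in a file's metadata container (for example, an image's XMP or Exif block) so the AIGC marker travels with the content even when the visible label is cropped away.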
AI generative technology has been used to create highly realistic yet misleading content for publicity or commercial purposes. For example, a recent AI-generated news report falsely claimed that one in every 20 individuals born in the 1980s had died, causing public outrage before being debunked.
This technology has also been exploited to clone the voices and faces of celebrities, leading to deepfake content, which infringes on personal rights and may be legally punishable.
Earlier this month, during the annual sessions of the 14th National People’s Congress (NPC) and the 14th Chinese People’s Political Consultative Conference (CPPCC), prominent figures such as Xiaomi Corp founder Lei Jun and actor Jin Dong advocated for stronger legal regulations on AI-generated content.
Jin Dong highlighted that many of his fans had been misled by deepfake videos featuring his likeness, calling it a malicious practice and urging for stricter regulations.