The Cyberspace Administration of China deploys the "Clear and Rectify AI Application Chaos" special action.

17:06 30/04/2026
GMT Eight
The Cyberspace Administration of China has issued a notice deploying a nationwide four-month special action to clean up and regulate chaos in AI applications.
To standardize AI services and applications, promote the healthy and orderly development of the industry, and safeguard the legitimate rights and interests of citizens, the Cyberspace Administration of China recently issued a notice deploying a nationwide four-month special action, "Clear and Rectify AI Application Chaos."

A relevant official from the Cyberspace Administration of China said the action will be carried out in two stages. The first stage, a special governance action on clearing typical violations in AI application services, strengthens governance of AI technology at its source, focusing on issues such as failure to fulfill the large-model registration obligation, inadequate security audit capabilities, security risks in large-model training data, AI data poisoning, and poor implementation of labeling for AI-generated synthetic content. The second stage, a special action to clear and rectify AI information chaos, focuses on the use of AI to generate "digital garbage," spread false information, disseminate violent and vulgar content, impersonate others, infringe the rights of minors, and run internet "water army" operations; illegal and harmful information will be resolutely removed, and violating accounts, MCN institutions, and website platforms will be penalized in accordance with the law.

The first stage focuses on rectifying seven prominent issues.

First, failure to fulfill the large-model registration obligation: models that require registration under the Interim Measures for the Management of Generative Artificial Intelligence Services have not completed it.

Second, inadequate security protection and filtering capabilities of AI platforms.
These include weak inherent security capabilities of the model; deviations in value orientation introduced during design, training, or deployment, with inadequate correction mechanisms; missing security guardrails; and insufficient audit and filtering capabilities, resulting in the generation of illegal and harmful information, or in generated content containing cloud-storage or website links to such information.

Third, security risks in large-model training data: lax screening of training data; illegal and harmful information present in training corpora; non-compliant data sources, including unauthorized use of text, image, audio, and video data in model training; and insufficient coverage of training data and a shortage of high-quality corpora.

Fourth, AI data poisoning: tampering with training data, forging authoritative data, or conducting malicious marketing through generative engine optimization (GEO); promoting or teaching data-poisoning methods, or selling related tutorials and tools on e-commerce platforms; and model responses that lack cross-validation of cited sources and risk-warning mechanisms, or that fail to mark reference links.

Fifth, poor implementation of labeling for AI-generated synthetic content: failure to comply with the Measures for Labeling AI-Generated Synthetic Content and the associated mandatory standard; missing labels on generated synthetic content; non-standard label size, position, or transparency; failure to implement implicit-identification and cross-platform mutual-recognition obligations, making generated synthetic content hard to identify; and teaching how to remove AI labels or providing services to illegally remove them.
Sixth, misuse of AI technology for illegal activities: using AI to carry out network attacks such as intrusion, malicious sample injection, and system sabotage; providing unauthorized "face-swapping" and voice-cloning services; stealing others' likenesses for livestreaming or content creation; using digital avatars to provide financial advice, medical consultations, and other professional services; and using AI agent technology to steal user data or account credentials, infringing on others' privacy and legitimate rights.

Seventh, inadequate security management of open-source models: open-source communities lacking identity-verification and security-management mechanisms; absence of effective auditing and emergency-response mechanisms for datasets and model code uploaded to the community; and failure to promptly handle high-risk datasets or open-source models.

The second stage focuses on rectifying seven prominent issues.

First, using AI to distort classics and generate "digital garbage": distorting or deconstructing fine traditional culture, historical figures, or historical anecdotes; abusing AI to "magically alter" classical literature or classic films and television, inserting vulgar and violent content, parodying the spiritual essence of classic works, or subverting important characters; and generating and publishing "digital garbage" with incoherent logic, empty values, or tendencies toward wrong values and distorted cultural cognition.

Second, creating and spreading false information: generating and synthesizing rumors about current politics, public policy, social livelihood, international relations, or emergencies, or maliciously hyping sudden incidents by fabricating causes, details, or developments.
It also includes impersonating party and government institutions or central and local news media to publish fake announcements and fake news, and generating and spreading unverified or plainly unscientific information in professional fields such as healthcare, law, finance, and education.

Third, impersonating others: using AI "face-swapping" and voice-cloning to impersonate public figures, spread false statements, deceive netizens, or even profit from doing so; generating synthetic content that attacks, defames, or denigrates others; generating intimate images and videos of public figures without permission; and improperly using AI to "resurrect the dead" or abusing information of the deceased.

Fourth, producing and spreading violent, vulgar, and harmful information: producing and disseminating content depicting violence, abuse, or gore, or grotesque and frightening imagery; creating synthetic images and videos that emphasize particular parts of women's bodies or feature revealing clothing and suggestive poses; and producing low-quality borderline content, or novels, notes, and similar material with sexual innuendo or provocative content.

Fifth, infringing the rights and interests of minors: generating synthetic content that abuses or mocks children, or that constitutes cyberbullying; generating synthetic content showing minors in pregnancy, fights, and other situations inconsistent with their age, spreading improper values; and using cartoon characters or doll images to generate synthetic videos for children that promote violence or cult beliefs, or creating infringing animations that mislead minors.

Sixth, using AI for internet "water army" activities: using AI "account-nurturing" technology to imitate humans in registering accounts in bulk, operating social media accounts, and mass-producing and publishing low-quality, homogeneous content, as well as illegally trading accounts.
It also includes using AI group-control software and social bots to inflate engagement and manipulate comments, generate fake traffic data, and manufacture false public-opinion hotspots.

Seventh, violations in AI products, services, and applications: creating and distributing counterfeit or copycat AI websites and apps; providing illegal AI functions such as adult chat, "one-click undressing," or AI fortune-telling, or generating illegal and harmful synthetic content; and marketing illegal AI application services or promoting courses that teach illegal activities.

A relevant official from the Cyberspace Administration of China stressed that local cyberspace departments should fully recognize the significance of the special action for promoting the standardized, orderly development of AI applications and protecting the legitimate rights and interests of netizens. They should fulfill their territorial management responsibilities, supervise websites and platforms in line with the key rectification points, conduct thorough self-inspection and self-correction, comprehensively identify and close loopholes, improve long-term governance mechanisms, and strengthen technical prevention and control capabilities, ensuring the special action achieves practical results.

This article is reproduced from the "Cybersecurity China" WeChat official account; GMTEight editor: Liu Jiayin.