China demands public generative AI conform to “core values of socialism”


China appears to be accommodating its AI industry by softening its rules for industrial use, though not for generative AI offered to the public. While this may leave the public with access to less capable generative AI, the government’s focus remains on protecting the socialist value system.

China has issued new guidelines for generative AI services that still restrict public use for political reasons while encouraging industrial development. The Cyberspace Administration of China (CAC) has softened its stance compared with the April draft rules, a sign that the government wants to foster AI development.

The overall tone is milder than the April draft, Reuters reports, and only organizations that offer AI systems to the public will have to go through a security review. Instead of setting ambitious goals that must be met in every case, the rules now require companies to take effective measures toward meeting them.

The April draft said that each model would have to undergo a government safety review and threatened fines of up to 100,000 yuan ($14,027) for violations; that threat is now gone, according to CNN.


Generative AI must conform to socialist values

However, organizations that offer generative AI services such as text and image generators to the public must still ensure that the generated results are in line with the Chinese government’s ideas.

Generative AI services must adhere to the “core values of socialism” and not attempt to subvert state power or the socialist system.

In addition, training data must come from legitimate sources and not violate intellectual property rights. Other rules address issues such as avoiding discrimination, human rights, transparency, and labeling of AI-generated content.

The new transitional rules will go into effect on August 15.

Is China tripping itself up in the AI race?

The socialist alignment required of public generative AI is likely to pose a fundamental challenge for the developers of these systems. Either they accept constraints on the selection of training data, or they try to rein in a fully trained model after the fact through policies and censorship systems, without the model losing performance.

