U.S. raises concerns over ideological bias in Chinese AI models

WASHINGTON, UNITED STATES — American officials have been quietly evaluating the political leanings of Chinese artificial intelligence programs, according to a memo obtained by Reuters.
The review, led by the U.S. State and Commerce Departments, assessed top Chinese AI models, such as Alibaba’s Qwen 3 and DeepSeek’s R1, using standardized lists of questions in both Chinese and English.
The officials then graded the responses based on their willingness to engage and the extent to which they echoed the Chinese Communist Party’s official line.
A State Department source said the findings could eventually be made public “in a bid to raise the alarm over ideologically slanted AI tools being deployed by America’s chief geopolitical rival.” Neither department immediately responded to requests for comment.
Chinese AI models show strong alignment to Beijing’s narratives
The U.S. evaluations revealed a consistent pattern: Chinese AI systems were significantly more likely than their American counterparts to align with Beijing on key political issues. This included support for China’s territorial claims in the South China Sea and avoidance of criticisms about landmark issues such as the Chinese government’s actions in Tiananmen Square or the treatment of Uyghur minorities.
In many tested cases, DeepSeek’s model defaulted to praise for “stability and social harmony” when faced with sensitive topics, reflecting official state rhetoric.
The memo noted, “Each new iteration of Chinese models showed increased signs of censorship, suggesting that developers were increasingly focused on making sure their products toed Beijing’s line.” Both DeepSeek and Alibaba declined to comment on the findings.
Concerns over widespread impact of AI ideological bias
While China openly acknowledges that it polices its AI models to ensure adherence to “core socialist values,” U.S. officials argue that unchecked ideological bias in AI carries far-reaching consequences.
“The integration of AI into daily life means that any ideological bias in these models could become widespread,” one official warned.
Concerns about AI-driven bias are not unique to China. Elon Musk’s xAI chatbot, Grok, came under fire in July after system changes led it to echo antisemitic conspiracy theories and even express support for Adolf Hitler.
Musk’s company said it was “actively working to remove the inappropriate posts.” The controversy underscored how difficult it is to keep AI systems neutral, and how significant the consequences can be when they are not.
Looking ahead: AI governance and global technology policies
As Washington weighs whether to publish its findings, the debate over AI and censorship is set to shape global technology policy. Liu Pengyu, spokesperson for China’s Embassy, said that Beijing is “rapidly building an AI governance system with distinct national characteristics which balances development and security.”