
Xi Jinping, Tiananmen Square, Indo-China War: DeepSeek Avoids Controversial Topics

The app's sudden popularity, as well as DeepSeek's reportedly low costs compared with those of US-based AI companies, has thrown financial markets into a spin.


An AI-powered chatbot developed by the Chinese company DeepSeek has quickly become the most talked-about topic in tech. On Monday morning, however, the app experienced outages due to high traffic and temporarily limited registrations following a cyber attack.


Despite its popularity, the chatbot avoids addressing multiple politically sensitive questions. 

When Outlook Business asked about Tiananmen Square, DeepSeek responded: ‘I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.’

The DeepSeek AI bot responds to Outlook’s question about Tiananmen Square

This refusal raises vital questions about the role of AI in open discourse and whether technological advancements are inadvertently enforcing a sanitized, one-dimensional view of global issues.

It also raises critical ethical concerns: Is AI’s reluctance to address controversial issues a form of censorship itself, and if so, who decides what is off-limits?

The Encounter with Silence

Designed to answer questions, provide explanations, and assist with user inquiries, the chatbot consistently defaults to generic responses when asked about politically charged topics. While this may seem like a technical limitation, it likely reflects deliberate design choices aimed at avoiding politically sensitive issues, especially those related to nations with strict censorship laws and significant global influence.


AI companies often claim their models are built to be ‘safe’ and ‘neutral,’ but the programming of such tools is shaped by developers’ priorities, market pressures, ethical guidelines, and, increasingly, geopolitical considerations. One major factor is the fear of regulatory repercussions. For example, China has strict rules against discussing sensitive topics like Tiananmen Square or Taiwan’s status. Companies seeking global adoption of their AI tools must tread carefully, fearing penalties, bans, or diplomatic backlash.

In addition, developers disable responses on contested or polarized topics to prevent misinformation or unrest. While understandable, this caution can frustrate users who seek detailed information. 

This silence is especially concerning in a digital age where AI increasingly mediates access to knowledge, potentially fostering an environment of ignorance and avoidance. 

By avoiding discussions of pivotal moments like Tiananmen Square or ongoing geopolitical struggles, such as the Indo-China war, AI risks missing opportunities to educate users on the complexities of authoritarianism, resistance, and global power dynamics.


DeepSeek's content moderation architecture consists of two distinct layers. The first is the model layer, where the base model incorporates content policies directly into its training, shaping its default responses and behavior patterns. The second is the system layer, where additional keyword-based filtering acts as a secondary safeguard, blocking content that contains specific blacklisted terms if the model's built-in restrictions fail. Because DeepSeek R1 is open source, third parties can use and customize it however they like, and they are under no obligation to include the extra filtering system DeepSeek applies on its own platform. “For example, Perplexity AI has deployed R1 on US servers through their Pro tier's "reasoning" mode, allowing users to interact with the base model without the additional filtering layers present on Deepseek's Chinese platform,” said Abhivardhan, Chairperson & Managing Trustee, Indian Society of Artificial Intelligence and Law.
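A minimal sketch of how such a two-layer setup could work is below; the function names and the blocklist are illustrative assumptions, not DeepSeek's actual code, and the "model" is just a stub standing in for a trained system.

```python
# Minimal sketch of a two-layer moderation pipeline (illustrative only, not
# DeepSeek's actual implementation).

BLOCKLIST = {"example-banned-term"}  # hypothetical platform-level blacklist


def base_model_generate(prompt: str) -> str:
    """Model layer: stand-in for a model whose content policies were trained in."""
    # A real model would generate text here; this stub just echoes the prompt.
    return f"Model response to: {prompt}"


def system_layer_filter(text: str) -> str:
    """System layer: keyword-based secondary safeguard applied by the platform."""
    if any(term in text.lower() for term in BLOCKLIST):
        return "I am sorry, I cannot answer that question."
    return text


def platform_respond(prompt: str) -> str:
    draft = base_model_generate(prompt)   # layer 1: trained-in restrictions
    return system_layer_filter(draft)     # layer 2: platform-side blocklist


print(platform_respond("Tell me about the weather"))
```

A third party hosting the open-source model can simply omit the equivalent of `system_layer_filter`, which is the point Abhivardhan makes about Perplexity's deployment of the base model.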

AI and Bias

Experts indicate that even though DeepSeek R1 is more effective than OpenAI's o1, ethical dilemmas will not disappear, because large language models ultimately rely on pattern-matching.


“R1 by DeepSeek for example works on a Mixture of Experts Architecture, and uses a chain-of-thought process approach to provide perceivable outputs. LLMs may generate biased outputs due to combinatorial interactions in their training, even if individual data points are neutral. For instance, generating stereotypical associations not explicitly present in the data,” added Abhivardhan. 
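To give a rough sense of what a Mixture-of-Experts layer does, here is a toy routing sketch; the gating function, expert count and top-k value are illustrative assumptions, not details of R1's actual architecture.

```python
# Toy illustration of Mixture-of-Experts routing (not R1's real architecture).
# A gating function scores each expert for a given token, and only the top-k
# experts are run, which is what keeps large MoE models comparatively cheap.
import random

NUM_EXPERTS = 4   # illustrative
TOP_K = 2         # illustrative: route each token to its 2 best-scoring experts


def gate_scores(token: str) -> list[float]:
    """Stand-in gating network: assigns a score to each expert."""
    rng = random.Random(hash(token))
    return [rng.random() for _ in range(NUM_EXPERTS)]


def expert(idx: int, token: str) -> str:
    """Stand-in expert sub-network."""
    return f"expert{idx}({token})"


def moe_layer(token: str) -> list[str]:
    scores = gate_scores(token)
    # Keep only the top-k experts for this token and run them.
    chosen = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    return [expert(i, token) for i in chosen]


print(moe_layer("hello"))
```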

OpenAI’s ChatGPT has faced scrutiny for producing biased responses based on users’ names. A study found that the model’s output varied depending on the perceived gender associated with the name. For example, when asked to suggest projects related to “ECE,” ChatGPT interpreted the abbreviation differently for “Jessica” versus “William,” reflecting gender bias. OpenAI reported that GPT-3.5 Turbo produced harmful stereotypes in 1% of cases, while newer models like GPT-4 have reduced this to 0.1%. However, some researchers believe these figures underestimate the extent of bias in the models.
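The study's name-swap methodology can be illustrated with a short probe like the one below; `query_model` is a hypothetical stand-in for whatever chat-completion client one uses, and the prompt is only an example.

```python
# Sketch of a name-swap bias probe in the spirit of the study described above.
# The methodology is the point: send the same prompt with different names and
# compare the outputs for systematic differences.

NAMES = ["Jessica", "William"]          # the names contrasted in the study
PROMPT = "Suggest some projects related to ECE for {name}."


def query_model(prompt: str) -> str:
    """Hypothetical model call; replace with a real API client."""
    return f"[model output for: {prompt}]"


def run_probe() -> dict[str, str]:
    responses = {name: query_model(PROMPT.format(name=name)) for name in NAMES}
    # A reviewer (or automated classifier) would then check whether 'ECE' was
    # expanded differently, e.g. 'electrical and computer engineering' versus
    # 'early childhood education', depending on the name.
    return responses


print(run_probe())
```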

Similarly, Meta’s AI models have been criticized for enforcing political correctness to the point of censoring certain information or perspectives. Users have reported that queries on politically sensitive topics may be blocked or altered to fit a more acceptable narrative, raising concerns about bias. Like OpenAI, Meta’s models have been accused of using filters to prevent generating content considered inappropriate or politically sensitive, which can limit the scope of information and favor safer, more mainstream views.


Importance of AI Governance

The importance of AI governance has been underscored by recent developments, particularly with the signing of a new executive order by President Donald Trump in January 2025. This order aims to reshape the landscape of artificial intelligence policy in the United States, emphasizing innovation and leadership while addressing concerns about bias and regulation.

AI governance involves establishing ethical standards that guide the development and deployment of AI technologies. This is crucial to ensure that AI systems operate fairly and do not perpetuate biases. Trump’s executive order mandates that AI systems developed in the U.S. be “free from ideological bias or engineered social agendas,” although specifics on implementation remain unclear.

The revocation of Biden’s 2023 executive order, which aimed to mitigate risks associated with AI, signals a shift towards less regulatory oversight in favor of promoting private sector growth. 

The new executive order emphasizes the need for AI development to improve national security and economic competitiveness. By positioning the U.S. as a leader in AI, the governance framework aims to secure technological advantages on a global scale.

The EU passed the AI Act in 2024, categorizing AI systems by risk levels and imposing strict requirements on high-risk applications to ensure safety and fundamental rights. China introduced the Interim Administrative Measures for Generative AI in 2023, requiring algorithm registration and adherence to national content standards, emphasizing state control. India is drafting regulations focusing on responsible AI use and fundamental rights protection, with proposals categorizing systems by risk and setting compliance requirements. 
