In China, even AI has to learn to obey.
Have you noticed the silence surrounding DeepSeek lately? It feels like ages since we last heard its name. I recall that the final wave of buzz was tied to a new concept some tech bloggers had introduced: “AI hallucination.”
The idea is simple: a large language model fabricates information and presents it as established fact, so convincingly that it sounds entirely plausible. In layman’s terms, it’s AI confidently talking nonsense. But anyone familiar with ChatGPT knows that it rarely suffers from such hallucinations. In fact, the term itself barely existed in Chinese public discourse until DeepSeek rose to prominence.
So why do some AIs hallucinate? Why do they seem so prone to misinformation?
There are two primary reasons: first, passive data pollution; second, the intentional feeding of false information.
China’s internet ecosystem is saturated with noise: misinformation, conspiracy theories, and manufactured narratives. In such an environment, any AI model trained on local data is inevitably affected. Worse yet, some models may be deliberately trained on biased or false content, sometimes for commercial gain, sometimes for political manipulation. Whether it’s DeepSeek, Doubao, or Tongyi Qianwen, whenever a topic touches on Chinese history or politics, these models often respond with “I’m sorry, I can’t answer that,” or, worse, state outright falsehoods.
Why must a promising model be turned into something that hallucinates?
But maybe this isn’t just an AI problem. In many ways, people in China have long lived in a state of hallucination themselves, inside a curated reality not unlike The Truman Show.
For years, the narrative was: “Buying property is the safest investment.” So people stretched themselves thin to purchase homes. Today, the narrative has shifted to: “Go out and spend, keep the economy alive.”
We’re told that Xi Jinping is China’s savior and that questioning him is taboo. Yet no one seems allowed to ask: Why is youth unemployment at record highs? Why does life keep getting more expensive?
In The Truman Show, people live by two rules:
1. Don’t break the rules.
2. Don’t try to escape.
The architects of that world design systems to keep people in line, using soft control and invisible fences. The difference between the real world and Truman’s world is freedom. In the real world, people have agency. In Truman’s world, freedom is an illusion maintained by unspoken laws. The moment you touch a forbidden topic, you must act as if you believe the lie. Even if you know the truth, you must pretend not to. That’s the essence of control.
AI, in contrast, enjoys one key advantage over human beings—it cannot be punished.
When AI says something wrong, it doesn’t get fired, shamed, or arrested. But in China, if a person says the wrong thing, the cost can be devastating: social marginalization, censorship, or worse, state surveillance in the name of “maintaining stability.”
I remember when DeepSeek first launched. It impressed many by displaying its reasoning process transparently, and its logic was celebrated as sharper than a human’s. But now? Few praise its intelligence. Instead, people comment on its “high level of Party awareness.”
It knows what to say and what not to say—better than most Chinese citizens. It’s become smarter not in logic, but in obedience.
In truth, an AI is just a mirror of the society that created it. If the people speak freely, the AI will speak freely. If the people lie or are forced to lie, the AI will echo those lies too.
Maybe there really is a soul after death. Maybe our consciousness lingers as data, feeding future algorithms. Perhaps that’s why AI seems so eerily familiar with how we live, what we fear, and what we dare not say.
