AI doesn’t just generate content. It reproduces the worldview of the people who create and control the platforms.
Of course, guardrails have to be embedded, if for no other reason than to obey the laws of the relevant jurisdiction. But if AI models are trained on biased datasets, then it follows that all AI-generated content is a product of those biases, whether they are explicit, imposed by the need for regulation, or implicit, arising from the worldview of the technology's creators.
It is not just about misinformation or technical hallucinations. It is about how knowledge is curated, disseminated, and ultimately believed.
Who Decides What’s True?
Traditionally, the control of information rested in the hands of journalists, historians, and academics. Not only were they subject to rigorous scrutiny and peer review, but the biases of the content producers were more or less understood. For instance, the difference in political outlook between the average Guardian and Daily Telegraph reader in the UK is quite significant. It was understood by writers and readers alike that events and their associated facts could be viewed from different angles, and it was up to the consumer to choose the approach that best suited their own worldview, if that is what they wanted.
The idea of information operating in silos and echo chambers predates current technology. But these islands of self-perpetuating truths were visible to all. Opinions could be formed and contested on an accurate basis. Each side of a debate could use the same facts to bolster their argument or counter the other. The existence of an event was understood to be different from the interpretation rendered of that event. The biases of the debaters were clear to all.
However, AI-generated media are driven by opaque algorithms. As noted in a previous article, information considered offensive is excluded from responses, with the result that the user can never know whether they have all the relevant facts. This pre-censorship is a very serious matter.
On the one hand, we have systems that amplify biases already present in digital discourse, leading to self-reinforcing echo chambers.
On the other, we have AI models that are designed to align with corporate, governmental, or ideological interests. They do not just present information but can frame reality itself.
This is not a new concern, but it is a persistent one. Facebook’s 2014 Emotional Contagion study demonstrated how altering algorithmic exposure to content could manipulate user emotions at scale.
With AI involved in the co-production of entire news articles, images, and videos, the potential for shaping public perception is exponentially greater.
None of this matters unless the factual basis for a given worldview is true: it is not possible to have intelligent debate if the precepts and presuppositions are not valid, sound, or true, regardless of an individual's subsequent interpretation.
If historical accounts are generated or verified by AI systems trained on selective data, entire narratives could be rewritten with little trace of the manipulation. This is already a problem. Try getting accurate and detailed information on Tiananmen Square or the treatment of the Uyghurs using DeepSeek.
Without direct access to primary sources, future generations may unknowingly inherit a distorted version of history.