Artificial intelligence (AI) is everywhere in modern content creation, scientific research, and the entrepreneurial development of business ideas. Many assume that because AI can generate coherent text, produce art, or even solve complex problems, it must understand what it is doing. But does it? If AI operates on statistical prediction rather than genuine comprehension, what does it mean to speak of AI-generated insight, and can mimicry ever be considered true intelligence?
AI as a Prediction Engine
At its core, AI functions as an elaborate and sophisticated pattern-recognition system. While humans are also masters of pattern recognition, the two processes are not comparable at a fundamental level. Machines do not think in the way that humans do: an AI program is just a series of ones and zeros flipping on and off at incredibly high speed, and it is essentially unilinear in operation. In the abstract of their paper introducing the idea of 'Psychomatics', Giuseppe Riva and colleagues make the following observation, which serves to distinguish human from computer processing.
"Human cognition draws from multiple sources of meaning, including experiential, emotional, and imaginative facets, which transcend mere language processing and are rooted in our social and developmental trajectories. Moreover, current LLMs lack physical embodiment, reducing their ability to make sense of the intricate interplay between perception, action, and cognition that shapes human understanding and expression."
The high-speed probabilistic approach of AI allows it to produce human-like responses, but this does not mean the system understands meaning, intent, or context in the way a person does.
When an AI system is trained to generate poetry, it can produce lines that resemble the work of great poets such as Shakespeare or Dickinson. Those lines, however, are the product of predicting which words are most statistically appropriate given its training data. According to Margaret Boden, the system lacks an inner world. In chapter 10 of her book 'The Creative Mind: Myths and Mechanisms' she argues that while AI can imitate aspects of human creativity by generating novel combinations of data, it lacks genuine understanding and intentionality. This distinction points to the limitations of AI when compared with the breadth and depth of human creative processes.
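To make the point concrete, here is a minimal, hypothetical sketch of word-by-word prediction: a toy bigram model that counts which word follows which in a tiny training corpus and then extends a seed word by picking statistically likely successors. The corpus, function names, and sampling strategy are illustrative assumptions, not how any production system actually works, but the principle of choosing words by frequency rather than by meaning is the same.

```python
from collections import defaultdict, Counter
import random

# Toy training corpus standing in for a vast body of poetry.
corpus = (
    "shall i compare thee to a summer's day "
    "thou art more lovely and more temperate"
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(seed: str, length: int = 8) -> str:
    """Extend a seed word by repeatedly picking a statistically likely successor."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # The model has never seen this word, so it has nothing to say.
        # Weighted random choice: pure frequency, no grasp of meaning or intent.
        next_word = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        words.append(next_word)
    return " ".join(words)

print(generate("shall"))
```

The output can look like verse, yet nothing in the program knows what a summer's day is; it only knows which word tended to follow which in the data it counted.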
The Limits of AI-Generated Insight
Human intelligence requires abstraction, conceptual reasoning, and the ability to come up with new ideas that are not simply logical extrapolations of previous ones. AI systems struggle with these tasks. They can provide impressive-sounding outputs, but they do not originate ideas in the way that a human mind does. Gary Marcus points out in ‘The Next Decade in AI’ that AI systems often work very well when applied to the exact environments on which they are trained, but cannot be counted on if the environment differs. They struggle to generalise and originate ideas.
The same is true of AI-generated art: while the results may be visually compelling, they often lack the intentionality and emotional depth that define human creativity.
Mimicry vs. Understanding
AI seems intelligent because it can simulate human behaviour convincingly enough, much of the time, to persuade a human observer. This observation goes back to Alan Turing's 1950 paper 'Computing Machinery and Intelligence', in which he proposed the "imitation game" that we now call the Turing Test. If a machine could convincingly mimic human responses, he argued, we should accept it as intelligent.
However, John Searle, a philosopher, challenged this idea with his 1980 'Chinese Room Argument.' In this thought experiment, a person inside a room, following a set of rules, could manipulate Chinese symbols well enough to produce convincing responses, even without understanding Chinese. Searle argued that there was no difference between this behaviour and what AI does. In both cases, symbols are processed without comprehension of their meaning.
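A minimal sketch of the point, using an invented rule book and made-up exchanges: the program below returns fluent-looking replies purely by looking up rules, and at no point does anything in it "know" what the symbols mean. The specific phrases and the lookup structure are assumptions chosen for illustration.

```python
# A tiny 'Chinese Room': an invented rule book mapping input symbols to replies.
# Whoever (or whatever) applies these rules needs no understanding of either side.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # "How are you?" -> "I'm fine, thanks"
    "今天天气如何": "今天天气很好",  # "How is the weather?" -> "The weather is nice"
}

def room(symbols: str) -> str:
    """Follow the rules mechanically; meaning is never consulted."""
    return RULE_BOOK.get(symbols, "请再说一遍")  # default: "Please say that again"

print(room("你好吗"))  # Convincing output, zero comprehension.
```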
Modern AI systems support Searle’s argument. They have no beliefs, intentions, or experiences; each is merely an advanced parrot, repeating and remixing what it has seen before.
Does It Matter?
One could argue that if AI produces useful results, its lack of true intelligence is irrelevant: why does it matter whether or not it understands what it is doing? This pragmatic view holds weight in many cases. AI is already revolutionising industries, automating repetitive tasks, and augmenting human creativity in unprecedented ways.
The problem is anthropomorphism. As humans, we constantly attribute human traits, emotions, and intentions to pretty much anything, whether animate or inanimate. The number of times I have gotten angry at some of the silly outputs that ChatGPT and other systems have produced is just ridiculous. The system cares nothing for my frustrations, and getting upset is an utter waste of time and energy, but I can't stop myself.
What compounds the frustration is the misapprehension that it is an entity that knows things. The issue of AI hallucinations demonstrates the danger of mistaking mimicry for comprehension. The concern is that relying on AI to make decisions in fields such as medicine, law, or governance, on the assumption that it has genuine reasoning capabilities, could lead to catastrophic mistakes.
AI remains fundamentally different from human intelligence. It excels at identifying patterns, generating plausible text, and producing statistical approximations of creativity, but it does not understand in the way humans do. The debate between Turing and Searle therefore remains relevant: is intelligence merely about performance, or does it require genuine comprehension?
While AI can produce valuable insights and assist in creative and intellectual work, we must remain clear about the boundaries between extremely clever mimicry and true intelligence.