
Artificial intelligence is not just a computational leap. It is a structured attempt to model the functions of the human brain in code. As machines learn to process, store, see, and act, they are redrawing the boundaries of cognition. However, Tian Xia believes the differences that remain may be just as important as the similarities.
A few months ago, my two-year-old son developed an unexpected fascination: car brands. Neither my wife nor I are particularly interested in cars, but our toddler had begun spotting and naming them with impressive confidence. It began with, “What brand is this?” Then came the guesses: “Is this a Volvo?” Soon, he was declaring with certainty: “That’s a Volvo!” He was doing what human brains are wired to do: absorbing sensory input, forming categories, correcting errors, and building a mental model of the world. Just as my son’s brain is essentially training its own classifier, recent advances in Artificial Intelligence (AI) have allowed computers to do similar things and to incrementally replicate human cognitive functions, albeit in lines of code rather than neurons. But while the mechanisms appear similar, the differences run deep, and so do the implications.
From memorisation to generation
Early AI systems, such as IBM’s Deep Blue in the 1990s, operated on rigid, pre-programmed rules and were brilliant at narrow, structured tasks. They could calculate chess moves, optimise logistics, or solve equations. But they couldn’t hold a conversation or write a story. They lacked context. They lacked meaning. They were like a child who memorises multiplication tables without grasping the underlying mathematical principles. The breakthrough came with generative AI. Large Language Models (LLMs) demonstrated an impressive ability to process unstructured data: text, images, even video. This marked a significant leap from simple memorisation to contextual understanding, and even to the generation of new unstructured data. This was like a child who, after thousands of bedtime stories, begins crafting their own.
Yet despite the breakthrough in understanding unstructured data, the underlying process is still based on the manipulation of numerical values. Words are represented as high-dimensional vectors that graphics chips can process, and sentences are simply the result of repeatedly “predicting” the most probable next word. This also means that the machine doesn’t know why the output matters. It predicts patterns, but it doesn’t grasp purpose.
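To make this concrete, here is a deliberately tiny Python sketch of next-word prediction. Everything in it is an illustrative simplification: the five-word vocabulary, the random four-dimensional vectors, and the averaging of context vectors are all invented for this example, whereas a real LLM works with tens of thousands of tokens, thousands of dimensions, and attention layers rather than a plain mean.

    import numpy as np

    # Toy vocabulary with random 4-dimensional word vectors (real models
    # learn these embeddings; the values here are arbitrary).
    vocab = ["the", "cat", "sat", "on", "mat"]
    embeddings = np.random.default_rng(0).normal(size=(len(vocab), 4))

    def next_word(context_words):
        # Summarise the context as the mean of its word vectors
        # (a real transformer uses attention instead of a plain mean).
        idx = [vocab.index(w) for w in context_words]
        context = embeddings[idx].mean(axis=0)
        # Score every vocabulary word by similarity to the context,
        logits = embeddings @ context
        # turn the scores into probabilities with a softmax,
        probs = np.exp(logits) / np.exp(logits).sum()
        # and "predict" the word with the highest probability.
        return vocab[int(np.argmax(probs))]

    print(next_word(["the", "cat"]))  # prints whichever word scores highest

The point is not the toy model itself, but that every step, from words to vectors to probabilities, is arithmetic; nowhere does the code know what a cat is.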

Fast thinking, slow thinking
The late Daniel Kahneman was a psychologist and Nobel laureate renowned for his pioneering work in behavioural economics and cognitive psychology. His best-selling book, Thinking, Fast and Slow, popularised the framework of two thinking systems, which also helps illustrate where AI excels and where it still falls short. Whereas System 1 is responsible for fast, intuitive, and often subconscious thinking, System 2 is slow and analytical, and requires a deliberate, conscious effort to activate. LLMs operate like System 1; they finish our sentences, summarise documents, suggest code. But they struggle with multi-step reasoning. To address this, researchers developed “chain-of-thought” reasoning, encouraging models to break down complex prompts step by step.
These so-called reasoning models are an early form of the deliberate, deep thinking of System 2: less about speed, more about transparency and accuracy. For example, when solving a maths problem like “If Alice has five apples, gives two to Bob, and then buys four more, how many does she have?”, a reasoning model generates intermediate steps on its way to the solution. This mimics how students are required to show their working to get full marks in a maths exam, and it reduces errors and increases transparency along the way. The shift from instant answers to deliberate, self-correcting processes aims to bridge the gap between statistical pattern-matching and genuine reasoning. Still, no LLM today is capable of true reflection. It doesn’t know when it’s wrong. It doesn’t know that it knows.
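In practice, much of this behaviour can be elicited simply through prompting. The sketch below contrasts the two styles; generate() is a hypothetical stand-in for any LLM completion call, since only the construction of the prompt matters here.

    # generate() is a hypothetical placeholder for any LLM completion API;
    # only the difference between the two prompts matters.
    def generate(prompt: str) -> str:
        raise NotImplementedError("plug in an actual LLM API here")

    question = ("If Alice has five apples, gives two to Bob, "
                "and then buys four more, how many does she have?")

    # System-1 style: ask for the answer in one shot.
    direct_prompt = question

    # Chain-of-thought style: nudge the model to lay out intermediate
    # steps ("5 - 2 = 3, then 3 + 4 = 7") before giving a final answer.
    cot_prompt = question + "\nLet's think step by step."

Newer reasoning models internalise this step-by-step behaviour during training, but the underlying idea is the same: spend more computation on intermediate steps before committing to an answer.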
Embodied AI and the real world
If cognition is the mind, then robotics is its body. Embodied AI seeks to combine thinking with doing. Tesla’s Optimus, Boston Dynamics’ Atlas, and Unitree’s H1 humanoids are attempting this fusion. These machines can now walk, navigate, lift, and even jump. Atlas recently demonstrated autonomous object manipulation, adjusting for misalignment when placing heavy components. Optimus can locate its charging dock and plug itself in. Unitree’s H1 has broken records for humanoid running speed. But these are still controlled, structured tasks. A deceptively simple act, like gently picking up a ripe tomato, remains elusive. Robots struggle with softness, irregularity, and subtle cues. Humans, shaped by two million years of evolution, instinctively calibrate their grip to an object’s texture and fragility. A robot might simply crush it. This is not a minor detail. It’s a window into what intelligence really means. Our brains evolved under physical, social, and emotional pressures. AI has evolved over a mere 70 years, with modern transformers existing for less than a decade.
"AI learns fast. But it hasn’t learned for long."

When the map breaks
Every AI system, no matter how powerful, has blind spots. These emerge in unpredictable environments, when inputs drift away from the data the system was trained on, what researchers call “distribution shift”. In robotics, this might mean a package is slightly out of place or a shelf is tilted. In such cases, even a cutting-edge robot may freeze or fail. Humans adjust instinctively. We don’t just process the map; we adapt when the map is wrong. The human brain is deeply plastic. It can learn from one example, retain memories over decades, and generalise across contexts. AI, by contrast, requires massive amounts of data and may still forget what it once knew (a phenomenon called “catastrophic forgetting”). We also think differently. Humans reason causally. We form theories, run mental simulations, and learn through analogy. AI is superb at correlation, but struggles with concepts. It cannot yet invent meaning.
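A toy numerical experiment illustrates distribution shift; the one-dimensional data, the threshold “model”, and the size of the shift are all invented for this sketch.

    import numpy as np

    rng = np.random.default_rng(1)

    # Training world: class 0 clusters around -1, class 1 around +1.
    x_train = np.concatenate([rng.normal(-1, 0.5, 500), rng.normal(1, 0.5, 500)])
    y_train = np.concatenate([np.zeros(500), np.ones(500)])

    # A deliberately simple "model": a single learned threshold,
    # the midpoint between the two class means.
    threshold = (x_train[y_train == 0].mean() + x_train[y_train == 1].mean()) / 2

    def accuracy(x, y):
        return float(((x > threshold) == y).mean())

    print("in-distribution accuracy:", accuracy(x_train, y_train))  # ~1.0

    # Deployment world: every input drifts by +2 (the "tilted shelf").
    # The frozen threshold is now wrong for half the data.
    print("after shift:", accuracy(x_train + 2, y_train))  # ~0.5

The model itself has not changed; the world has, and without the plasticity to re-learn its threshold, performance collapses.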
Emotion, context, and common sense
Another divide: emotion. Humans think with feeling. Our choices are shaped by empathy, fear, curiosity, and love. AI does not feel. It can simulate comforting language, but it doesn’t understand comfort. This matters because our cognition is emotional. We remember what moved us. We prioritise what matters to us. And in social settings, we read between the lines. We infer intentions. We grasp unspoken cues. AI lacks Theory of Mind—the understanding that others have beliefs, perspectives, and inner lives. It can mimic dialogue, but not meaningfully engage. In a world built on collaboration and trust, this is more than a technical limitation. It's a philosophical one.
A complement, not a clone
AI has mapped many of our cognitive functions. It processes, stores, sees, and acts. But its map is only a partial one. The human mind evolved slowly, shaped by uncertainty, adaptation, and purpose. AI evolves quickly, but with no intrinsic sense of what matters. The ultimate goal of replicating the human brain in silicon may remain elusive, but that’s not necessarily a failure. The future doesn’t belong to AI that thinks like us, but to AI that thinks with us. Coming back to Kahneman’s thinking framework: as LLMs become more mature, AI can help relieve the mental burden of System 2 thinking, similar to how a calculator helps us with tedious arithmetic. Tools that complement, not replicate. Partners that help us scale our strengths while respecting our differences. As my son grows, he’ll undoubtedly develop interests and skills that surprise me further. AI, too, will evolve in unpredictable ways. But the goal is not to build a machine that replaces human thought. It’s to build systems that illuminate it. As the Polish-American scholar Alfred Korzybski once said: “The map is not the territory.” And we still have so much to explore.

Tian Xia
Portfolio Manager, Equity at Cape Capital