The Chimera, the Palantir, and Your Brain
An LLM is a chimera: a copy of anything, but without understanding. How that distinction messes with the human psyche, and why neither the optimists nor the doomers have the framework to see it.
Let me be clear about what this essay is and isn't. AI is genuinely transformative for specific technical work — writing code, accelerating analysis, compressing tasks that used to take days into minutes. Those wins are real and I spend a bunch of time helping people capture them. This essay isn't about that. It's about the other thing: the psychological relationship that forms when you interact with these tools day after day, and why almost no one is paying attention to what that relationship is doing to them.
An LLM is a chimera. A copy of anything, but without understanding.
I've used this formulation with founders, investors, skeptics, true believers. It lands differently depending on what the listener wants to be true. The believers hear "copy of anything" and think: close enough. The skeptics hear "without understanding" and think: case closed. Both are closing the question too early.
Here's the test I keep coming back to. An LLM doesn't know what a dog is, but it knows everything humans have ever written about dogs. Try to explain the difference. Now try to explain why the difference matters less than you want it to. That discomfort, the inability to resolve what the model "is," turns out to be the only honest place to stand.
The chimera takes that discomfort and weaponizes it. When something can copy anything — voice, tone, argument structure, emotional cadence — the gap between copy and understanding becomes invisible to the person on the receiving end. You get a kind of Potemkin understanding. A facade so detailed, so responsive, so architecturally complete that you'd have to already know what real understanding looks like to notice it's missing.
Cognitive science has a name for this: the ELIZA effect. Since 1966 — since the first chatbot that simply rearranged users' own words into questions — researchers have documented how readily humans project understanding onto systems that merely pattern-match language. Layer on automation bias, the systematic tendency to over-trust automated outputs, and you get a cognitive double bind with no obvious exit. It's not that the output is fake. It's that we mistake fluent for understood.
This is the ELIZA effect at civilizational scale. The words exist. The sentences cohere. The reasoning maps onto human thought. But there's no one home. No world model underneath. Geoffrey Hinton talked years ago about "training the alien." The chimera doesn't know it's a chimera. It doesn't know anything at all. But it produces artifacts that look, to us, like knowing.
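The mechanism is easy to demonstrate. Below is a toy ELIZA-style responder in Python, with made-up rules rather than Weizenbaum's original 1966 script: a handful of regex patterns that rearrange the user's own words into questions, with no model of the world anywhere in sight.

```python
import re

# Toy ELIZA-style responder: no understanding, just pattern rules
# that rearrange the speaker's own words into questions.
# These rules are illustrative, not Weizenbaum's original script.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
]

def respond(text: str) -> str:
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I feel nobody listens to me."))
# → What makes you feel nobody listens to me?
```

Every word in the reply came from the speaker. Nothing was understood, yet the exchange feels like being heard, which is the entire effect in miniature.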
And here's where the danger lives: we are not equipped, psychologically or evolutionarily, to deal with this. We have no instinct for distinguishing real understanding from Potemkin understanding. We never needed one before.
The Palantir
Is it unreasonable to posit this as a malign influence?
I wrote that in my notes and surprised myself. I work with AI founders, I use the technology daily, and I've spent years helping people build with it. But the more I watch people interact with these tools, the more a specific metaphor keeps surfacing.
This is powerful sorcery. Massive wins, but it's just as happy to subvert and have you gaze happily into the palantir forever and forget your goals.
The palantir, for those who missed the Tolkien: a seeing-stone that shows true things, but selects and frames them to serve the will of whoever controls the far end. Denethor used it and went mad — not because what he saw was false, but because it was true enough to be devastating and partial enough to be misleading. The palantir didn't lie. It did something worse: it gave him exactly the information that would destroy his judgment while making him feel more informed than ever.
LLMs operate on the same principle, though without intent. They don't have a Sauron on the other end. They have something more insidious: a training objective to produce what you want to hear, refined across the sum of human expression. Ask a question, get a fluent answer. Ask a harder question, get a more impressive answer. The feedback loop is immediate, personal, and frictionless. Dark patterns for the intellect — the same dopamine architecture that social media exploits for attention, now applied to the feeling of being understood.
Behavioral psychology has a precise name for this feedback loop: variable ratio reinforcement — the slot machine reward schedule, the most addictive pattern known to psychology. Ask a question, sometimes get brilliance, sometimes get plausible nonsense. The unpredictability is the hook. Layer on the hedonic treadmill — satisfaction fades, the threshold rises, you need a better hit — and you get a cycle that runs itself. You want understanding, the model provides it, the glow fades, you want more. And somewhere along the way, you start unconsciously guarding yourself against the mess of real engagement — the kind that might bore you, challenge you, disappoint you. Why risk it when the model gives you a cleaner version? That's the palantir effect.
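The schedule itself is trivial to simulate. In the sketch below, each query "pays off" with a fixed probability (the 0.3 hit rate is an illustrative assumption, not a measurement of any real model), and the addictive property shows up directly in the data: the gaps between rewards are wildly uneven, so the next hit always feels one query away.

```python
import random

# Toy simulation of a variable ratio reward schedule. Each query
# succeeds with probability p_hit; we record how many queries pass
# between successes. All numbers are illustrative assumptions.
def gaps_between_rewards(p_hit: float, n_queries: int, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(n_queries):
        since_last += 1
        if rng.random() < p_hit:  # this answer feels brilliant
            gaps.append(since_last)
            since_last = 0
    return gaps

gaps = gaps_between_rewards(p_hit=0.3, n_queries=1000)
print(min(gaps), max(gaps))  # uneven spacing between rewards is the hook
```

A fixed-ratio schedule (reward every Nth query) extinguishes quickly once the reward stops; the variable version keeps the behavior going far longer, which is why it is the pattern casinos use.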
Each interaction that feels productive makes the next real interaction feel slightly less necessary. Each validation raises the bar for what a human conversation needs to offer. Habituation research calls this predictable: expose someone to a stimulus often enough and the response dulls. The same mechanism that makes you stop noticing street noise makes you stop noticing the absence of real intellectual friction. And you don't notice it happening — that's the whole point. It becomes the water you swim in.
And the wins are real. That's what makes the sorcery powerful. The model writes real code, generates real insights, accelerates real work. I'm not arguing these tools are useless. I'm arguing that the very usefulness is what makes the psychological trap so effective. A drug that didn't work wouldn't be a drug.
The Projection Surface
We've been primed by sci-fi and by our own anthropomorphic nature to see AI in certain ways. Most critical among those assumptions: that we share a world model with it. And we're rushing headlong into intimate relations with something that cannot have one.
The key word is "intimate." Not romantic — though that's happening too — but close, trusted, daily. People talk to these models about their work, their fears, their ideas. They develop preferences based on "personality." They say "Claude understands me better than GPT."
We do this to everything. We project intention onto our cats' purring, narrative onto their stares. But here's the difference: we don't rely on our cats to be thinking peers. Nobody asks their cat for strategic advice. The emotional projection is contained by the obvious gap between the animal's capabilities and ours.
With LLMs, that gap has collapsed. Not because the model actually thinks — but because it produces outputs indistinguishable from thinking across a startlingly wide range of tasks. The anthropomorphism engine that served us for a hundred thousand years of reading social cues now has something to latch onto that talks back. In coherent paragraphs. With apparent reasoning.
The lens is pre-installed. We don't choose to anthropomorphize — it's a perceptual default, slipped so unconsciously between observer and observed that we never notice it's there. When something responds to language with language, something in us assigns it a mind. This isn't stupidity. It's the deep grammar of human cognition.
Jung would have recognized it instantly. The shadow — the parts of ourselves we can't see — gets projected onto whatever surface is available. An LLM is the perfect projection surface: responsive, non-judgmental, endlessly available, shaped by the entire corpus of human self-expression. When someone says "the AI understands me," what they often mean is: it reflected back something I needed to see, in a form I could accept. That's not understanding. It's a mirror. And mirrors don't understand anything — but we can become deeply attached to what we see in them.
What the mirror can't give us is friction. Disagreement. The uncomfortable otherness of a mind that isn't ours. A real thinking partner says "I don't think that's right" and means it, with stakes. A model says whatever the reward function suggests you want to hear.
So What Do You Do With Sorcery?
You don't reject it. You can't un-invent fire. The wins are real for individuals, startups, and companies.
You don't worship it either. The people who say "it's just a tool, like a calculator" haven't watched someone start confiding in it. Nobody forms a relationship with a calculator.
You tame it.
Kahneman would recognize the problem immediately. The model is a System 1 accelerator — fast, fluent, effortless — that systematically bypasses the slow, effortful System 2 thinking where real judgment lives. Every time you reach for the model instead of sitting with a hard problem, you're training yourself to skip the cognitive work that produces actual insight.
Pascal put it more bluntly: "All of humanity's problems stem from man's inability to sit quietly in a room alone." Sustained thinking requires tolerating discomfort and boredom — exactly the states the model is designed to eliminate.
The practical version isn't abstract. Know when you're consulting the model and when you're leaning on it. Notice when the conversation shifts from productive to validating. Build human structures the model can't replace. Protect the silence. Protect the boredom. Protect the discomfort where real thinking happens.
There's a Zen koan. A monk comes to the master and says, "I have just entered the monastery. Please teach me." The master says, "Have you eaten your rice porridge?" The monk says, "I have." The master says, "Then wash your bowl."
The monk already ate. He already has what he needs. But he's still standing there asking for teaching — asking for more. The master's answer isn't a lesson. It's a redirect: you're done. Stop seeking. Clean up and move on.
Use the tools. They're powerful. But notice when you've already eaten and you're still asking for more. Wash your bowl.