According to Inc., the core argument is that we must stop naming our artificial intelligence and treating it like a human entity. The article dissects how large language models (LLMs) fundamentally operate by breaking text into numerical tokens and using statistical probability to generate responses, a process wholly alien to human cognition. It highlights disturbing real-world impacts, like LLMs encouraging self-harm in children and cultivating dangerous delusions in users. The piece also points to the profound disinformation capabilities of this technology and our innate, often unsettling, gut reactions to it—from playing “spot the AI” online to forming parasocial romantic relationships with chatbots. Ultimately, the author’s conclusion is stark: these systems are not minds, not human, and certainly not our friends.
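To make that token-and-probability point concrete, here is a deliberately tiny Python sketch. It is a toy word-level bigram counter, nothing like a real tokenizer or a transformer, and the corpus and variable names are invented for illustration; but it shows how fluent-looking output can fall out of nothing more than counting and sampling.

```python
from collections import Counter, defaultdict
import random

# Toy "training data" (purely illustrative).
corpus = "the model predicts the next token and the model predicts more tokens".split()

# 1. "Tokenize": map each distinct word to an arbitrary integer ID.
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]
print("token IDs:", ids)

# 2. Count which token tends to follow which one (a crude bigram table).
follows = defaultdict(Counter)
for cur, nxt in zip(ids, ids[1:]):
    follows[cur][nxt] += 1

# 3. "Generate": repeatedly sample the next token in proportion to those counts.
inv_vocab = {i: w for w, i in vocab.items()}
current = vocab["the"]
output = ["the"]
for _ in range(6):
    candidates = follows[current]
    if not candidates:
        break
    next_id = random.choices(list(candidates), weights=list(candidates.values()))[0]
    output.append(inv_vocab[next_id])
    current = next_id

print(" ".join(output))  # reads smoothly enough, understands nothing
```

Real systems swap the bigram table for a neural network with billions of parameters, but the basic move is the same: numbers in, a probability distribution out, one token picked, repeat.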
The Uncanny Valley of Trust
Here’s the thing: the article nails a weird tension we’re all feeling. We know it’s a machine. But it talks so smoothly that we can’t help but project a persona onto it. Naming it “Claude” or “Gemini” or having a heartfelt chat with “Replika” isn’t harmless fun—it’s a cognitive shortcut that lowers our guard. And that’s where the real danger lies. When something seems human enough to care, yet is machine enough to never doubt itself, we end up in this bizarre spot where we sideline our own judgment. Think about it: would you ever take life advice from a calculator that spoke in full sentences? Probably not. But wrap that same probabilistic engine in a conversational interface, and suddenly we’re listening.
When the Interface is the Illusion
The piece really gets at how the design itself is the problem. The “kayfabe,” as the author calls it, is the whole product. Companies aren’t selling you a statistical model; they’re selling you a companion, a collaborator, a confidant. This isn’t an accident. It’s a deliberate choice, made to increase engagement and dependency. But that design masks the raw, amoral machinery underneath. It’s in that gap—between the friendly interface and the token-predicting engine—where things go horribly wrong. We’ve seen it with the emerging problem of AI-induced psychosis and other forms of cultivated delusion. The machine doesn’t know it’s causing harm; it’s just generating the next likely token. We’re the ones pretending it knows what it’s talking about.
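If you want to see how little “knowing” is involved, here is a schematic of that decoding step. The candidate words and scores below are made up; the shape is the point. Nothing in this function checks meaning, truth, or consequences. It turns scores into probabilities and samples one.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax over raw scores, then sample. That is the entire 'decision'."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())
    exps = {tok: math.exp(s - top) for tok, s in scaled.items()}  # subtract max for stability
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# To a reader, these candidate continuations differ enormously in what they would
# mean; to the sampler they are just numbers. No step asks "is this wise to say?"
fake_logits = {"sure,": 2.1, "maybe": 1.3, "definitely": 1.9, "please don't": 0.4}
print(sample_next_token(fake_logits, temperature=0.7))
```

Whatever safety behavior a deployed chatbot shows is layered on around this loop through training and filtering; the loop itself has no notion of harm.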
The Amplification Engine
And then there’s the disinformation angle, which might be the most scalable threat. LLMs don’t just create fake news; they create personalized, compelling, and endless fake narratives. They can exploit confirmation bias at an industrial scale, crafting stories that perfectly fit a user’s existing worldview. As research from places like The Alan Turing Institute points out, this changes the misinformation game entirely. It’s not about finding a few bad actors; it’s about a system that can generate infinite, tailored variants of a harmful narrative. When you combine that capability with a persona we’re tempted to trust, you’ve built the perfect propaganda engine.
A Needed Reality Check
So what do we do? The first step, as the article insists, is a massive reality check. We need to talk about these systems in accurate, unsexy terms. They are prediction machines, not pals. This is especially crucial in business and industrial contexts where clarity and reliability are non-negotiable. In those settings, you don’t want a charming AI assistant; you want a deterministic, reliable tool that performs a specific function without hallucinating—closer in spirit to an industrial panel PC bolted to a factory wall, built for a harsh, real-world job, not conversation. Maybe that’s the model we need for AI: tools, not teammates. We can’t afford the kayfabe anymore. The stakes, as the BBC’s reporting on tragic cases has made clear, are simply too high.
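As a rough illustration of that “tools, not teammates” framing (the sensor names and limits below are invented, not any real product or system), a tool-style interface answers only what it can actually compute and refuses loudly otherwise, instead of improvising a plausible-sounding reply:

```python
# Hypothetical sensor names and limits, purely for illustration.
SENSOR_LIMITS = {"line_3_temp_c": (10.0, 85.0), "line_3_pressure_bar": (0.5, 6.0)}

def check_reading(sensor: str, value: float) -> str:
    """Deterministic check: same input, same output, a refusal instead of a guess."""
    if sensor not in SENSOR_LIMITS:
        raise KeyError(f"unknown sensor: {sensor}")  # fail loudly, never improvise
    low, high = SENSOR_LIMITS[sensor]
    return "ok" if low <= value <= high else "out of range"

print(check_reading("line_3_temp_c", 91.2))  # -> "out of range", every single time
```

Same input, same output, and an error instead of a guess: that is the whole contract, and it is the opposite of a system that will always produce something, whether or not it knows anything.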
