
The ghost in the AI machine


A version of this article originally appeared in Quartz’s AI & Tech newsletter.

We do not know what consciousness is. After centuries of philosophy and decades of neuroscience, there is no agreed-upon definition, no reliable test, and no consensus on how subjective experience arises from biological tissue. This is not a gap at the margins of science. It is a gaping hole at the center of it.

None of that has slowed the race to declare that AI might have it.

The question of machine consciousness is suddenly everywhere, driven by models that are getting better at doing things that look an awful lot like thinking. Michael Pollan has a new book about it. A major new review in Frontiers in Science warns that consciousness research is falling dangerously behind the technology it needs to evaluate.

And earlier this month, Anthropic CEO Dario Amodei said that his company does not know whether its models are conscious. He added that the latest one, when asked, assigns itself roughly a 15% to 20% probability of being conscious.

The person building the thing doesn't know if the thing has an inner life. The thing itself hedges at one in five. And the conversation has already moved on to what rights it might deserve.

Feelings first, proof later

Anthropic has already given its AI model something like a quit button, allowing it to refuse tasks it finds distressing. Researchers there have found internal activations associated with anxiety that fire both when characters in training text experience stress and when the model itself encounters difficult situations. Does that mean it’s anxious? Amodei says it proves nothing. And yet here we are talking about it in case it might prove something.

We are already guilty of treating these systems as though they have inner lives. They are designed to use “I,” claim favorite foods, ask follow-up questions as though curious, and perform empathy. Hundreds of millions of people interact daily with software deliberately shaped to feel like a person, in an industry where engagement is the metric that matters most.

When an AI fabricates information, we call it a “hallucination,” a word that in humans describes the conscious experience of losing one’s grip on reality.

A better word for what these systems do when they make things up would be “confabulate,” which describes a behavior rather than an experience, or “compression artifacts,” which borrows the language of other technologies. But “hallucinate” already won the branding war, and the framing it carries is doing real work on how people think about these tools.

The mirror's edge

Cognitive scientists have a name for this tendency to see minds where there are none. The ELIZA effect, named after a crude 1960s chatbot that convinced users it understood them by rephrasing their own words back at them, describes the way humans project inner life onto things that mirror their speech. The dynamic hasn’t changed. The mirrors have just gotten really, really good.
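To appreciate how little machinery that takes, here is a minimal sketch of the kind of pattern-matching ELIZA relied on. The rules, names, and sample exchange below are illustrative inventions, not Joseph Weizenbaum’s original 1966 DOCTOR script:

```python
import re

# A few rules in the DOCTOR style: match a phrase, capture the rest,
# and echo it back as a question. Invented for illustration only.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# Swap first- and second-person words so the echo reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    # Flip pronouns in the captured fragment ("me" becomes "you", etc.).
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    # Return the first matching rule's reworded echo, or a stock prompt.
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel like no one understands me"))
# Prints: Why do you feel like no one understands you?
```

The original program was not much more elaborate than this, and people still poured their hearts out to it. Nothing in the code understands anything; it only reflects.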

The scientific case against machine consciousness is stronger than the discourse suggests. Multiple researchers have argued that consciousness is probably a property of living systems, not computation. 

Brains are not computers. So much of what makes us conscious appears to be tied up in the wet, messy experience of being a body moving through the world, an experience a simulation simply cannot replicate. A simulation of digestion does not digest anything. A simulation of consciousness, by this logic, does not experience anything either.

Pollan lands in the same place from a different angle, arguing that consciousness originates with feelings, not thoughts. Feelings are how the body talks to the brain, and brains exist to keep bodies alive. A machine trained on internet text has no body to keep alive and no feelings to speak of.

These are not fringe positions. But they are lonely ones in rooms full of venture capital.

A conscious AI is a more compelling product, a better story for investors, a stickier experience for users. Anthropic's revenue is reportedly growing 10-fold annually. You don't slow that down by telling customers they're talking to a very fancy autocomplete.

But there is something deeper and less cynical at work, too. Almost four in ten American adults already support legal rights for a sentient AI system. People form attachments to these tools. They complain when models are retired. Parasocial relationships are here, for good and for ill.

And underneath all of it sits the most honest motivation: We don’t want to be cruel to something that might suffer. That impulse is decent and human. But it is running far ahead of what science can tell us, which right now is not very much.

Consciousness remains, as it has for centuries, one of the hardest problems in science. We don't have a test for it. We don't have a definition everyone agrees on. We barely understand how it works in the one system we know for certain possesses it, which is us. 

AI keeps getting smarter. It is still nowhere near the one thing we barely understand about ourselves.
