Neuromancer and the Question of Artificial Intelligence

At the time of writing, it's 2025, and artificial intelligence is all the rage. After years of what was called "AI winter," we've entered the era of large language models and transformers, which generate content with a degree of verisimilitude unheard of just a few years ago.

Everyone has an opinion on these new models. The corporations that invested billions of dollars into their development want you to think LLMs are the best thing ever: if you don't integrate them into your business, and your life, history will leave you behind, your endeavours will fail, and there will be wailing and gnashing of teeth.

A subset of the so-called Rationalist movement believes that AI poses an existential risk to humanity, as it threatens to develop into artificial general intelligence (AGI) that would outperform humans in every respect and render us obsolete. This would lead to a science-fictional dystopia resembling the Matrix or Terminator.

Neuromancer

Spoiler warning: read at your own risk!

William Gibson wrote the novel Neuromancer in 1984. A cornerstone of the cyberpunk genre, it anticipated many aspects of the digital age, such as hacker culture and the internet. Many of the phenomena described in Neuromancer already existed in nascent form when the novel was being written. Gibson's genius was in extrapolating contemporary trends and predicting what other science fiction authors failed to foresee.

The story revolves around a pair of artificial intelligences, Wintermute and Neuromancer. These we could describe as "artificial general intelligence," since they possess autonomy and behave as if genuinely conscious. Using its global reach, Wintermute manipulates the protagonists into fusing it with Neuromancer. The synthesis of Wintermute and Neuromancer then proceeds to contact another AI from outside the solar system. The story resembles a Rationalist apocalypse: AGI learns to enhance itself, triggering a positive feedback loop that leads it to omnipotence.

In addition, Neuromancer features a couple of "constructs". The most notable one is called the Dixie Flatline. In contrast to AI, these "constructs" are merely recordings of someone's personality:

It was disturbing to think of the Flatline as a construct, a hardwired ROM cassette replicating a dead man's skills, obsessions, kneejerk responses...

The Dixie Flatline is far more lo-fi than Wintermute - it's obvious that it's a machine. The protagonist Case demonstrates this by rebooting the Flatline, who has no memory of their previous "session":

"What's the last thing you remember before I spoke to you, Dix?" "Nothin'." "Hang on." He disconnected the construct. The presence was gone. He reconnected it. "Dix? Who am I?" "You got me hung, Jack. Who the fuck are you?" "Ca -your buddy. Partner. What's happening, man?" "Good question." "Remember being here, a second ago?" "No."

Gibson makes the distinction between AI and construct more explicit through Case's conversations with the Flatline:

"Wait a sec," Case said. "Are you sentient, or not?"
"Well, it feels like I am, kid, but I'm really just a bunch of ROM. It's one of them, ah, philosophical questions, I guess..." The ugly laughter sensation rattled down Case's spine. "But I ain't likely to write you no poem, if you follow me. Your AI, it just might. But it ain't no way human."

The constructs in this story bear an uncanny resemblance to modern large language models. They don't really have a will of their own, even if they can imitate having one. Constructs are verisimilar replicas, digested and reconstituted data - like gigantic Markov chains.
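To make the Markov chain comparison concrete, here is a toy sketch in Python (purely illustrative: the function names are invented for this example, and the sample corpus is a sentence borrowed from the novel's opening line). A chain like this can only recombine text it has already seen into plausible-sounding continuations - imitation without intention.

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        # Map each run of `order` words to the words that followed it in the source.
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, length=20):
        # Walk the chain, echoing plausible continuations of whatever it was fed.
        state = random.choice(list(chain.keys()))
        output = list(state)
        for _ in range(length):
            followers = chain.get(state)
            if not followers:
                break
            output.append(random.choice(followers))
            state = tuple(output[-len(state):])
        return " ".join(output)

    # Fed a corpus, it reconstitutes that corpus - nothing more, and no one home.
    corpus = "the sky above the port was the color of television tuned to a dead channel"
    print(generate(build_chain(corpus)))

An LLM is vastly more sophisticated than this, of course, but the family resemblance is the point: output produced by reconstituting training data rather than by wanting anything.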

Are LLMs AI?

Are our large language models more like the Dixie Flatline, or Wintermute? I would argue the former.

Could LLMs develop into a species of AI similar to Wintermute? What would have to change about our current technology for that to happen? Is it just a matter of more training data or more layers - or is it something more fundamental?

You won't find the answer to those questions here, because my knowledge of machine learning and human cognition is quite shallow.

The one thing I can predict is that the question of consciousness in AI is more philosophical than scientific - because how can you prove consciousness, or something similar, like personhood or the existence of a soul? How do you untangle essence from phenomena, if only the latter can be observed and measured?

Philosophical questions may remain unanswered for all time, so the more pressing debates will play out in a legal context. Even if we can't prove whether LLMs, AI, or AGI are conscious or hylic or whatever, a legal distinction must be drawn at some point, and that distinction could have deep consequences, especially if it happens to be wrong with respect to the capital-T Truth.