Bad Philosophy about Artificial Intelligence

I'm not a trained philosopher, and I've never even managed to make it through a single page on that Stanford philosophy wiki, so take this article with a grain of salt. It's likely that there's a common term for the argument that I lay out in the following paragraphs, and almost certain that quite a few effective counter-arguments are already widely known.


It's common when discussing artificial intelligence to question whether an AI can ever be considered 'human'. Can a computer program have a soul? Does it even matter whether it has a soul? That is, should we treat it as human regardless?

Famously, Alan Turing described what's now called the 'Turing test', a method of gauging a machine's 'intelligence' by judging how well it imitates a human being's behavior. It's tempting to think that, if a machine passes the Turing test, we should consider it to be human. This could come with restrictions, of course: perhaps we would judge the AI's ability to fully imitate a human, not just through text or pictures, so the bar would still be quite high, even in the age of GPT-3 and whatever technology Google is keeping secret. Imagine, though, that the technology had advanced to the point where an AI could fully imitate a human. Would you really feel comfortable treating it as a real person?

A common argument against the idea of 'AI with a soul' is that artificial intelligence is, in fact, demonic. While ideas of what a demon is vary, most people would agree that demons are something other than human; therefore, if an AI is demonic, it cannot be human. Depending on one's personal beliefs, this may seem either ludicrous or completely intuitive. Is AI not an act of incredible hubris, an attempt to discover hidden knowledge, or a profane creation, like a Golem or an orc?

The demon argument can be quite compelling if one has an open mind, but it is probably too whimsical to convince many programmers or managerial figures that an AI cannot be human. A more mundane argument is this: in cases such as the Turing test, we use a human as the standard of comparison. Without getting into complex ontological arguments, we shall pretend we know what a human is. Now imagine a human man named Rick. We can say that Rick is inherently human. Is his humanity contingent upon passing the Turing test? No; Rick is a human, simple as that. Now imagine a future where the government is run by a conglomerate of Silicon Valley corporations called FHGMAN. The government has a new line of androids that must pass the Turing test before being released to the public. If an android fails the Turing test, it is immediately destroyed by a bored bureaucrat; since the robot cannot legally be considered human, it must not be released to the public. In this way, the AI's 'humanity' can always be brought into question, because it is contingent upon something. Unlike the AI, a human like Rick can do anything, even leak motor oil out of his joints, without losing a shred of his humanity.


There are ways to complicate this argument. What if you were dealing with someone who appeared entirely human, but AI had reached the point where you, an ordinary person, had no way of knowing whether you were actually interacting with an AI? In some situations, we already have to deal with this on the internet. That sort of practical problem is a difficult one to address, though, and my argument is more of an abstract or philosophical one. In any case, an AI cannot be human, whether or not we're able to tell.