In his famous thought experiment, Alan Turing pictured somebody conversing with another person and a computer, both hidden from view. If the questioner can't tell the computer from the human being, the machine has passed the “Turing Test.” But here's a question for a human or a machine to answer: Why did Turing pick speech as his proof?
The Test is usually described as a way to determine whether a computer has achieved consciousness, but Turing's original framing was more subtle. “I believe (the question of whether machines can think) to be too meaningless to deserve discussion,” he wrote. “Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
Now, that's interesting: Not only did Turing choose good conversation as a valid substitute for proof of machine “thought,” but he then added an implied proof – based on what people say. If people say machines “think,” then they do think. If people say they're conscious, then they are conscious.
Why such an emphasis on speech – the machine's, and our own? The idea that language, words, and names are a measurement of consciousness goes back at least 3,000 years, to the Tower of Babel story from the Book of Genesis. “And the whole earth was of one language, and of one speech,” it says, “and they said … let us build us a city and a tower … and let us make us a name.” You know what happens next: “And the Lord said, Behold, the people is one … now nothing will be restrained from them, which they have imagined to do.” The great tower, that literal Hive Mind with its worldwide common language (HTML?), came crashing down. The lesson? Language and knowledge equal personhood, but too much equals Godhood.
People could create artificial life in the ancient texts, too – but their creations couldn't speak. In the Talmud, Rabbah makes an artificial man that looks just like the real thing, but a shrewd scholar – one Zera, whom I picture as looking like Peter Falk in Columbo – administers a Turing Test and the creature flunks: “Zera spoke to him, but received no answer. Thereupon he said unto him: 'Thou art a creature of the magicians. Return to thy dust.'”
Flash forward to the 1600s and Descartes, who wrote in Discourse on the Method: “If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others.”
I don't know if Descartes read the Talmud, but he claimed to be religious and even wrote an ontological argument for the existence of God (if not a very convincing one). There's no question he read Genesis, as well as many other papers, poems, and stories derived from these ancient texts and legends.
Did Turing read Descartes? We don't know – but we can be pretty sure he saw another work: Boris Karloff's Frankenstein. The monster, who was eloquent in Mary Shelley's book, was mute in the movie. Whether or not the filmmakers were echoing these ancient stories, they'd undoubtedly seen the 1920 German film The Golem, based on a folktale derived from the Talmud passage about the wordless “man” made of dust. The Golem story spread through the shtetls of Eastern Europe in the 18th and early 19th centuries, around the time the Frankenstein story was written. They may both have stemmed from the same fear – that humanity's industrial advances were bringing us to a new Babel even as new medical discoveries invaded God's turf.
I'm not a big fan of the Turing Test. I'm sympathetic to the Chinese Room argument that you can replicate speech without creating the sentience behind it. I lean toward the idea that most speech is just an output for the human species, the way honey is for bees or webs are for spiders. My first mother-in-law could weave something that looked like a spiderweb, if you asked her nicely, but that didn't make her an arachnid. So if we build an AI – or meet an alien, for that matter – that can speak like a human being, I still won't be completely convinced it has consciousness like ours.
Which gets us to singing. Its main evolutionary purpose seems to be attraction – either sexual attraction, or a way of establishing trust. Daniel Levitin suggests that singing might have been used to convey honesty when a stranger approached a new community, because the emotion it conveys is more difficult to fake. Maybe that's why Bob Dylan's more popular than Michael Bolton: It's easier to lie with words than with music, and the successful transmission of emotion is more important to us than the sweetness of the voice.
So I hereby propose a modification to Turing's test: Instead of asking our entity to speak, let's ask it to sing. If it can make us cry with a sad song, we'll say that it's conscious. And if it can get us aroused – with, say, a new version of “Sexual Healing” – well, then let's just say our experiment could take an unexpected turn.
It's true that all of the arguments against the Turing Test could also be used against this one, so it doesn't really advance the debate very far. But what the hell: At least we might hear a decent song for a change, instead of all the crap they've been playing lately.