# Monday, January 05, 2015

### The Imitation Game

**by Jonathan Kujawa**

In *The Imitation Game* Benedict Cumberbatch plays the amazing, fascinating, and ultimately tragic Alan Turing. I haven't seen it yet, but the reviews are good and it is bound to be up for a bunch of awards. It certainly does a thorough job of covering the Oscar checklist: Historical setting? Public and personal drama? English accents? Beating the Nazis? Check 'em all!

The basic storyline should be well known to all. Turing was an instrumental part of the UK's successful efforts to break the "unbreakable" German Enigma code. The work of the codebreakers at Bletchley Park is widely acknowledged to have shortened World War II by several years and, consequently, to have saved many thousands of lives. The tragedy of the story is that Turing was convicted in 1952 of "gross indecency" for having a relationship with another man (Arnold Murray). Turing was given the option of either prison or hormone treatments (i.e., chemical castration). He chose the latter. In 1954 Turing died of cyanide poisoning at the age of 41 in what was ruled a suicide [2].

Appallingly, it wasn't until 2009 that the UK government apologized for its treatment of Turing. And it wasn't until public pressure during the 2012 centennial of Turing's birth that they considered a pardon. He was finally pardoned on December 24, 2013.

If there was ever a place where one's sexual preferences, gender, race, economic status, religious beliefs, what-have-you should be irrelevant, it should be mathematics. After all, the Pythagorean Theorem doesn't give a hoot about such things. But mathematics is a human endeavor and humans can't seem to help but care about these things. I recently read a quote by Chimamanda Ngozi Adichie which put this nicely:

> The problem with gender is that it prescribes how we should be, rather than recognizing how we are.

Things continue to improve [3]. It's heartening to note that Turing's treatment by the UK government is absolutely shocking now. And ten years ago I couldn't imagine that gay marriage would now be widespread in the US and on the verge of becoming universal [4].

For mathematicians Turing's place in history was guaranteed long before he helped beat the Nazis. At the age of 24 Turing answered Hilbert's famous "Entscheidungsproblem" (Decision Problem). More importantly, he did it by inventing theoretical computer science, and he did it before there were computers!

Hilbert asked the following rather innocuous question: Is there a step-by-step procedure which can decisively determine if any given mathematical statement is true or false? More precisely, is there an algorithm which can decide if a given statement can be proven using the rules of first-order logic?

It is worth noting that Hilbert is not asking that the algorithm provide the proof of the statement's truth or falsehood. He is only asking for the algorithm to give the final answer. Of course if you could provide an algorithm which also provides the proof along with the answer, then all the better.

In 1936 Turing and, separately, Church, proved that no such algorithm exists! If it did, then we could dream of building a magic machine which would tell us if the Riemann Hypothesis is true. Turing-Church tells us that no such magic machine exists. As a mathematician by trade, I like to think this means my job can't be lost to automation.

So how do you go about showing there is no such algorithm? Perhaps we just haven't been clever enough or tried a sufficiently elaborate algorithm. Maybe it's just around the corner. There are infinitely many possible algorithms to consider. How can we possibly eliminate them all?

The first thing one needs to do is make precise what it means to be able to do something algorithmically. Intuitively, an algorithm is a specific list of instructions which you can mindlessly follow like an especially detailed recipe. Returning to WWII, imagine all the complicated, repetitive calculations the folks at Bletchley Park had to do day after day in their study of the Enigma machine. Those calculations needed to be done quickly and accurately by the thousands of women at Bletchley Park who were the original "computers". To ensure this, they needed detailed, step-by-step instructions which anyone could follow. Such things are what we'd call an algorithm. That rough and ready idea is good enough for wartime, but too imprecise to study mathematically.

Church formalized the notion of an algorithm or, equivalently, a computation in formal logic using his theory of Lambda Calculus. Turing, however, had the brilliant insight to take the machine metaphor seriously. To him an algorithm was something which could be performed by a machine. At first glance you are no better off. Instead of the abstract problem of considering all possible algorithms, you now have the practical (but at least as difficult) problem of considering all possible machines.

Turing's second insight was that you could build a Universal Machine. This Universal Machine could do any calculation done by a bespoke computing machine. That is, if you like, the Universal Machine can flawlessly imitate any other machine. In particular, if you could give an example of a problem which could not be decided to be True or False by the Universal Machine, then that problem couldn't be decided by any machine at all. Instead of checking the problem against the infinitely many possible machines/algorithms, you only have to test it against the Universal Machine.

It is worth stopping to marvel at Turing's idea. Nowadays we are so used to the idea of a computer and its ability to run any software program, it can be nearly impossible to picture a time when such things were beyond imagining.

It is true that Charles Babbage designed his Analytical Engine in the 1830s and 1840s. The equally remarkable Ada Lovelace wrote notes at the time which show that she understood that Babbage's machine was capable of following arbitrarily complex algorithms. Looking back, we see that the Analytical Engine was in fact a Turing Universal Machine.

But in 1936 Babbage's machine still hadn't been built. In true mathematician's fashion, Turing theoretically constructed such a machine and studied it anyway. In Turing's original paper the machine had the following key ingredients:

- A finite sequence of symbols to work with (usually called an alphabet).
- A long strip of paper which can be read, written on, and erased.
- A read/write/eraser head which can travel back and forth along the paper strip.
- A list of rules, one for each symbol in the alphabet.

Starting at some point on the paper strip, the machine reads the symbol at that location, looks up the symbol on the list of rules, and then follows the instructions given by that rule (which may involve erasing the symbol, writing a new symbol, and moving one step to the left or right on the strip of paper). The machine then repeats this process at the new location on the paper strip.
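The read-look-up-act cycle just described can be sketched in a few lines of code. The rule-table format and function names below are my own illustration, not Turing's notation; the example rules are a simplified version of the first machine in Turing's 1936 paper, which prints 0 1 0 1 ... forever (here, for a fixed number of steps, and omitting the blank cells Turing left between digits).

```python
def run(rules, steps, start_state="a"):
    """Simulate a Turing machine on an initially blank tape.

    rules maps (state, symbol) -> (write, move, next_state),
    where move is -1 (left) or +1 (right).
    """
    tape = {}                 # sparse tape: position -> symbol; blank is " "
    pos, state = 0, start_state
    for _ in range(steps):
        symbol = tape.get(pos, " ")                      # read under the head
        write, move, state = rules[(state, symbol)]      # look up the rule
        tape[pos] = write                                # write the new symbol
        pos += move                                      # step along the tape
    # Return the tape contents from leftmost to rightmost visited cell
    return "".join(tape[i] for i in sorted(tape))

# Two states suffice for the alternating 0/1 printer:
rules = {
    ("a", " "): ("0", +1, "b"),
    ("b", " "): ("1", +1, "a"),
}

print(run(rules, 6))   # -> 010101
```

Everything the machine "knows" lives in that little rule table; the loop itself is the same for every machine, which is a first hint at why a single Universal Machine can imitate all of them.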

Nowadays we would call the paper tape the memory and the list of rules the programming language. To program Turing's machine you simply write the appropriate string of symbols on the paper tape, set its read/write/erase head into position and let it go to work. For Turing an algorithm is precisely something you can program such a machine to do. You can see a living, working Turing Machine in this video.

Of course you could have a single rule which just writes the same symbol over and over again. That would make for a pretty dumb machine. It takes a little care to have a sufficiently rich list of symbols and rules to make it a Universal Machine. Turing gave such a list and showed that the resulting machine would be universal (you can read Turing's paper here). Namely, he showed that given any algorithm you can encode it as a sequence of symbols from the alphabet in such a way that when Turing's Machine follows the instructions laid down on the tape, it exactly evaluates the given algorithm. That is, the Universal Machine can be programmed to perfectly imitate any other machine.

Turing's machine can do everything from adding one and one, to running the software of a smartphone, to rendering the special effects in the latest blockbuster film. Of course in real life it would be excruciatingly slow. But in *theory* everything is possible, and mathematicians live in the land of theory!

My favorite example of a Turing Universal Machine was designed by Alvy Ray Smith and can be found at turingtoys.com.

It prints out as a double-sided business card. Imagine, a full-fledged computer you can use as a bookmark, to wedge under the leg of a wobbly table, to play Candy Crush, or to simulate global climate change over the past century. The applications are endless [5]! The only real difference between this business card and a supercomputer is one of degree. Supercomputers may run slightly faster, but with patience my card can do whatever they can do!

How then does Turing answer Hilbert's Entscheidungsproblem? Using his Universal Machine Turing shows that if there is an algorithm which can decide the veracity of any given logical statement, then there is a process which can determine if any given computing machine will write a zero (in Turing's paper zero and one are always part of the alphabet). Turing then shows there is no such process using an argument which will be familiar to fans of Cantor's Diagonalization argument.
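To give the flavor of that diagonal argument in Cantor's original setting: given any countable list of infinite binary sequences, you can build a sequence that differs from the n-th one at position n, so it appears nowhere on the list. The representation below (sequences as functions from index to bit) is my own illustrative choice.

```python
# Cantor's diagonal argument, the engine behind Turing's proof.
# An infinite binary sequence is a function n -> bit; a countable list
# of such sequences is a function i -> sequence.

def diagonal(sequences):
    """Return a sequence that differs from sequences(i) at position i."""
    return lambda n: 1 - sequences(n)(n)   # flip the i-th bit of the i-th sequence

# Example list: sequence i is constantly 0 for even i, constantly 1 for odd i.
seqs = lambda i: (lambda n: i % 2)

d = diagonal(seqs)
# d cannot be any sequence on the list: it disagrees with sequence i at i.
for i in range(10):
    assert d(i) != seqs(i)(i)
```

Turing's version runs the same trick on machines rather than sequences, listing all possible machine descriptions and constructing a problem on which any would-be decision procedure must fail.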


[2] Apparently this is not entirely settled and some believe his death was accidental and a few think murder!

[3] Not everywhere, sadly.

[4] Although some states, like my home state of Oklahoma, will kick and scream all the way!

[5] Some may be easier to implement than others.

Posted by Jon Kujawa at 12:35 AM | Permalink