Algorithms and The Meaning of Explanation

by Daniel Ranard


We're surrounded by algorithms. Facebook uses an algorithm to curate your newsfeed, credit agencies use an algorithm to compute your credit score, and soon an algorithm may replace you in the driver's seat. As algorithms come to dictate larger swaths of life, it's important to understand exactly what they are, and especially how they're drastically changing.

On one hand, algorithms are nothing new. An algorithm is just a precise set of instructions for carrying out a task. Any chef with a cookbook already follows the written algorithms within: add two cups of water, mix until smooth. Unlike computer algorithms, these instructions are written in a language meant for humans, so they still retain some ambiguity: how should you go about lifting the cup, and what does it mean to mix until smooth? While most of us have the know-how to surmount these ambiguities, computers are designed to follow much more precise instructions. And unlike cookbook recipes, computer algorithms don't concern the manipulation of physical objects like cups and bowls, but rather the manipulation of abstract objects like numbers and bits. A computer algorithm might say: "Take two numbers as inputs, multiply the larger one by seven, then add them," and so on. Though a modern computer is designed to convert these instructions into physical manipulations of the electricity within, one might also use an abacus, or pen and paper. Indeed, some of today's algorithms have been around for centuries: the way a computer calculates square roots is not so different from the method prescribed by Hero of Alexandria.
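To make that concrete, here is a rough sketch of Hero's idea in Python (the language and the particular numbers are my own choices, purely for illustration): start with a guess, then repeatedly average the guess with the number divided by the guess.

```python
def hero_sqrt(s, iterations=20):
    """Approximate the square root of s using only arithmetic,
    in the spirit of the method attributed to Hero of Alexandria:
    repeatedly average a guess with s divided by that guess."""
    guess = s / 2.0 if s > 1 else 1.0   # any positive starting guess will do
    for _ in range(iterations):
        guess = (guess + s / guess) / 2.0
    return guess

print(hero_sqrt(2.0))    # roughly 1.41421356
print(hero_sqrt(49.0))   # roughly 7.0
```

Each pass through the loop uses nothing but addition and division, yet the guess quickly homes in on the square root.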

So while we built bigger and better equipment to execute our algorithms in the 20th century, the central idea of the algorithm remained unchanged for millennia. What would the algorithms of the future look like? First, let's take a diversion and explain where we thought the most advanced algorithms were likely headed, decades ago. Then, we'll see how the recent success of machine learning methods has changed that vision for many, even posing new questions about the nature of knowledge and explanation.

For a while it appeared that the most sophisticated algorithms of the future would be distinguished by their increased abstraction, in a particular sense of the word used by computer scientists. To see what this means, imagine a computer programmer wants the computer to complete a task that involves, as some intermediate step, finding the square root of several numbers. However, imagine she writes her computer code in some programming language (call it "Language 1") that only allows commands involving addition, subtraction, multiplication, and division. Then she can't type "take the square root" in her code, because the machine won't know how to handle that instruction. Instead, she writes out a list of commands for iterated addition and division, using the clever algorithm by Hero of Alexandria that ultimately finds the square root.

That works, but she gets tired of re-writing Hero's algorithm each time she needs the computer to take a square root. So she develops a new programming language (call it "Language 2"), which also includes the command "take the square root." Then she writes a separate program in Language 1, a special translation program, which takes programs written in Language 2 and converts them to Language 1, for instance by inserting Hero's algorithm whenever it encounters the command "take the square root." Now, if another programmer wants to write a program in Language 2, he won't even need to know about Hero's algorithm: he can just write "take the square root" and carry on. In that sense, the process of taking the square root has been "abstracted" in Language 2, because the concrete details of the process may be ignored.
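Here's a rough Python analogy of that abstraction step (Python standing in for our hypothetical Language 2, with names I've invented for illustration): the details of Hero's iteration are hidden inside a single named command, and a second programmer simply calls it.

```python
# "Language 1" level: the implementation uses only +, -, *, and /.
def take_the_square_root(s, iterations=20):
    guess = 1.0
    for _ in range(iterations):
        guess = (guess + s / guess) / 2.0   # Hero's iteration, tucked away
    return guess

# "Language 2" level: another programmer just writes "take the square root"
# and never needs to know about Hero's algorithm.
hypotenuse = take_the_square_root(3.0 ** 2 + 4.0 ** 2)
print(hypotenuse)   # roughly 5.0
```

That, roughly, is what abstraction buys: the second programmer treats "take the square root" as if it were a primitive operation.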

Language 2 could make use of many abstracted commands, or even a very different syntax. Then we could develop a Language 3, and so on; these languages are called "higher level." For instance, the popular language "C" is higher-level than the earliest programming languages. Meanwhile, our hypothetical Language 1 was already somewhat abstract: the imagined programmer could use commands like "add these two numbers," without worrying about how the addition was actually implemented in the physical circuitry. The work of the modern programmer rests crucially on this tower of abstraction.

A few decades ago, many computer scientists thought the future of computer technology would be somewhere high up on this tower of abstraction. We would write code that was nearly in English, and we would build computers with abstract language and problem-solving skills. We imagined writing algorithms that would more closely match the conscious stream of reasoning that fuels – or appears to fuel – our own thoughts. That's what we intended to call artificial intelligence.

But that's not quite what happened. These days, state-of-the-art algorithms for many tasks look a lot different than we expected. They don't rely on cleverly designed mathematical formulas, or programming languages that feel like English. They don't emulate our conscious thoughts, or form analogies, or manipulate language. In some sense we don't know what they do at all. We just know they work.

Here's how one of these modern machine-learning algorithms works. Say we want the computer to identify which photographs have cats and which don't. The input to the program is the numerical data representing the photo: the list of numbers quantifying the brightness and color of each pixel. Before describing an algorithm based on machine learning, though, let's think about a more traditional approach. Twenty years ago, you might have written a computer program that first compares all the pixel values, looking for neighboring pixels that have large differences in brightness. Large differences in brightness would indicate an edge, like the border between the cat's black face and the white carpet. Then the program might look for collections of edges that form a circle. The programmer could develop the abstract command "find a circle," using this command to identify circles that might correspond to the eyes and face. Finally, the program might compare the relative positions of the circles using some formula, making a guess about whether it's a cat.
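Here's a toy sketch of just the first step of that traditional approach, in Python (my own simplified illustration, not a real cat detector): flag an "edge" wherever two neighboring pixels differ sharply in brightness.

```python
def find_edges(image, threshold=50):
    """image: a 2D list of brightness values (0-255).
    Marks True wherever a pixel and its right-hand neighbor
    differ in brightness by more than the threshold."""
    edges = []
    for row in image:
        edge_row = [abs(row[x] - row[x + 1]) > threshold
                    for x in range(len(row) - 1)]
        edges.append(edge_row)
    return edges

# A made-up 2x4 "photo": dark pixels on the left, bright pixels on the right.
tiny_image = [
    [10, 12, 200, 205],
    [11, 13, 198, 202],
]
print(find_edges(tiny_image))
# [[False, True, False], [False, True, False]] -- the edge sits in the middle
```

A real pipeline would go on to group edges into shapes like circles and apply hand-written rules about their positions, but every step is one the programmer designed and can explain.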

Traditional image-recognition algorithms vary in their exact approach, but they all have a similar flavor to this toy example. And that was the state of the art, until recently. Now let's ask what a machine-learning algorithm would do instead, given the numerical data representing the pixels. The machine-learning algorithm says something like: "Add the values of the first two pixels, then multiply by seven, then subtract the third pixel value…" Wait: this seems like a huge step backward, right? For one, the algorithm only makes use of simple arithmetic operations, rather than taking advantage of more abstract commands like "find a circle." And worse, this algorithm seems impossible to design. How could we possibly determine the right sequence of arithmetic operations to apply to the pixel values, such that the final answer is a 0 or 1 indicating the presence of a cat? In fact, the whole point of using higher levels of abstraction was so that we could avoid attempting to come up with the right sequence of arithmetic operations. Even though the traditional image-recognition algorithm also boils down to a sequence of primitive operations, the programmer writes the algorithm by stringing together simpler algorithms (like finding circles or edges), so that the programmer never needs to write out or even imagine the entire sequence of arithmetic steps.

But I've left out the essential component of the machine-learning algorithm. In truth, we start with less than an algorithm: the hollow shell of an algorithm, a blueprint for an algorithm. It looks something like: "Add the values of the first two pixels, then multiply by X, then subtract the third pixel value, then multiply by Y," and so on. The numbers X and Y are "parameters" that we haven't specified yet. That is, we've written a long sequence of arithmetic operations acting on the inputs, but we've left some of the instructions blank, so the algorithm doesn't make sense until we specify actual values for the parameters X and Y. We could just program the computer to plug in random values for the parameters and then run the algorithm, but the answer wouldn't be any better than a random guess. Nevertheless, that's how we start: the computer uses random values for the parameters.
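To make the blueprint idea concrete, here's a deliberately simplified sketch in Python (my own illustration, essentially a tiny linear model: one blank weight per pixel plus an offset, filled in with random values to start).

```python
import random

def blueprint(pixels, params):
    """A fixed chain of arithmetic on the pixel values, with the blanks
    (the parameters) supplied separately. Output 1 means "cat," 0 means "no cat"."""
    weights, offset = params
    total = offset
    for value, weight in zip(pixels, weights):
        total += value * weight          # "multiply this pixel by X and add it in..."
    return 1 if total > 0 else 0

def random_params(num_pixels):
    weights = [random.uniform(-1, 1) for _ in range(num_pixels)]
    offset = random.uniform(-1, 1)
    return weights, offset

# With random parameters, the answer is no better than a coin flip.
pixels = [0.2, 0.9, 0.4, 0.7]            # a made-up four-pixel "photo"
print(blueprint(pixels, random_params(len(pixels))))
```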

That's where the learning comes in. The programmer needs hundreds or thousands of examples of photos, both with and without cats – the "training" data. Then she runs the algorithm with randomly chosen parameters, observing how often it correctly identifies the cats in the training data. She slightly tweaks the parameters X, Y, etc., systematically raising and lowering each one by small amounts, until the algorithm works slightly better. The programmer repeats this process and automates it, writing a separate program – the training program – that automatically runs the algorithm, tests its performance on the sample photos, varies the parameters, re-tests, and so on. The training program carries on, tweaking the parameters until the algorithm works.
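Continuing the toy blueprint above, here's a crude sketch of that training program in Python (real systems use gradient-based updates and vastly more data, so treat this only as the shape of the idea): nudge one parameter at a time, keep the nudge if accuracy on the training examples doesn't get worse, and repeat.

```python
import random

def blueprint(pixels, params):
    weights, offset = params
    total = offset + sum(v * w for v, w in zip(pixels, weights))
    return 1 if total > 0 else 0

def accuracy(params, training_data):
    hits = sum(blueprint(px, params) == label for px, label in training_data)
    return hits / len(training_data)

def train(training_data, num_pixels, rounds=500, step=0.05):
    # Start with random values for the parameters.
    weights = [random.uniform(-1, 1) for _ in range(num_pixels)]
    offset = random.uniform(-1, 1)
    params = (weights, offset)
    for _ in range(rounds):
        i = random.randrange(num_pixels)       # pick one weight to tweak
        for delta in (step, -step):            # try raising, then lowering it
            trial_weights = list(weights)
            trial_weights[i] += delta
            trial = (trial_weights, offset)    # for simplicity, the offset stays fixed
            if accuracy(trial, training_data) >= accuracy(params, training_data):
                params, weights = trial, trial_weights   # keep the tweak
                break
    return params

# Made-up "training data": four-pixel photos labeled 1 (cat) or 0 (no cat).
training_data = [([0.9, 0.8, 0.1, 0.2], 1), ([0.1, 0.2, 0.9, 0.8], 0),
                 ([0.8, 0.9, 0.2, 0.1], 1), ([0.2, 0.1, 0.8, 0.9], 0)]
trained_params = train(training_data, num_pixels=4)
print(accuracy(trained_params, training_data))   # final accuracy on the training examples
```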

(The algorithm that's being optimized is often called a "neural network," for reasons that I think serve only to obscure its true nature at this point in the explanation!)

That's what practitioners call "learning": when the training program tweaks the unspecified parameters of the algorithm to optimize the performance on the training data. Eventually, with the right values of the parameters, the optimized algorithm functions incredibly well. But how does the algorithm really work? On one hand, we know exactly how the algorithm works. It does exactly what we told it, following the instructions: "Add the values of the first two pixels, then multiply by X, then…", using whichever values of X, Y, etc. that the training program finally settled on. Ultimately, the computer is performing some long chain of arithmetic operations to identify the cat, and we know exactly what those operations are.

On the other hand, what's the machine-trained algorithm actually doing? It's hard to even pose the question. We already admitted that we do know what the algorithm is doing. We'd just like to explain it better. By which we mean, maybe, that we'd like to understand the algorithm in more human terms? That we'd like to determine its structure? It's hard to say. Somehow, we'd like a description more akin to our description of how the traditional image-recognition algorithm worked: "It finds the edges, then finds the circles, then compares their positions…" But does any "simple" interpretation exist for the algorithms produced by machine learning, even if we allow rough or imprecise descriptions?

In most cases, I think that's somewhat of an open question. If a genius engineer stared long enough at the parameters used in the machine-trained algorithm, would she discover the algorithm is really performing a series of sub-tasks like "find the circle," akin to the sub-tasks of a more traditional, hand-crafted algorithm? In the case of image recognition, researchers actually have analyzed the machine-trained algorithms, and they can sometimes identify various patterns (like circles) that the algorithm looks for. In many cases, however, researchers can't even begin to describe how a trained algorithm works. And what would it mean to have such a description? Perhaps it would mean having a simpler or compressed description of the trained algorithm, in some quantitative sense. [1]

It may be hard to explain how a machine-trained algorithm works, but should we care, as long as it works well? That's up to you. If an algorithm is driving your car, or looking for a tumor in a medical scan, you might like to have a better idea of its "thought process." For one, the structure of the algorithm will affect how it performs in extreme cases, especially cases very unlike those found in the training data. If the algorithm driving your car has learned to recognize stop signs by looking for octagons rather than looking for the word "stop," you'd like to know that before you run into an oddly-shaped stop sign. The problems multiply when you consider issues like liability, ethics, and machine-learned biases. (For more discussion of the problems raised by machine learning in medicine, you can read the wonderful article by Siddhartha Mukherjee.)

Meanwhile, if we have a machine-trained algorithm to identify cats, have we obtained any knowledge about finding cats in photos? Would knowing the machine-trained algorithm also help us identify dogs? (Yes, in some cases, with what's called "transfer learning.") Perhaps what we mean by knowledge requires generalizability. Or perhaps some tasks require extremely specialized "knowledge," such that the best possible algorithm can't be regarded as the composition of more generalized components. (You may also like this article by David Weinberger on knowledge in artificial intelligence.)

You could even ask whether machine-trained algorithms are "algorithms" at all. Sure, they specify precise instructions for computation. But the list of instructions may be arbitrarily complicated, wild and inscrutable to the human mind. Should we call any such process an algorithm, just because it's rule-based? Then virtually all processes are algorithms, if you believe that atoms follow the laws of physics. But it means little to say that the atoms in a gas are following an algorithm. Will machine learning, in some sense, herald the death of the algorithm?

Meanwhile, if in 1990 we didn't know what today's artificial intelligence would look like, maybe we can't know how it will look in 2050. After all, the best algorithms don't always turn out how we expect.

[1] One might propose an experiment: Consider a computer file that lists all the parameters of the algorithm. As you train the algorithm and update the parameters, does the computer file listing the parameters become more easily compressible (say, using some file compression program)? Maybe it's a bit far-fetched to guess "yes," but it opens an interesting line of questions, which may well exist in the literature. Here's another question: if you performed some compression on the parameters, and then re-trained the network starting from those values, could you achieve better results, having enforced some sort of uniformity?
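For what it's worth, here is one crude way you might set up the first experiment, sketched in Python (the serialization, the choice of compressor, and the imagined training loop are all my own stand-ins, not anything from the literature):

```python
import json
import zlib

def compressed_size(parameters):
    """Serialize the list of parameters and report its size in bytes after
    running it through a generic compressor (zlib, standing in for
    'some file compression program')."""
    raw = json.dumps([round(p, 6) for p in parameters]).encode("utf-8")
    return len(zlib.compress(raw))

# Hypothetical usage: training_run() is an imagined generator that yields the
# current parameter list after each training step. Log the compressed size
# every so often, and watch whether it trends downward as training proceeds.
#
# for step, parameters in enumerate(training_run()):
#     if step % 1000 == 0:
#         print(step, compressed_size(parameters))
```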