Kind Of Like A Metaphor

by Misha Lepetic

“I got my own pure little bangtail mind and
the confines of its binding please me yet.”
~ Neal Cassady, letter to Jack Kerouac

One of the curious phenomena that computing in general, and artificial intelligence in particular, has emphasized is our inevitable commitment to metaphor as a way of understanding the world. Actually, it is even more ingrained than that: one could argue that metaphor, quite literally, is our way of being in the world. A mountain may or may not be a mountain before we name it – it may not even be a mountain until we name it (for example, at what point, either temporally or spatially, does it become, or cease to be, a mountain?). But it will inhabit its ‘mountain-ness' whether or not we choose to name it as such. The same goes for microbes, or the mating dance of a bird of paradise. In this sense, the material world existed, in some way or other, prior to our linguistic entrance, and these same things will continue to exist following our exit.

But what of the things that we make? Wouldn't these things be more amenable to a purely literal description? After all, we made them, so we should be able to say exactly what these things are or do, without having to resort to external referents. Except we can't. And even more troubling (perhaps) is the fact that the more complex and representative these systems become, the more irrevocably entangled in metaphor we find ourselves.

In a recent Aeon essay, Robert Epstein briefly guides us through a history of metaphors for how our brains allegedly work. The various models are rather diverse, ranging from hydraulics to mechanics to electricity to “information processing”, whatever that is. However, there is a common theme, which I'll state with nearly the force and certainty of a theorem: the brain is really complicated, so take the most complicated thing that we can imagine, whether it is a product of our own ingenuity or not, and make that the model by which we explain the brain. For Epstein – and he is merely recording a fact here – this is why we have been laboring under the metaphor of brain-as-a-computer for the past half-century.

But there is a difference between using a metaphor as a shorthand description, and its broader, more pervasive use as a guide for understanding and action. In a 2013 talk, Hamid Ekbia of Indiana University gives the example of the term ‘fatigue' used in relation to materials. Strictly speaking, ‘fatigue' is “the weakening of a material caused by repeatedly applied loads. It is the progressive and localised structural damage that occurs when a material is subjected to cyclic loading.” (I generally don't like linking to Wikipedia but in this instance the banality of the choice serves to underline the point). Now, for materials scientists and structural engineers, the term is an explicit, well-bounded shorthand. One doesn't have pity for the material in question; perhaps a poet would describe an old bridge's girders as ‘weary' but to an engineer those girders are either fatigued, or they are not. Once they are fatigued, no amount of beauty rest will assist them in recuperating their former, sturdy (let alone ‘well-rested' or ‘healthy') state.

The term ‘fatigue' is further instructive because it illustrates the process by which metaphor spills out into the world. If a group of engineers is discussing an instance of ‘fatigue', their use of the term in conversation is precise and understood. This is a consequence of the consistency of their training just as much as of the term's relevance to the physical phenomenon. After all, it's easier to say “the material is fatigued” than “the material has been weakened by the repeated application of loads, etc.” But the integrity of a one-to-one relationship between a word and its explanation comes under pressure (so to speak) when this same group of experts presents its findings to a group of non-experts, such as politicians or citizens. Of course, taken by itself, the transition of a phrase such as ‘fatigue' does not have overly dramatic implications. What it does do, however, is invite the dissemination of other, adjacent metaphors into the conversation. Soon enough ‘fatigue', however rigorously defined, accumulates into declarations of the ‘exhausted' state of our nation's ‘ailing' infrastructure. There are no technical equivalents to these terms, which call us to action by insinuating that objects like roads and tunnels may be feeling pain, whereas at best we are the recipients of said suffering.

*

Intriguingly, the complexity of this semiotic opportunism ramps up quickly and considerably. Roads and bridges may be things that we have built, but they still exist in the world, and will continue to exist whether we fix them or not. They may remind us of our success or inadequacy, but their intended purpose is almost never unclear. On the other hand, there are other things that we have built, things that exist in a much more precarious sense – it may even be a stretch to call them objects – and whose success qua objects is also much more variable. This is where we find computation, software and artificial intelligence.

The purpose of computation, broadly speaking, is to perform an action – some kind of service, or analysis, that may or may not be regular (in the sense that it can be anticipated) and is rarely, if ever, regulated. In the world of infrastructure, you either make it across the bridge or you don't, and there are regulations meant to ensure a positive outcome. As Yoda advises, “Do or do not. There is no try.” But computation is different. I am not talking about something linear, like programming a computer to add two numbers. With a search engine, for example, you may find the information or not; or what you find may be good enough, or you may think it's good enough but it's really not, and you'll never know. The service, or rather the experience of the service, becomes the object; the code, which is perhaps the true object, is obscured from your view. We tend to be poor at processing this kind of ambiguity, and when faced with it we reach for metaphor as a sense-making bulwark against the messiness of the unknown.

As we expect more of our computing technologies, the ensuing purposes also shift temporally. Our software models the world around us, and the way in which we inhabit the world. As such, its utility is displaced into the future: we value it for its predictive nature. We want it to anticipate not simply what we need right now (let alone what we needed yesterday) but what we might want tomorrow, or six months from now. At this point we find ourselves squarely in a place of mind. That is, we expect our inventions to become extensions of ourselves, because we cannot seem to make the leap that something non-human can have any chance of assisting us at being better humans. Software (and specifically AI) is singularly pure in this regard, although traces already exist in previous technologies. So while we don't worry about making our bridges anything more than functional and, somewhat secondarily, aesthetically pleasing, we tend to additionally attribute human-like traits to ships, perhaps because we perceive our lives as much more committed to the latter's successful functioning. But while we may ascribe personality to ships, we go a step further and come to expect intelligence of the software that we make: witness the proliferation of chatbots and personal assistants, to the point that we can now consult articles about why chatbot etiquette may be important.

*

In the meantime, these technologies themselves are being generated via metaphor. After all, these are exceedingly complex pieces of software, designed, implemented and refined by hundreds of software engineers and other staff. It is inevitable that there should be philosophies that guide these efforts. According to Ekbia, every one of the ‘approaches' to AI is fundamentally metaphorical in nature. That is, if you decide you're going to write software that will appear intelligent to its users, you have to put a stake in the ground as to what intelligence is, or at least how it comes about. And since we haven't really figured out how intelligence arises within ourselves to begin with, we wind up with a series of investments in a mutually exclusive array of metaphors.

Is intelligence symbolic, and therefore symbolically computable? People like Stephen Wolfram would say yes. Or perhaps intelligence arises if you have enough facts and enough ways to relate those facts, in which case Cyc and other expert systems are your ticket. Another approach to modeling intelligence has been getting the most press lately: training neural networks through reinforcement learning. (Of course, this last one models how neurons work together within our own brains, so it is a double metaphor.)
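
To see how thin the borrowed vocabulary is, consider a minimal, purely illustrative sketch – in Python, and not drawn from Ekbia or from any of the systems named above – of what an artificial ‘neuron' amounts to in code: a weighted sum pushed through a squashing function. The names and numbers are invented for illustration; nothing here fires, tires or learns the way a biological neuron does.

    # Illustrative only: an artificial 'neuron' is a weighted sum of its
    # inputs, plus a bias, squashed into the range (0, 1) by a logistic
    # function. The biological vocabulary is borrowed, not earned.
    import math

    def artificial_neuron(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))  # the 'firing rate'

    # Three hypothetical 'synapses' feeding a single 'neuron'.
    print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=0.1))

Stack many such units and adjust their weights by trial and reward, and you arrive, in crude outline, at the neural-network approach that garners the headlines; the biological vocabulary is doing much of the imaginative work.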

The point is that all of these ‘approaches' are metaphorical in substance. We still have not been able to resolve the mind-body problem, or to explain how consciousness somehow arises from a mass of neurons that are discrete, physical entities beholden to well-documented laws of nature. And even though lots of theories of mind have been disproven, the fact that we cannot agree on the nature of intelligence for ourselves implies that any idea of what a constructed intelligence may be is, by definition, a metaphor for something else. Science can avail itself of the luxury of not-knowing, of being able to say, “We are fairly certain that we know this much but no more, and these theories may or may not help us to push farther, but they also may fall apart and we'll have to start over”. Technology, on the other hand, must deliver a solution – something that works from end to end. In the case of AI, where models must be robust, predictive and productive, the designers of a constructed intelligence cannot say, “Well, we know this much and the rest happens without us understanding it.” Your respect for the truth results in no product, and a lot of angry investors. So metaphor in this sense is not a philosophical luxury; it's how you're able to ship any code at all.

Where things get really interesting in this kind of world is when the metaphors start getting good at producing results. So now we find ourselves in a very weird situation. There are competing metaphors out there in the computational wild: symbolic systems, expert systems and neural networks, as well as others; increasingly, hybrid systems are appearing too. What if some or even all of these approaches succeed in functioning 'intelligently'? I have to put the word in quotes here, because it's pretty clear that, without a mutually agreed-upon anchoring definition, we have ventured into some very murky waters. These waters are made all the more turbulent because technology's need to solve problems for us (or perhaps also to create them) will continue to push what we consider viably or usefully 'intelligent'.

The fact is that no AI outfit, nor its investors, will sit around waiting for the scientific community to settle on a model for cognition and then proceed to build products consistent with that model. The truth is nice, but there are (market) demands that need to be met now. If science can supply industry with signposts on how to build better technology, great. At the same time, if the product solves the clients' or users' problems, then who cares whether it's really intelligent or not? Recall the old adage: nothing succeeds like success. The tricky bit is that, with enough such success, our very definition of what is intelligent may be on the verge of shifting. Next month I'll look at the implications of living in a world awash in these kinds of feedback loops.