By Namit Arora
As a graduate student of computer engineering in the early 90s, I recall impassioned late-night debates on whether machines could ever be intelligent—intelligent, as in mimicking the cognition, common sense, and problem-solving skills of ordinary humans. Scientists and bearded philosophers spoke of ‘humanoid robots.’ Neural network research was hot and one of my professors was a star in the field. A breakthrough seemed inevitable and imminent. Still, I felt certain that Artificial Intelligence (AI) was a doomed enterprise.
I argued out of intuition, from a sense of the immersive nature of our life: how much we subconsciously acquire and call upon to get through life; how we arrive at meaning and significance not in isolation but through embodied living, and how contextual, fluid, and intertwined this was with our moods, desires, experiences, selective memory, physical body, and so on. How can we program all this into a machine and have it pass the unrestricted Turing test? How could a machine that did not care about its existence as humans do, ever behave as humans do? Can a machine become socially and emotionally intelligent like us without viscerally knowing infatuation, joy, loss, suffering, the fear of death and disease? In hindsight, it seems fitting that I was then also drawn to Dostoevsky, Camus, and Kierkegaard.
My interlocutors countered that while extremely complex, the human brain is clearly an instance of matter, amenable to the laws of physics. They posited a reductionist and computational approach to the brain that many, including Steven Pinker and Daniel Dennett, continue to champion today. Our intelligence, and everything else that informed our being in the world, had to be somehow ‘coded’ in our brain’s circuitry, including the great many symbols, rules, and associations we relied on to get through a typical day. Was there any reason why we couldn’t ‘decode’ this, and reproduce intelligence in a machine some day? Couldn’t a future supercomputer mimic our entire neural circuitry and be as smart as us? Recently, Dennett declared in his sonorous voice, “We are robots made of robots made of robots made of robots.”
Today’s supercomputers are ten million times faster than those of the early 90s. But despite the big advances in computing, AI has fallen woefully short of its ambition and hype. Instead, we have “expert” systems that process predetermined inputs in specific domains, perform pattern matching and database lookups, and algorithmically learn to adapt their outputs. Examples include chess software, search engines, speech recognition, industrial and service robots, and traffic and weather forecasting systems. Machines have done well with a great many tasks that we ourselves can, or already do, pursue algorithmically—including many we do not yet realize are algorithmic—as in searching for the word “ersatz” in an essay, making cappuccino, restacking books in a library, navigating our car in a city, or landing a plane. But so much else that defines our intelligence remains well beyond machines—such as projecting our creativity and imagination to understand new contexts and their significance, or figuring out how and why new sensory stimuli are relevant or not. Why is AI in such a brain-dead state? Is there any hope for it? Let’s take a closer look.
René Descartes, who held that science and math would one day explain everything in nature, understood the world as a set of meaningless facts to which the mind assigned values. Early AI researchers accepted Descartes’ mental representations, embraced Hobbes’ view that reasoning was calculating, Leibniz’s idea that all knowledge could be expressed as a set of primitives, and Kant’s belief that all concepts were rules. At the heart of Western rationalist metaphysics—which shares a remarkable continuity with ancient Greek and Christian metaphysics—lay the Cartesian mind-body dualism. This became the dominant inspiration for early AI research.
Early researchers pursued what is now known as ‘symbolic AI.’ They assumed that our brain stored discrete thoughts, ideas, and memories at discrete points, and that information was “found” rather than “evoked” by humans. In other words, the brain was a repository of symbols and rules which mapped the external world into neural pulses. And so the problem of creating AI boiled down to creating a gigantic knowledge base with efficient indexing, i.e., a search engine extraordinaire. That is, the researchers thought that a machine could be made as smart as a human by storing context-free facts and rules which would reduce the search space effectively. Marvin Minsky of the MIT AI Lab went as far as claiming that our common sense could be reproduced in machines by encoding ten million facts about objects and their functions.
It is one thing to feed millions of facts and rules into a computer, quite another to get it to recognize their significance and relevance. The ‘frame problem,’ as this problem is called, eventually became insurmountable for the ‘symbolic AI’ research paradigm. One critic, Professor Hubert L. Dreyfus, expressed the problem thus:
If the computer is running a representation of the current state of the world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which might have to be updated? 
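The GOFAI picture, and the hole that relevance punches in it, can be caricatured in a few lines of Python. Everything below (the fact base, the rule, the queries) is my own illustrative invention, not drawn from any actual system:

```python
# Toy caricature of symbolic AI: context-free facts plus a relevance rule,
# queried like a database. All facts and names here are invented for illustration.

facts = {
    ("hammer", "used_for"): "driving nails",
    ("hammer", "made_of"): "steel and wood",
    ("nail", "used_for"): "fastening wood",
}

rules = [
    # IF an object is used_for X, THEN it is relevant to goals that mention X.
    lambda goal, obj: facts.get((obj, "used_for"), "") in goal,
]

def relevant_objects(goal):
    # Collect every object whose stored function matches the stated goal.
    return [obj for (obj, attr) in facts if attr == "used_for"
            and any(rule(goal, obj) for rule in rules)]

print(relevant_objects("driving nails into a wall"))   # ['hammer']
print(relevant_objects("propping a door open"))        # []
```

The first query succeeds because the exact phrase was encoded; the second fails even though a hammer props a door open perfectly well. Enlarging the fact base, as Minsky proposed, only defers the problem: relevance and significance are open-ended, while any stored set of facts is not.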
GOFAI — Good Old Fashioned Artificial Intelligence — as symbolic AI came to be called, soon turned into a degenerating research program. It is unsettling to think how many prominent scientists and philosophers held (and continue to hold) such naïve assumptions about how human minds operate. A few tried to understand what went wrong and looked for a new paradigm for AI. No longer could they ignore the withering critiques of their work by Professor Dreyfus, who drew inspiration from the radical ideas of the German philosopher Martin Heidegger (1889-1976). It began dawning on them that humans were far more complex than they had earlier allowed for, with our subconscious familiarity and skillful coping with the world, nonlinear decision-making, ability to assess and adapt to new situations, and the role of things like purpose, intention, and creativity that shaped, and were shaped by, our organization of the world.
A hammer, Heidegger pointed out, cannot be represented by just its physical features and function, detached from its relationship to nails and the anvil, the experience and skill in hammering of the person using it, the hammer's role in building fine furniture and comfortable houses, etc. Merely associating facts, values, or functions with objects cannot capture the human idea of an object, with its particular role in the meaningful organization of the world as we experience it. As Professor William Blattner writes in Heidegger's Being and Time (2006), “Heidegger argues that meaningful human activity, language, and the artifacts and paraphernalia of our world not only make sense in terms of their concrete social and cultural contexts, but also are what they are in terms of that context.”
Consider hi-fi speakers. One way to represent them, in the manner of rationalists, is as objects with physical properties—shape, dimensions, color, material, attached wires—to which are then assigned a value or function. But this is not how we actually experience music speakers. We experience them as inseparable from the act of listening to music, from the ambience they add to our living room, from their impact on our mood, and so on. We do not understand them as context-free, object-value pairs; we understand them through our context-laden use of them. When someone asks us to describe our speakers, we have to pause and think about their physical attributes.
According to Heidegger, writes Professor William Blattner:
The philosophical tradition has misunderstood human experience by imposing a subject-object schema upon it. The individual human being has traditionally been understood as a rational animal, that is, an animal with cognitive powers, in particular the power to represent the world around it … the notion that human beings are persons and that persons are centers of subjective experience has been broadly accepted … Where the tradition has gone wrong is that it has interpreted subjectivity in a specific way, by means of concepts of ‘inner’ and ‘outer,’ ‘representation’ and ‘object’ … [which] dominates modern philosophy, from Descartes through Kant through Husserl. 
So in many ways, Heidegger stood opposed to the entire edifice of Western philosophy. According to him, the Western philosophical tradition “has been focused on self-consciousness and moral accountability, in which we experience ourselves as distinct from the world and others.” Such ‘subject-object dualism’ dominates modern science, but fails to describe how humans relate to the world in their experience of it, which is quite holistic. Heidegger claimed that the subject-object model of experience, in which we see ourselves as distinct from the world and others, “does not do justice to our experience, that it forces us to describe our experience in awkward ways, and places the emphasis in our philosophical inquiries on abstract concerns and considerations remote from our everyday lives.” As Heidegger contends, “we are disclosed to ourselves more fundamentally than in cognitive self-awareness or moral accountability. … Our being is an issue for us, an issue we are constantly addressing by living forward into a life that matters to us.” For Heidegger, our being in the world is “more basic than thinking and solving problems; it is not representational at all.” For instance, when we are absorbed in work, using familiar pieces of equipment, “the distinction between us and our equipment—between inner and outer— vanishes.” Or as Professor Blattner says,
[Heidegger] argues that our fundamental experience of the world is one of familiarity. We do not normally experience ourselves as subjects standing over against an object, but rather as at home in a world we already understand. We act in a world in which we are immersed. We are not just absorbed in the world, but our sense of identity, of who we are, cannot be disentangled from the world around us. We are what matters to us in our living; we are implicated in the world. 
In other words, it makes no sense to believe that our minds are built on basic, atomic, context-free sets of facts and rules, objects and predicates, and discrete storage and processing units. This is why the methods of natural science, which look for structural primitives such as particles and forces, fail to describe our experience. Therefore, contrary to the implicit beliefs of much Western philosophy and AI research, a ‘computational’ theory of the mind may be impossible. Isn’t our common sense “a combination of skills, practices, discriminations, etc., which are not intentional states, and so, a fortiori, do not have any representational content to be explicated in terms of elements and rules?” The later Wittgenstein agreed, adding in Last Writings on the Philosophy of Psychology (1948): “[N]othing seems more possible to me than that people some day will come to the definite opinion that there is no copy in the … nervous system which corresponds to a particular thought, or a particular idea, or [a particular] memory.”
A conceptual advance for AI came when some researchers recognized that a computer’s model of the world was not real. By comparison, the human ‘model’ of the world was the world itself, not a static description of it. What if a robot too used the world as its model, “continually referring to its sensors rather than to an internal world model”? However, this approach worked only in micro-environments with a limited set of features which could be recognized by its sensors. The robots did nothing more sophisticated than ants do. As in the past, no one knew how to make the robots learn, or respond to a change in context or significance. This was the backdrop against which AI researchers began turning away from symbolic AI to simulated neural networks, with their promise of self-learning and establishing relevance. Slowly but surely, the AI community began embracing Heideggerian insights about consciousness.
Starting with a blank slate (unlike humans), machine neural networks attempt to simulate biological brains using a connectionist approach capable of continually adapting its structure based on what it processes and learns. In symbolic AI, a feature “is either present or not. In the [neural] net, however, although certain nodes are more active when a certain feature is present in the domain, the amount of activity varies not just with the presence or absence of this feature, but is affected by the presence or absence of other features as well.”  Here, learning is guided using one of three paradigms: supervised learning in controlled domains, unsupervised learning using cost-benefit heuristics, or reinforcement learning based on optimizing certain outcomes.
But the results are not promising. Supervised learning, for instance, remains mired in very basic problems—such as the neural net’s inability to generalize predictably in terms of categories intended by the trainer (except for toy problems which leave little room for ambiguity). For example, a net trained to recognize palm trees in photos taken on a sunny afternoon may learn to pick them out by generalizing on their shadows, and thus fail to detect any trees in photos from an overcast day. The sample can be enlarged, but the point remains: the trainer doesn’t know precisely what the net is training itself to do. Another neural net trained to recognize speech may fail when it encounters a metaphor—say, “Sally is a block of ice.” Outside its training domain, the net is also unable to recognize other contexts, and therefore cannot know when it is not appropriate to apply what it has learned—problems that humans dynamically solve using their social skills, biological imperatives, imagination, etc.
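The palm-tree example can be reproduced with a toy perceptron. The sketch below is entirely my own construction (two invented features, four invented photos): because shadows track trees perfectly in the sunny training set while shape detection is unreliable, the learner converges on the shadow feature alone, and an overcast photo of a real palm then goes undetected.

```python
# Toy illustration (not from the essay's sources): a perceptron trained on
# sunny-day photos latches onto the 'shadow' feature instead of the intended
# 'palm shape'. Features: [palm_shape_detected, shadow_present]; label: 1 = tree.

def train_perceptron(data, epochs=10):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in data:
            pred = 1 if (w[0]*x[0] + w[1]*x[1] + b) > 0 else 0
            err = label - pred                      # classic perceptron update
            w = [w[0] + err*x[0], w[1] + err*x[1]]
            b += err
    return w, b

# Sunny training set: shadows track trees perfectly; shape detection is noisy
# (one tree is partly occluded, one non-tree looks tree-shaped).
sunny = [([1, 1], 1), ([0, 1], 1),     # trees
         ([1, 0], 0), ([0, 0], 0)]     # non-trees
w, b = train_perceptron(sunny)

def predict(x):
    return 1 if (w[0]*x[0] + w[1]*x[1] + b) > 0 else 0

print(predict([1, 0]))   # 0 -- overcast photo of a real palm: no shadow, no tree
```

The trainer intended “palm shape”; the net, optimizing honestly, learned “shadow.” Nothing inside the algorithm can flag the mismatch, because the intended category was never in the data, only in the trainer’s head.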
Reinforcement learning has its own pitfalls. For instance, what is an objective measure of reinforcement? Even if we take a simplistic view that humans act to maximize “satisfaction” and assign a “satisfaction score” to all foreseeable outcomes, we need some way to model and artificially reproduce how “satisfaction” may be impacted by our moods, desires, body aches, etc., as well as how these correlate with inputs in a diversity of situations (weather, familiar faces, noise, motion, etc.). But does anyone know what ‘model rules’, if any, humans obey in their daily behavior? Dreyfus sums it up:
“Perhaps a [simulated neural] net … If it is to learn from its own “experiences” to make associations that are human-like rather than be taught to make associations which have been specified by its trainer, it must also share our sense of appropriateness of outputs, and this means it must share our needs, desires, and emotions and have a human-like body with the same physical movements, abilities and possible injuries.” 
In other words, the success of neural nets will depend not only on our understanding of how we breathe significance and meaning into our world (which was Heidegger’s endeavor) and on finding a way to capture that understanding in the language of machines; to have a shot at behaving like humans, these nets also need to come into a social world similar to that of humans and project themselves in time the way humans do with their physical bodies. How to achieve any of this is not even remotely clear to anyone, nor is it clear that these things are even amenable to modeling on digital computers. To insist otherwise is not only an article of faith; it also seems to me increasingly obtuse and wild.
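The reward-specification worry raised in the reinforcement learning discussion can be made concrete with a minimal sketch. The “satisfaction” table below is an invented stand-in for whatever objective measure a designer might pick; the learner optimizes it flawlessly, which is precisely the problem, since the table itself is the part no one knows how to get right.

```python
# Toy sketch (my own construction): reinforcement learning presupposes a fixed,
# numeric reward signal. A bandit-style learner estimates the value of each
# action from a hand-coded "satisfaction score" and then picks the best one.
import random

random.seed(0)

actions = ["coffee", "walk", "nap"]
satisfaction = {"coffee": 0.8, "walk": 0.5, "nap": 0.3}   # the designer's guess

q = {a: 0.0 for a in actions}           # learned value estimate per action
for _ in range(1000):
    a = random.choice(actions)          # explore uniformly
    r = satisfaction[a]                 # reward comes from the hand-coded table
    q[a] += 0.1 * (r - q[a])            # running-average value update

best = max(q, key=q.get)
print(best)   # 'coffee' -- dictated entirely by the designer's table
```

The learner’s verdict is only as good as the table: if mood, fatigue, or weather change what actually satisfies, the score, and everything learned from it, is silently wrong, and no amount of training inside the loop can repair it.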
Notes & Bibliography:
Hubert L. Dreyfus, “Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian,” 2006.
William Blattner, “Heidegger’s Being and Time,” Continuum, 2006, p. 9.
Ibid., pp. 4-5.
Ibid., p. 48.
Ibid., p. 12.
Hubert L. Dreyfus, “What Computers Still Can’t Do: A Critique of Artificial Reason,” MIT Press, 1992.
Hubert L. Dreyfus and Stuart E. Dreyfus, “Making a Mind vs. Modeling the Brain: AI Back at a Branchpoint,” UC Berkeley.
Think Ray Kurzweil, Nick Bostrom, and Bill Joy, with their fantasies of the technological singularity, mind uploading, etc.
Jonathan Rée, “Heidegger,” Routledge, 1999.
Ari N. Schulman, “Why Minds Are Not Like Computers,” The New Atlantis, Number 23, Winter 2009, pp. 46-68.