Why some neuroscientists call consciousness “the c-word”

by Yohan J. John

As a neuroscientist, I am frequently asked about consciousness. In academic discourse, the celebrated problem of consciousness is often divided into two parts: the "Easy Problem" involves identifying the processes in the brain that correlate with particular conscious experiences. The "Hard Problem" involves murkier questions: what are conscious experiences, and why do they exist at all? This neat separation into Easy and Hard problems, which comes courtesy of the Australian philosopher David Chalmers, seems to indicate a division of labor. The neuroscientists, neurologists and psychologists can, at least in principle, systematically uncover the neural correlates of consciousness. Most of them agree that calling this the "Easy Problem" somewhat underestimates the theoretical and experimental challenges involved. It may not be the Hard Problem, but at the very least it's A Rather Hard Problem. And many philosophers and scientists think that the Hard Problem may well be a non-problem, or, as Ludwig Wittgenstein might have said, the kind of problem that philosophers typically devise in order to maximize unsolvability.

One might assume that as a neuroscientist, I should be gung-ho to prove the imperious philosophers wrong, and to defend the belief that science can solve any sort of problem one might throw at it: hard, soft, or half-baked. But I have become increasingly convinced that science is severely limited in what it can say about consciousness. In a very important sense, consciousness is invisible to science.

The word "consciousness" means different things to different people, so it might help to cover some of the typical ways its used. The most objective notion of consciousness arises in the world of medicine. We don't usually require a degree in philosophy to tell when a person is conscious and when they are unconscious. The conscious/unconscious distinction is only loosely related to subjective experience: we say a person is unconscious if they are unresponsive to stimuli. These stimuli may come from outside the body, or from the still-mysterious wellspring of dreams.

But the interesting thing about any "medical" definition of consciousness is that it evolves with technology.

Consider locked-in syndrome. Some locked-in patients can move only their eyelids, but this allows them to communicate with the outside world. Others have a more severe version of the condition, which until relatively recently was indistinguishable from a coma. The ability to record neural activity has revealed that some apparently comatose patients are in fact aware of the outside world. By manipulating their own brain activity as it is being recorded, locked-in patients can communicate once again. So responsiveness, the best practical notion of consciousness, is contingent upon our ever-evolving ability to interface with a brain. Perhaps as brain scanning technology improves, we will find that many apparently "brain dead" people are capable of some degree of response.

But responsiveness is an objective way to think about consciousness, and it completely avoids what many would regard as the essential feature of consciousness: its subjectivity. Julian Jaynes, in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind, offers perhaps the most evocative description of our subjective reality:

"O, WHAT A WORLD of unseen visions and heard silences, this insubstantial country of the mind! What ineffable essences, these touchless rememberings and unshowable reveries! And the privacy of it all! A secret theater of speechless monologue and prevenient counsel, an invisible mansion of all moods, musings, and mysteries, an infinite resort of disappointments and discoveries. A whole kingdom where each of us reigns reclusively alone, questioning what we will, commanding what we can. A hidden hermitage where we may study out the troubled book of what we have done and yet may do. An introcosm that is more myself than anything I can find in a mirror. This consciousness that is myself of selves, that is everything, and yet nothing at all一 what is it?
And where did it come from?
And why?"

The responsivity-based idea of consciousness is almost comically ill-equipped to shed light on Jaynes's "secret theater". To start with, there is no strong reason to insist that an inability to respond to something implies a lack of subjective experience. This is best understood by thinking about dreams. We associate rapid-eye-movement (REM) sleep with dreaming, because if you wake a person up during REM sleep, they are normally capable of reporting on the dream they just awoke from. If you wake a person up from deep sleep, they have no sense of having been interrupted mid-dream. We infer from this that no dream was ongoing. But what if there really is a subjective experience that accompanies deep sleep? One that simply leaves no trace on memory, and therefore precludes the possibility of a subsequent report? Could we use science or logic to rule out this possibility? Absence of evidence of subjectivity is not evidence of absence, as is the case with any locked-in patient who does not have access to brain scanning technology.

Science is the study of objective phenomena. And by this we mean phenomena that are manifest regardless of who is observing them. Thus the sun is objective because multiple people can testify to its various properties. But subjective experience is by definition not objective: I only have experience of my own consciousness and not anyone else's. The evidence for consciousness in others comes from anatomy — the structure of an organism — and from physiology and behavior — the actions and reactions of organisms and their organs. From these I infer a state of mind, partly by analogy with myself. In The Merchant of Venice, Shylock demonstrates how analogy-based empathy ought to work:

"[…] I am a Jew. Hath
not a Jew eyes? hath not a Jew hands, organs,
dimensions, senses, affections, passions? fed with
the same food, hurt with the same weapons, subject
to the same diseases, healed by the same means,
warmed and cooled by the same winter and summer, as
a Christian is? If you prick us, do we not bleed?
if you tickle us, do we not laugh? if you poison
us, do we not die? and if you wrong us, shall we not
revenge? If we are like you in the rest, we will
resemble you in that."

Shylock's analogy is based on two kinds of similarity: neuroscientists might describe them as structural and functional analogues. Hands, organs, and dimensions are purely structural similarities. The functional similarities are reactions: senses, affections, passions, diseases, bleeding, laughing, death.

Given the limitations of a scientific investigation of pure subjectivity, many scientists and philosophers resort to some version of Shylock's list of similarities. The generosity with which we confer consciousness on other beings depends largely on our criteria for similarity. Perhaps our definitions of consciousness really just reflect the limits of our empathy.

A few years ago, some eminent scientists got together to sign the Cambridge Declaration on Consciousness. They decided that mammals, birds and octopuses had the requisite circuitry and/or responsivity to be included in the exclusive Consciousness Club. They might as well have said "Hath not a bird amygdalae?" or "If you prick an octopus, does it not bleed?"

People who are less interested in building a border wall between the conscious and the non-conscious tend to object along the following lines: "Hath not an insect dimensions, senses, passions?" or "If you poison a plant, does it not die?" With a loose enough criterion of similarity, it is easy to bestow at least some degree of consciousness on all living things.

And isn't this the decent and charitable thing to do? Is there any strong reason to decide that a single-celled organism's striving to survive is not accompanied by feelings, however alien they may be from our own? Many non-western cultures seem to have had little or no trouble assigning souls to all animals, plants and even inanimate objects.

My use of the word 'soul' here might offend certain people's scientific sensibilities. Surely our rational and materialistic investigations of consciousness have little in common with primitive notions of soul or spirit? Many neuroscientists I've interacted with are not so sure: if consciousness is understood as an abstract state or process that can be realized in wildly different structures, it starts to seem decidedly immaterial.

The lurking dualism in consciousness-talk reveals itself most clearly among the nerdier sections of the population: the sci-fi fans who think that consciousness might one day be uploaded into a computer. This technological rapture implicitly assumes that consciousness is a disembodied pattern of information, and therefore independent of the particular material substrate it 'inhabits'. This is consciousness as a Ghost in the Shell. [1]

An age-old philosophical conundrum might destabilize any strong intuitions about the immateriality of consciousness. Consider the Ship of Theseus:

"The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same."

—Plutarch, Theseus

In one variation on the Theseus story, there are two ships: one composed of the old, original planks that were discarded, and another that is the "living" ship, which is continually being repaired. Which is the "real" ship of Theseus? One sort of intuition leads to the idea that the "living" and continually renewed ship is the ship of Theseus.

The duplicate ship of Theseus is a material copy: a perfect clone. Instead of using discarded matter, a clone is a new but indistinguishable configuration of matter. An uploaded 'consciousness' is an informational clone: a copy of the abstract configuration of matter that has been embedded in a totally different material substrate: computer hardware.

In order to assert that an informational clone "is" me, I have to make a metaphysical assumption: that I am nothing more than a pattern of information measured at some time instant. But is this pattern really the basis of my subjectivity? Suppose my clone is brought online while I am still alive. Where is my subjective experience located? Whose eyes do I peer out of? My gut instinct is that the clone is a brand new organism that just happens to be very similar to me: it shares my memories and my tastes, and reacts to stimuli as I might. If consciousness is singular and unified, then can I really convince myself that a copy of me is also me?

Now imagine that the clone is only activated after I die. Death involves a cessation of neural processes. Why should some totally unrelated set of neural processes suddenly become the medium for my experiences? If my clone can be turned on after an arbitrary amount of time, this violates my intuition that consciousness is a continuous process. My neural processes are a causal relay that "carries" my consciousness forward in time, even when I'm in deep sleep. Each material state of my body is causally connected to the last one. By contrast, the clone seems causally disconnected from me.

The metaphysical assumption underlying my intuitions about unity and continuity might be stated as follows. The self is not a configuration of matter measured in some time slice. The self is more like a wave propagating in a medium — it is a process unfolding in time, rather than a static configuration trapped in a frozen instant.

The physicist Richard Feynman captured the dynamic and wavelike nature of the mind beautifully:

“So what is this mind of ours: what are these atoms with consciousness? Last week’s potatoes! They now can remember what was going on in my mind a year ago—a mind which has long ago been replaced. To note that the thing I call my individuality is only a pattern or dance, that is what it means when one discovers how long it takes for the atoms of the brain to be replaced by other atoms. The atoms come into my brain, dance a dance, and then go out—there are always new atoms, but always doing the same dance, remembering what the dance was yesterday.”

A clone or a copy is not carrying on my causal dance. It seems to be starting a brand new dance, albeit one that closely resembles mine.

But perhaps this is just my metaphysical preference? You might prefer to define your consciousness as a configuration that can be copied. Such preferences seem more a matter of taste than science. And it is this very aesthetic freedom that has turned many neuroscientists off the problem of consciousness. A professor in my old department liked to refer to consciousness as the c-word: it was taboo during his lectures, because no one could provide a scientific definition of the term.

A scientific definition has to be more than just science-y sounding. It has to be testable, at least in principle. If we define consciousness in terms of patterns of information, how are we to test whether this is an adequate definition? There are two kinds of things we can measure objectively in organisms: structure, and reactivity. Structure amounts to what the organism looks like, and reactivity amounts to what the organism, or some part of it, does when we interact with it. A definition of consciousness based on structural similarity with the human brain ultimately depends on our standards for similarity. A broad definition might include any sort of organism, and a narrow definition might include only primates. Do we have a strong reason to prefer one or the other?

Another approach is to compare reactions: we could create a clone or a simulated mind, and then see how it responds to our questions or our simulated environments. But even a very simple program can be made responsive to external stimuli: is a Roomba conscious because it 'perceives' the room you put it in and reacts accordingly? Perhaps our machines and algorithms are already conscious, and we just don't acknowledge it yet?
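
To make the point concrete, here is a minimal sketch of such a program. It is a hypothetical toy, not real robot firmware, but it shows how little machinery the responsiveness criterion actually demands:

```python
import random

# A toy reactive agent, loosely Roomba-flavored. The 'bump sensor' is a
# made-up stand-in: it simply flips a biased coin.
class ToyVacuum:
    def __init__(self) -> None:
        self.heading = 0  # current direction of travel, in degrees

    def sense_bump(self) -> bool:
        """Pretend sensor: returns True on a simulated collision."""
        return random.random() < 0.2

    def step(self) -> None:
        if self.sense_bump():
            # 'React' to the stimulus by turning away from the obstacle.
            self.heading = (self.heading + random.choice([90, 180, 270])) % 360
        # Otherwise keep moving along the current heading.

agent = ToyVacuum()
for _ in range(20):
    agent.step()

# By the responsiveness criterion alone, this loop already 'perceives'
# its environment and 'reacts' to it. Few would call it conscious.
```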

Some scientists are now willing to consider that even simple machines and inanimate objects are 'slightly' conscious. According to integrated information theory (IIT), a recent attempt at a scientific definition of consciousness, a system is conscious if it possesses a degree of 'information integratedness', which is captured by a quantity labeled 'phi'. Since phi is a continuous quantity rather than a binary 0 or 1, any system with even a little information integratedness, and therefore a non-zero value of phi, counts as conscious. This leaves open the possibility that even a large extended system, such as the United States, is conscious. [2]
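
To see why phi's continuity pushes in that direction, consider a toy calculation. The true phi of IIT is notoriously difficult to compute, so the sketch below uses ordinary mutual information between two halves of a system as a crude stand-in for 'integratedness' (emphatically not Tononi's actual measure), just to show that integration comes in degrees:

```python
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """Mutual information (in bits) between the row and column
    variables of a 2D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)  # marginal of part A
    py = joint.sum(axis=0, keepdims=True)  # marginal of part B
    nz = joint > 0                         # skip zero cells to avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Two independent binary parts: no integration at all.
independent = np.full((2, 2), 0.25)
# Two perfectly correlated binary parts: one full bit of integration.
correlated = np.array([[0.5, 0.0], [0.0, 0.5]])
# A noisy coupling: somewhere in between.
noisy = np.array([[0.4, 0.1], [0.1, 0.4]])

print(mutual_information(independent))  # 0.0
print(mutual_information(correlated))   # 1.0
print(mutual_information(noisy))        # ~0.28
```

Anything short of total independence yields a nonzero value, and on a graded measure of this kind there is no principled threshold at which consciousness could be said to switch on.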

I think that IIT is an illustration of why the Hard Problem of consciousness is more a word game than a real scientific problem. If we go by the IIT definition, we are forced to accept panpsychism: the idea that pretty much everything is at least somewhat conscious. Under more restricted definitions based on 'human-like' structure or function, we might draw a dividing line. But what sort of scientific experiment might tell us one definition is right and another is wrong?

It's hard to imagine a successful scientific program that started with a debate about definition. Newton's laws describe the motions of objects, but they do not start by specifying what exactly an object is. Four centuries after Newton, philosophers don't seem any closer to agreeing on what an object is.

This might sound like a justification for asking scientists to solve the Hard Problem: after all, philosophers seem to derive more pleasure from making up problems and debating them than from actually working towards a consensus. Perhaps the less loquacious approach of the scientist will give us something more solid?

In the case of subjectivity, I very much doubt it. It is one thing to be optimistic about scientific progress. It is another thing entirely to ignore how science actually works. People who are sure that science can define consciousness might learn a lot by first defining what science is.

Imagine the scientific method as a kind of black box that takes in certain inputs and spits out truths, predictions, and useful technologies. Occasionally it also spits out new black boxes: more refined theories. In the case of physics, Newton put in previously existing observations about planetary movements, and also his own findings. What he got out was an elegant mathematical framework that could be used to predict the movements of both celestial and terrestrial bodies.
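
For concreteness, the core of what came out of that box fits in two lines (written here in modern vector notation, which postdates Newton himself): the second law of motion and the law of universal gravitation, where G is the gravitational constant and r the distance between masses m1 and m2. Together they suffice to compute the trajectory of a planet or a cannonball:

```latex
\[
  \vec{F} = m\,\vec{a},
  \qquad
  \vec{F}_{\mathrm{grav}} = -\,\frac{G\,m_1 m_2}{r^2}\,\hat{r}
\]
```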

Note that Newtonian physics did not require definitions of the entities whose movements it explained. If, for example, an early physicist had to explain the movement of Venus, no one would complain that he had failed to explain what Venus was in the first place. For the purposes of celestial mechanics, the observable trajectory of Venus is all that really matters, and all that determines the success or failure of the method. If, instead, Venus were a nebulous and widely debated concept that people couldn't point to, the black box of science wouldn't have much to work with.

Clearly there is more to Venus than its trajectory, such as what it looks like or how it was formed, but at every stage in the history of our understanding, we had a pre-existing working definition of what Venus was, to which we could anchor our measuring devices.

So consciousness can't be an input for science, because we don't have a working definition of it. We cannot really, in an objective way, point to a trajectory of someone's consciousness as we might point to the trajectory of Venus.

Could consciousness then be some kind of output of science?

Science often contributes to understanding by proposing a new entity that explains previously mysterious observations. Because these entities are not directly observable, they tend to be controversial when introduced. Ludwig Boltzmann, for example, was ridiculed in the 19th century when he made use of the then-hypothetical notion of atoms as a basis for his theory of statistical mechanics. He was of course vindicated in spectacular fashion eventually, but not before he tragically took his own life. Not all hypothetical entities are found, however: phlogiston and, more recently, the luminiferous ether have been consigned to the historical dustbin. The most recent discovery that fits this mold is that of the Higgs boson: the formalism from which it emerged was created to iron out some wrinkles in particle physics.

Imagine some enterprising neuroscientist proposing the existence of a "psychon" — the fundamental particle of subjectivity — and then finding it in some Large Neuron Collider. Just as the Higgs boson "explains mass", the psychon would explain consciousness. The analogy with the Higgs boson would break down very quickly, however. No one had any intuitions about the Higgs boson prior to its introduction by Peter Higgs. If the theory predicted a particular set of properties, the big surprise would be if the LHC failed to find those properties. This is not the case with subjectivity. Almost everyone who cares about it seems to have a vague opinion about what it is. They don't know what it is, but they typically know what it is not. We can easily imagine that many of these people, upon examining psychon theory, will say something along the lines of: "Hah! That's not consciousness! It's just some new neuroscientific entity you named. I don't see why this entity confers subjectivity or intentionality onto any organism that displays it."

And since no one can observe subjectivity in any organism other than themselves, the debate will continue in interminable — though occasionally amusing — fashion.

Philosopher Ned Block, upon seeing a presentation on Integrated Information Theory, is reported [3] to have remarked:

"You have a theory of something, I am just not sure what it is".

I get the feeling that any scientific theory of consciousness will provoke reactions of this sort. It will definitely be a theory of something, but many of us will be unsure of what that something is.

Luckily this does not mean the end of the road for those of us who are interested in consciousness. After all, science is not the only form of understanding. Who understands a horse more: a biologist who specializes in horses, or a horse whisperer? Or better yet, which sort of understanding would you find more useful? Perhaps, as with horses, the understanding of consciousness that many of us seek might be better thought of as a kind of interactive relationship.

I hope to explore this line of thinking in a future column, but for now I'd like to end by paraphrasing someone who was in no sense a scientist or a philosopher of mind, but definitely played a mind-altering role in history:

"Philosophers have hitherto only defined consciousness in various ways; the point is to change it."

________

Notes & References

[1] A questioner on Quora prompted me to write a detailed critique of the concept of mind-uploading.

[2] Philosopher Eric Schwitzgebel wrote an excellent and amusing essay on this, entitled: If Materialism is True, then the United States is Probably Conscious.

[3] This was quoted in a detailed critique of IIT by Michael Cerullo: The Problem with Phi: A Critique of Integrated Information Theory

_____

Clinical neuroscientist and writer Raymond Tallis may be the most eloquent critic of neuroscientific approaches to consciousness:

What neuroscience cannot tell us about ourselves

What consciousness is not

_____

The image was cobbled together from clipart and used as an illustration for an essay exploring the idea that the brain is like a radio that "receives" consciousness.