Ways of Knowing

by Yohan J. John

Once, some years ago, I was attending a talk by the philosopher Slavoj Žižek at the Brattle Theatre in Cambridge, Massachusetts. He was engaged in his usual counterintuitive mix of lefty politics and pop culture references, and I found myself nodding vigorously. But at one point I asked myself: do I really understand what he is saying? Or do I simply have the feeling of understanding? As a neuroscientist, I am acutely aware of the mysterious and myriad ways in which brain areas are connected with each other and with the rest of the body. There are many pathways from point A to point B in the brain: perhaps Žižek’s words (and accent and crazed physical tics) had found a shortcut to the ‘understanding centers’ (whatever they might prove to be) in my brain? Perhaps my feeling of comprehension was a false alarm? Had I been intellectually hypnotized?

One way to check would be to try and explain Žižek’s ideas for myself. A handy sanity check might involve directing my explanations at other people, since I knew from first-hand experience that a set of ideas can seem perfectly coherent when they float free-form in one’s head, but when it comes time for the clouds of thought to condense into something communicable, very often no rain ensues. (This often happens when it’s time for me to translate my ruminations into a 3QD essay!)

Many of us like to see ourselves as members of a scientific society, where rational people subject ideas to rigorous scrutiny before filing them in the ‘justified true belief’ cabinet. But there are many sorts of ideas that can’t really be put to any kind of stringent test: my ‘social’ test of Žižek’s ideas doesn’t necessarily prove anything, since most of my friends are as left-wing (and susceptible to pop cultural analogies) as I am. This is the state of many of the ideas that seem most pressing for individuals and societies: there aren’t really any scientific or social tests that definitively establish ‘truth’ in politics, history or aesthetics.

To attempt an understanding of understanding, I think it might make sense to situate our verbal forms of knowledge-generation in the wider world of knowing: a world that includes the forms that we share with animals and even plants. To this end, I’ve come up with a taxonomy of understanding, which, for reasons that should become apparent eventually, I will organize in a ring. At the very outset I must stress that in humans these ways of knowing are very rarely employed in isolation. Moreover, they are not fixed faculties: they influence each other and gradually modify each other. Finally, I must stress that this ‘systematization’ is a work in progress. With these caveats in mind, I’d like to treat each of the ways of knowing in order, starting at the bottom and working my way around in a clockwise direction.

1. Instinct & Intuition

Instinct is the primordial form of knowing: we see versions of it even in single-celled organisms. It consists of the most basic tendencies, habits and drives that an organism possesses when it is born. We typically think of it as being encoded in our genome, but much of it emerges only through interaction with the environment. An amoeba, for example, ‘knows’ how to follow a chemical gradient in search of food. The seed of a plant ‘knows’ when spring has sprung, and sends out a root and a shoot in the appropriate directions. A hatchling doesn’t know who its mother is, but it ‘knows’ how to imprint, and therefore pick the most likely candidate. Newborn mammals instinctively know how to suckle. Primordial knowledge takes the form of know-how.

We seldom pause to marvel at all that a human baby knows without anything resembling teaching. Perhaps the most crucial and little-known form of baby ‘know-how’ is joint attention. There are three levels of joint attention. The first level just involves a baby and an adult looking at the same object. The adult may know what the baby is looking at, but we can’t really tell if the baby knows that the adult shares something in common with it. We know that many animals have this form of joint attention. And humans can share attention with some animals: you can point to an object and get your dog to look at it. The next level is ‘dyadic’ attention. It takes the form of a conversation: the baby and the adult look at each other, and ‘exchange’ facial expressions. They direct their attention at each other. Something like this exists in many animals too: the way birds interact in song is an example that I am happily subject to at this very moment. The highest form of joint attention is ‘triadic’ attention. It happens when a baby and an adult look at each other, then look at an object, and then look back at each other. The baby and the adult often smile at the end of such an episode. This seems to be a form of acknowledgement: “I see what you’re seeing there! And I like it!”

2. Naming and description

A baby’s ability to attend to something that an adult draws her attention to is the most important stepping stone on the road to language, and therefore to wider understanding. We cannot learn the names of people, objects, and processes unless we share attention with the namer of names. This remains a mysterious process, since it isn’t always clear where the boundary of one thing ends and that of another begins. Nevertheless, we know that without the ability to comprehend naming, little or no symbolic communication would be possible.

The process of associating a name with a thing relies on the ability to divide our sensory world into discrete units. As a child grows up, it seems as if she shifts from an almost mystical holism to a state in which individual things become apparent. Once she can attend to people, animals and things she can learn their names. We typically see this as a one-way process in which an adult indicates something to the child and repeatedly says (or signs) its name. But we know that the ability to create language can arise even in children isolated from adults. Twins can devise their own idiosyncratic languages even when they have been deprived of social contact with adults.

Soon after facility with naming develops, abstraction can emerge. If a word like ‘dog’ applies to multiple distinct sensory experiences — ranging, perhaps, from a Chihuahua to a Great Dane — then the word can’t be the name of a unique thing. It is a category. We take the ability to recognize abstract categories — like ‘dog’ or ‘green’ or ‘three’ — for granted, so much so that we often forget how abstract these words really are. ‘Dogness’, ‘greenness’ or ‘threeness’ are not in any sense given to us by our sensory experience. The people who are most aware of this are those familiar with the history of artificial intelligence and machine learning. Until the recent boom in pattern recognition via artificial neural networks, machine categorization algorithms could not do as well as small children. Until fairly recently, no computer could reliably tell the difference between dogs and cats.

The ability to name objects and categorize according to abstract properties leads directly to description, which may be the most basic form of verbal understanding in adults. If two people can describe a situation or a phenomenon using roughly the same words, then we know they’re at least on the same page, even if they disagree on other matters. The concept of ‘sameness’ is, however, a subtle thing, and one we don’t necessarily understand completely.

3. Narrative

Once upon a time humans discovered names and descriptions. Then we started to arrange our descriptions in time. Thus, we invented stories. The hearing and telling of stories is central to growing up, and seems to bootstrap our integration into society. Since our earliest recorded history, stories have been a crucial way for us to understand and express nature and the human condition. The spirits and gods of mythology were, among other things, animating principles that helped account for both the chaos and the order that humans discover in the world. The interactions between these anthropomorphic forces typically took the form of stories. In the great myths that still enchant billions worldwide, these stories gradually grew into baroque narrative complexes featuring bizarre family trees, wars, curses, magical boons, reincarnations, and, eventually, moral and ethical lessons. But simply listening to a narrative may not always be enough to understand what it signifies: we may need to engage in discourse — a far more elaborate form of language use.

Before we get to discourse, we ought to pause to recognize that the murky concept of sameness or similarity crops up in narrative too. To be anthropomorphic means to be human-like. This suggests that myth-makers were capable of looking at natural phenomena — in all their inhumanity and particularity — and abstracting out certain general features that they saw as overlapping with human behavior. Over time, several ancient civilizations seem to have decided that the human being was an inadequate yardstick for measuring phenomena: they gradually created more abstract conceptual entities, tying properties together in ways that wouldn’t make sense for anything human-like. Very few humans would identify themselves with descriptors like ‘omnipresent’ or ‘omnipotent’.

Modern society may disdain the mythological modes of understanding, but narrative remains central to how the general public understands science. The most widely read pop science books tend to have a narrative form: pioneering scientists might be described as heroes defeating the monsters of ignorance, or taming wild natural forces. Or the scientific ideas themselves come to be expressed in story form. Selfish genes and blind watchmakers are narrative devices: they allow us to see the universe in terms familiar even to children. We can relate to these concepts. Once again we see that the act of comparison serves as a basis for understanding.

This is not to say that narratives are always wrong or misleading (though in the case of selfish genes they are): scientists themselves often require a central narrative in order to make sense of their own results and communicate them to their peers. Regardless of how complex the tools of science become, we seem inexorably drawn towards accounts of nature that take the form of stories. Perhaps the irritation that many feel when confronted by quantum physics stems ultimately from how un-story-like it can seem. Big Bang cosmology, by contrast, has something in common with the ‘let there be light’ narrative from Genesis. And perhaps the most strikingly narrative form of understanding may be the idea that the whole universe is some kind of computer program. A program is, after all, a kind of story. An algorithm is a set of step-by-step instructions, executed one by one, like a plot unfolding in sequence. Digital physics seems to resonate with the opening of the Gospel of John: “In the beginning was the word, and the word was with God, and the word was God”. (Presumably the computationalists would replace ‘God’ with some kind of variable declaration or header file.)

4. Discourse

It’s hard to imagine that there was ever a human society that relied solely on narrative forms of communication. Our earliest language-equipped ancestors most likely used a wide spectrum of tools: commands, questions, pleas, exclamations, exhortations, prayers, songs and so on. We can imagine that simple forms of explanation existed from the earliest times: they must have involved versions of show-and-tell. “Here is how you chip a stone to make a sharp tool,” or “Here’s how you start a fire.” Ordinary language is the bridge that links know-how with know-what. Overarching philosophical and scientific theories were perhaps unnecessary for simply getting by. This basic form of understanding is central to acquiring physical skills: riding a bike, playing an instrument, or even using a scientific instrument. Communicating this kind of knowledge typically requires hands-on practice and two-way interaction with a teacher. It is therefore very closely related to instinct and intuition: a gifted musician may not always be able to explain in words, even to herself, how she achieves a particular sound.

In our repertoire of questions, we have the practical-minded ‘what’ and ‘how’, but also the type of question that may be the most mischievous of all: ‘why’. It is easy to imagine some stone-age firestarter wondering why exactly hitting flint stones together produced a spark. In modern times, some scholars have attempted to explain mythology in terms of responses to such questions. Perhaps exceptionally long stories were ways for adults to deal with the endless series of whys that children are especially prone to ask? With a long enough story the questioner would eventually just fall asleep or get bored! In this light, mythology (and religion more generally) becomes a kind of failed attempt at scientific explanation, mixed with some vague desire to silence a question without actually answering it. This strikes me as a case of reading modern intentions into the minds of ancient people. When examined in detail, mythology routinely confounds modern analysis. Joseph Campbell might see Jungian psychological archetypes in myth. Others might see garbled histories, or even encrypted science and mathematics. As the religion scholar James P. Carse writes in Finite and Infinite Games, “Mythology provokes explanation but accepts none of it.”

Theories of mythology constitute a genre of explanation that exemplifies both the strengths and the weaknesses of discursive thought. Such explanations often ring true: they seem to tie together disparate notions we already believed about human psychology and history. They can often be quite inspirational, stimulating art and literature. Star Wars would not exist without The Hero With a Thousand Faces.

But of course even a wildly inaccurate idea can be stimulating. Perhaps these explanations arrive at intuitive ‘truthiness’ by exploiting the shortcuts to the ‘centers of understanding’ (that Žižek may or may not have discovered in me). Theories of mythology don’t come with any clear test of truth or internal consistency. Nevertheless, we employ some comparison process when we assess these theories: that’s what allows us, for example, to recognize additional supporting evidence. “Yes of course, this other myth I read also fits Joseph Campbell’s monomyth pattern.” As is well known, we seem to be much worse at seeking out falsifying evidence.

Discourse and narrative are the primary modes of public understanding in modern society. In politics, people do not derive their beliefs from a set of logical principles, or test them in a laboratory. Even when presented with evidence that their notions might be problematic, people often dig in further, coming up with reasons why the evidence might be untrustworthy or interpreted differently. Here we encounter a fundamental aspect of understanding: it does not simply rely on agreement between an explanation and the outside world, but on subjective coherence: the agreement between an explanation and other bits of knowledge and know-how. This is true of all the ways of knowing that I list in this essay, but it becomes most pronounced here, in the domain of informal discourse.

In fact informal knowledge supplies humanity with one of its greatest barriers to understanding: common sense. Common sense allows us to readily deploy the received wisdom of society, navigating the typical challenges of life without too much difficulty (or rather, the precise amount of difficulty that common sense tells us to expect). As I mentioned earlier, different forms of knowing influence each other: this is particularly pronounced for the link between intuition and discursive common sense. Common sense understandings of the world seem to influence how we perceive the world intuitively. In this way, derived knowledge comes to seem natural, and therefore intrinsic to the world rather than to the mind and to society.

This was vividly illustrated to me recently in the context of human color vision. It is widely acknowledged by scientists that color is not ‘out there’ in the world in the way that matter is: it is a complex product of the interaction between light and the visual system. Only this way of framing things can explain, for example, the fact that color mixing works. Beams of red- and green-wavelength light act together on the visual system to produce the sensation of yellow, but the beams of light do not affect each other ‘out there’. The red- and green-wavelength beams of light do not combine to produce yellow-wavelength light: the seeing of yellow in this case happens without any yellow light in the external world. Similarly, only this perspective explains the strange phenomenon of magenta. There is no magenta-wavelength light: the perception of magenta can only occur when blue and red light (which are on the opposite ends of the wavelength spectrum) arrive at the eyes. Magenta might best be described as ‘the absence of green’.
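The point is easy to check for yourself, because a screen emits only red, green and blue light, and yet mixtures of these are seen as yellow and magenta. Here is a minimal Python sketch of the demonstration (purely illustrative; it assumes numpy and matplotlib are installed):

```python
# Additive color mixing on a display: the screen emits only red, green and
# blue light, yet the mixtures below are seen as yellow and magenta.
# (Illustrative sketch; assumes numpy and matplotlib are available.)
import numpy as np
import matplotlib.pyplot as plt

red = np.array([1.0, 0.0, 0.0])
green = np.array([0.0, 1.0, 0.0])
blue = np.array([0.0, 0.0, 1.0])

swatches = {
    "red + green (seen as yellow)": np.clip(red + green, 0, 1),
    "red + blue (seen as magenta)": np.clip(red + blue, 0, 1),
}

fig, axes = plt.subplots(1, len(swatches), figsize=(6, 2))
for ax, (label, rgb) in zip(axes, swatches.items()):
    ax.imshow(np.ones((10, 10, 3)) * rgb)  # a solid patch of the mixed light
    ax.set_title(label, fontsize=9)
    ax.axis("off")
plt.show()
```

No yellow- or magenta-wavelength light is produced at any point; the mixing happens in the visual system, not on the screen.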

All of this is admittedly counter-intuitive. So much so that when I explained this on the question-and-answer site Quora, I received a barrage of angry comments. My interlocutors insisted that color was really out there in the world; a few even accused me of intentionally spreading misinformation! All this stemmed from their common sense picture of the world, which arises not just from primordial know-how, but from our social naming conventions and our discursive traditions. Most societies tend to assign properties to objects themselves rather than to the interaction between self and object. When people compare a counter-intuitive assertion (“colors are ‘in your head’”) with their background understanding (“the properties of things are in the things”), the results sometimes come out in favor of the incorrect background understanding.

The only way out of this hole is systematic thought. Unfortunately, our educational systems routinely fail to impart this way of knowing to students, so they are unable to overcome their common sense — even in situations such as color vision, where all the necessary evidence is readily available (particularly if you have a computer or smartphone).

5. Philosophy & logic

Systematic thought is easier said than done, however. Many of the earliest forms of philosophy involve deduction: deriving specific truths from general principles that are supposedly self-evident. On the diagram I drew, philosophy and logic are diametrically opposed to intuition. As far as I can tell that was a fluke, but this much is true: philosophers have always been able to use systems of thought to construct highly counter-intuitive statements, which they then place credence in (or at least claim to). Perhaps the most notorious example is Zeno’s paradox. Here is one of the forms of the paradox:

“In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead.” – as recounted by Aristotle, Physics VI:9, 239b15

Zeno used this line of thinking to come to the conclusion that all forms of motion are illusory. He employed a method of proof that is still used in philosophy — the reductio ad absurdum — to arrive at a wholesale rejection of one of the most intuitively (and experimentally) obvious things one can imagine: change itself. (One might conclude that logic should be abandoned, but this in itself may be a reductio ad absurdum.)
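For what it is worth, the standard modern resolution is that Zeno’s infinitely many stages add up to a finite distance and a finite time. Suppose, purely for illustration, that Achilles runs ten times as fast as the tortoise and gives it a 100-metre head start. The successive gaps he must close form a geometric series:

$$100 + 10 + 1 + \tfrac{1}{10} + \cdots \;=\; \sum_{n=0}^{\infty} 100\left(\tfrac{1}{10}\right)^{n} \;=\; \frac{100}{1 - \tfrac{1}{10}} \;=\; \frac{1000}{9} \approx 111.1\ \text{metres}$$

Achilles draws level after a little more than 111 metres: the infinity of stages is an infinity of ever-smaller terms, not an infinity of distance or time.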

No doubt ancient philosophy did not always lead to nonsensical conclusions, but one suspects that this was because some philosophers relied on more than just logic: they were able to employ what we would retrospectively label experimental science. In this way they could find a happy medium between faulty intuition and faulty reasoning. Even mathematics, which to this day enjoys a nebulous status (is it pure human rationality? is it in some sense a science?) allows for some forms of experimental checking. Euclid’s Elements, which may represent the high-water mark of ancient reasoning based on first principles, involved proofs that could be checked using the tools of geometry. As the number of successes mounted with every check of this sort, intuition itself must have been modified in Euclid and his peers and followers, allowing them ultimately to see the postulates as self-evident.

But even philosophers who were capable of this kind of balance were often led down the garden path by ‘reasoning’: Aristotle, for example, apparently believed that women had fewer teeth than men. I suspect that both the power and the danger of systematic thought come from its simplicity. First principles are an intellectual Swiss Army knife: they are a set of tools that are easy to carry with you, and with which you can do quite a bit. If used carefully, formal reasoning can guide thinking quite well in a limited set of contexts. But if you try to apply it in new contexts, particularly those in which testing is difficult or impossible, there is no guarantee of accurate results. One might think that this was simply a bug in pre-scientific philosophy, but that would be a mistake. Even our most advanced and mathematically accurate sciences can lead in incorrect directions. As physicists Robert Laughlin and David Pines wrote in their paper ‘The Theory of Everything’ from 2000:

“But the schemes for approximating are not first-principles deductions but are rather art keyed to experiment, and thus tend to be the least reliable precisely when reliability is most needed, i.e., when experimental information is scarce, the physical behavior has no precedent, and the key questions have not yet been identified. There are many notorious failures of alleged ab initio computation methods…”

When we are in the domain of the untested and unprecedented, our thought-systems often fail quite miserably. Why then, given this kind of history, do people persist in searching for overarching theories of everything?

The first and most obvious answer is that we can’t be sure what will be testable in the future. For now, string theory might be metaphysical speculation, but some clever experimentalist might some day find a way to test it. But possible future usefulness does not strike me as the primary reason we seek out all-encompassing theories. After all, philosophical and religious theories over the centuries have rarely been motivated by modern expectations of scientific testability. I think the answer lies in the following comment by Ludwig Wittgenstein:

“Remember that we sometimes demand explanations for the sake not of their content, but of their form. Our requirement is an architectural one; the explanation a kind of sham corbel that supports nothing.”

Explanations are not merely useful in a practical sense: they are also beautiful. A grand theory is a structure, and can therefore be judged according to aesthetic principles. And because one person’s meat is another’s poison, we should expect theories to proliferate in places where testing either cannot be performed or hasn’t yet been performed. Thus, near the border-territory of experimental physics we have competing cosmological theories (multiverses, modified gravity and so on), and just outside the borders we find string theory, as well as the various interpretations of quantum mechanics. Still further out we find the non-mathematical ontologies that many people feel an intuitive need for. As long as the reach of these theories exceeds their grasp, we have to assume that we prefer one over its competitors for aesthetic reasons.

Beyond aesthetics, I think a grand theoretical scheme provides a map for human knowledge and behavior. Maps are perhaps the paradigmatic example of a model. Armed with a conceptual map, it seems as if uncertainty has been banished: the map’s roads, highways and footpaths — many of them purely imaginary — link disparate domains of human experience. If you intended to travel from point A to point B, a map would tell you what to expect along the way. Avoiding surprise in this way appears to be one of the central goals of learning. One popular theory of ‘computational aesthetics’ argues that ‘interestingness is the first derivative of beauty’: we seek out new things not simply for how they make us feel, but because they promise to improve our existing categories of experience, thereby reducing surprise in the future. So, perhaps paradoxically, seeking novelty is how organisms attempt to conquer it.

Grand but untested theories are strange in this regard, because we can’t really use them to navigate the external world. However, we can use them to navigate the internal world: our previously-existing knowledge. A systematic theory of the universe is a mnemonic: a conceptual filing system for locating information in one’s own head, if nowhere else. Perhaps this is the sense in which philosophy is seen as a form of therapy. Philosophy tends to leave the world as it is… but it does not leave the philosopher as she is. Those seeking grand theories must be the neat freaks of the intellectual world: they seek a place for everything and everything in its place.

6. Qualitative science

I’ve already touched on science in the previous segment, but it really only comes into its own when it goes beyond philosophical and even logical theorizing. Frustratingly, the differences between science and other forms of understanding are rarely understood even by educated non-scientists. People who come up with crank science seem to view knowledge solely in terms of aesthetic principles and intuitive intelligibility, rather than in terms of alignment between explanation and experiment.

All science (and all experience) starts out with subjective qualities. When groups of people agree on these qualities, they can commence the quest for invariants: the events that recur in multiple contexts. Knowing the contexts that predict the recurrence of events is the basis for all science and technology. The earliest of these events to be recognized may have involved the interlocking cycles of days and months and seasons.

It can seem as if quantification is essential to science, but this is not always the case. The best example of this is Charles Darwin’s theory of evolution by natural selection. It arose well after the scientific revolution ushered in by Isaac Newton, but it was far less quantitative than the physics of the time. No doubt Darwin investigated how the distribution of traits in a species changes from one generation to the next, but for the purposes of his theory, only very rough numbers were needed to make the case. Darwin’s theory requires us to believe four eminently reasonable ideas:

  1. Organisms inherit some of their traits from their parents.
  2. Variation of heritable traits arises.
  3. New traits affect the fitness of the organism with respect to its environment.
  4. The fitness of an organism affects the relative number of offspring it produces compared to competitors.

If one accepts these postulates, then one really ought to assent to the idea that natural selection will lead to the proliferation of particular traits in a population. Prior awareness of human heredity and selective breeding in plants and animals helped many people get past their common sense notions of unchanging design or the inheritance of acquired traits. Once fellow scientists were able to wrap their heads around the theory, they could join Darwin in searching for further evidence: transitional fossils and common features in related species. Assessing the evidence relies on intuitive notions of similarity — notions that we still haven’t made completely explicit even in the 21st century. Regardless, we can see that qualitative theories clearly allow for prediction, and therefore the feedback and self-correction that mark the best examples of modern science.
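To see how little quantitative machinery the argument actually needs, here is a toy simulation of the four postulates. Everything about it — the fitness advantage, the mutation rate, the population size — is invented purely for illustration; none of it comes from Darwin:

```python
# A toy sketch of the four postulates: inheritance, variation, fitness
# differences, and fitness-dependent reproduction. All numbers are invented
# for illustration; trait "B" is given a modest fitness advantage.
import random

def simulate(generations=50, pop_size=1000, mutation_rate=0.01):
    fitness = {"A": 1.0, "B": 1.1}        # postulate 3: traits affect fitness
    population = ["A"] * pop_size         # start without any "B" individuals
    for gen in range(generations):
        # Postulate 4: expected offspring numbers track relative fitness.
        weights = [fitness[trait] for trait in population]
        # Postulate 1: offspring inherit the parental trait...
        offspring = random.choices(population, weights=weights, k=pop_size)
        # ...and postulate 2: variation (here, rare mutation) keeps arising.
        population = [
            ("B" if t == "A" else "A") if random.random() < mutation_rate else t
            for t in offspring
        ]
        if gen % 10 == 0 or gen == generations - 1:
            print(f"generation {gen:3d}: share of trait B = {population.count('B') / pop_size:.2f}")

simulate()
```

Run it a few times and the fitter trait spreads through the population within a few dozen generations, which is all the qualitative argument asks us to accept.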

7. Quantitative science

This aspect of science is so well known that it barely requires further elaboration. When the qualities recognized through perception are aligned with external measuring devices, we turn the process of comparison into an objective act that can be performed by almost anyone. The acts of measuring and counting bring mathematics into contact with qualities, creating the ‘hard’ sciences. Mathematical prediction involves a translation process. Qualities perceived in an experiment become associated with measurable quantities, and then represented as abstract symbols. These symbols are arranged in mathematical statements, which are manipulated using the laws of science and mathematics (and, as Laughlin and Pines point out, a touch of art), resulting in new mathematical expressions. The abstract symbols are then replaced by known quantities, allowing us to predict the values of unknown quantities.
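A schoolbook example makes the translation loop concrete: measure the length of a pendulum, let the measurement become the symbol L, push the symbols through the small-angle formula T = 2π√(L/g), and translate the result back into a number a stopwatch can check. The measurements below are invented for illustration:

```python
# Sketch of the quality -> quantity -> symbol -> prediction loop, using the
# small-angle pendulum formula T = 2*pi*sqrt(L/g). The measurements are
# invented for illustration.
import math

L = 0.80            # measured length of the pendulum, in metres
g = 9.81            # accepted value of gravitational acceleration, m/s^2

predicted_period = 2 * math.pi * math.sqrt(L / g)   # symbolic manipulation
measured_period = 1.81                              # hypothetical stopwatch reading, in seconds

print(f"predicted period: {predicted_period:.2f} s")
print(f"measured period:  {measured_period:.2f} s")
print(f"discrepancy:      {abs(predicted_period - measured_period):.2f} s")
```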

It is important to recall that mathematical prediction predates the scientific revolution. Ancient peoples armed with rudimentary mathematical tools were capable of predicting the movements of the sun, the moon, and the visible planets. But their methods were somewhat ad hoc: Ptolemaic epicycles were a form of curve-fitting. They could be used to approximate any periodic movement, but they didn’t suggest a unifying physical principle. Newton’s physics provided a starting point for a new way of thinking: objects in space and objects on earth might appear to behave very differently, but they were similar in that they obeyed the same mathematically-defined physical laws.
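The sense in which epicycles amount to curve-fitting can be made precise: any periodic path can be approximated by summing uniform circular motions, which is exactly what a truncated Fourier series does. A small sketch, with an invented ‘observed’ orbit standing in for planetary data:

```python
# Epicycles as curve-fitting: sums of uniform circular motions (complex
# exponentials) approximate a periodic path, and the fit improves as more
# epicycles are added. The 'observed' orbit is invented for illustration.
import numpy as np

n = 512
t = np.linspace(0, 1, n, endpoint=False)
observed = np.exp(2j * np.pi * t) * (1 + 0.3 * np.abs(np.sin(2 * np.pi * t)))  # a lumpy closed orbit

coeffs = np.fft.fft(observed) / n          # one coefficient per candidate epicycle
freqs = np.fft.fftfreq(n, d=1.0 / n)       # whole-number cycles per revolution

for k in (1, 3, 7):                        # fit with 1, 3, and 7 epicycles
    keep = np.argsort(np.abs(coeffs))[::-1][:k]
    approx = sum(coeffs[i] * np.exp(2j * np.pi * freqs[i] * t) for i in keep)
    rms = np.sqrt(np.mean(np.abs(observed - approx) ** 2))
    print(f"{k} epicycle(s): RMS deviation from the observed orbit = {rms:.3f}")
```

The fit gets better with each added circle, but nothing in the procedure says why the orbit has the shape it does — which is precisely what Newton’s laws supplied.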

A physical law captures a regularity in the universe: an analogical relationship between measurables, often linked with each other through hypothetical or not-yet-measured entities. One might ask why these mathematical tools work at all. In the 20th century, one famous physicist, Eugene Wigner, pondered 'The Unreasonable Effectiveness of Mathematics in the Natural Sciences'. What gives mathematics the power to align with the physical world? Why is it possible to predict aspects of the world using symbols that seem so different from the things they represent?

8. Models and simulations

The question of the effectiveness of mathematics seems to defy any easy intuitions, and remains a topic of speculation. But another tool of science rarely evokes the same sort of chin-scratching: the model or simulacrum. As I mentioned earlier, a map may be the paradigmatic example of a model. It has a structural relationship with the thing it represents. This relationship is so intuitive to most people, even children, that we might find it funny if a cartographer were to write a paper called 'The Unreasonable Effectiveness of Maps in the Geographical Sciences'. Maps and models appeal directly to our intuitive sense of similarity, and for this reason rarely call out for explanation in terms of our other cognitive tools. The simplest models also obviate the need for an elaborate theory of causality. When you see how clockwork functions, any further explication seems superfluous. I remember putting together a Lego Technic car as a child: it had a working steering wheel and an engine with moving pistons. I don’t think a verbal explanation of what was happening would have added much to my intuitive feeling of comprehension.

When I was ruminating over the content of this essay, I thought of this list of ways of knowing as a chronological series, with computational models of the sort that I do being the newest tool in the toolbox. Compared to the tools available to Newton, a computational model can seem like a qualitative leap. But I quickly realized that models as such have been around for quite a while. And not just maps. Ancient and medieval architects, for example, did not build large structures through full-scale trial and error: they experimented with smaller models or maquettes. Even more elaborate physical models existed in antiquity: the Antikythera Mechanism, a kind of clockwork analog computer, depicted the movements of the planets in a way that could be directly compared with observation.

Modern computational models — of weather systems, economies, biological neural networks and other complex systems — represent a coming together of various other forms of understanding. They combine qualitative and quantitative scientific observations, but do not simply spit out measurable predictions. They make the task of comparison between quantities and reality easier by translating the quantities back into qualities. A weather model might produce something that looks like real satellite imagery. A neural model might produce something that looks like the electrophysiological recordings from a real brain. The similarity can be quantified to assess how well the model fits reality, but in the case of a truly successful model, this can seem unnecessary: the output of the model and the experimental measurement can be superimposed on each other. In other words, the output of the model is virtually indistinguishable from the outcome of a corresponding experiment.
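When a number is wanted for the resemblance, the comparison itself is usually simple: a root-mean-square difference or a correlation between the model’s output and the recording. A sketch, with invented traces standing in for both:

```python
# Quantifying how well a model run matches a recording: two common summary
# numbers are the root-mean-square error and the correlation coefficient.
# Both series below are invented stand-ins, purely for illustration.
import numpy as np

t = np.linspace(0, 1, 500)
recording = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)   # 'experimental' trace
model_run = np.sin(2 * np.pi * 5 * t + 0.05)                            # model output, slightly off

rmse = np.sqrt(np.mean((model_run - recording) ** 2))
corr = np.corrcoef(model_run, recording)[0, 1]
print(f"RMSE: {rmse:.3f}   correlation: {corr:.3f}")
```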

Modern computational models often attempt to go beyond mere surface similarity (which might be dangerously close to 'cargo cult science'). In the case of my field, neuronal modeling, this might amount to creating a model of a brain region (or the whole brain) that is composed of model neurons connected just as real neurons are in the brain. The goal of such modeling is not simply to fit the ‘global’ phenomenon uncovered by an experiment (which turns out to be relatively easy), but to predict (without fixing parameters each time) what will happen in new situations, or when individual parts go wrong. Thus the ideal neuron-based model of the brain would show how a particular injury or genetic disorder might affect psychology or behavior, and then also show how doctors might treat the symptoms or even restore the brain to its earlier state. A detailed model resembles the actual phenomenon at multiple levels of investigation.
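For concreteness, this is the kind of idealized unit such models are often assembled from: a leaky integrate-and-fire neuron, reduced to a few lines. It is a generic textbook simplification rather than a description of any particular model, and the parameter values are merely illustrative:

```python
# A minimal leaky integrate-and-fire neuron: membrane voltage decays toward
# rest, is pushed up by input current, and emits a spike (then resets) when
# it crosses threshold. Parameters are illustrative, textbook-style values.
import numpy as np

dt, tau = 0.1, 10.0                               # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -54.0, -70.0   # resting, threshold and reset voltages (mV)
v = v_rest
spike_times = []

time = np.arange(0, 200, dt)                  # 200 ms of simulated time
current = np.where(time > 50, 2.0, 0.0)       # step input current after 50 ms (arbitrary units)

for t_now, i_now in zip(time, current):
    dv = (-(v - v_rest) + 10.0 * i_now) / tau  # leak plus scaled input
    v += dv * dt
    if v >= v_thresh:                          # threshold crossing: spike and reset
        spike_times.append(t_now)
        v = v_reset

print(f"number of spikes in 200 ms: {len(spike_times)}")
```

A whole-brain model of the kind described above would wire up enormous numbers of units like this (or far more detailed ones), which is exactly where the intelligibility problem discussed next begins.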

No model of any complex system has reached this level of alignment with its target. It may simply be a matter of time, but it may also be a strange consequence of the nature of modeling. A model, whether physical or computational, is itself a physical phenomenon: it does not exist in some Platonic realm. As we make a simple intuitive model more complicated, it may no longer be intelligible. When small and seemingly simple things are put together, complex emergent behavior often arises. Thus the paradox of modeling — for many people the most intuitively satisfying form of understanding, and therefore ‘adjacent’ to intuition in my diagram — is that our very attempts to make a model accurate take it out of the realm of understanding, and push it into the realm of the phenomena in need of explanation. As Paul Valéry said, “Everything simple is false. Everything complex is unusable.”

[The ‘ways of knowing’ diagram]

The ‘ways of knowing’ diagram makes intuition seem like just one of several modes of knowledge. But intuition is more than that. It serves as the connective tissue linking all the other forms: it seems always present in the background, even when we engage in highly symbolic forms of reasoning. Consider linguistic knowledge. Let’s say you’re asked to speak extemporaneously about something you know a lot about. Where do your words come from? For most people I suspect they just emerge out of nowhere. There is no consciously perceptible ‘staging zone’ where words are prepped before being ejected from the mouth.

Instinct and intuition suffuse all other forms of knowing. Our basic ability to compare our knowledge, and the actions stemming from our knowledge, with the world and with the body seems to occur in this domain, just on the fringes of conscious experience. No diagram of ways of knowing can capture this, because we still don’t know how it works. I now think that what I have represented is merely the conscious shadow of knowing: the aspects of understanding that we can represent consciously to ourselves and therefore communicate with each other. Intuition serves more as a placeholder than a truly fleshed-out concept. We know it’s there, but we don’t really know how it develops, when it’s right, and when it’s wrong.

The suspicion I felt when I experienced ‘truthiness’ during the Žižek talk might simply be a necessary way of dealing with the fact that there are two modes of knowing: explicit and implicit. It seems as if intuition’s role is to point the conscious mind in the direction of potential discovery. The role of the conscious mind in turn is to ensure that intuition is well-trained.

Unlike the other points on the octagon, intuition cannot be taught or imparted directly. At best, we can show people how to do things as we do, and hope that eventually their intuitions align with ours. This is what happens in the case of science and mathematics education, but it is also a guiding spirit in art, music, cooking… every technique that humans are capable of learning. Our explicit ways of knowing grow out of an unconscious reservoir of implicit knowledge, all the while modifying it and being modified by it. Our conscious ways of knowing therefore seem to represent the visible tip of an iceberg of unknown depth. Perhaps to be truly educated, regardless of the field, is to regularly traverse the loop from instinct to explicit knowledge and back again. Only very rarely can the conscious will translate implicit knowing into explicit knowledge. But then again perhaps it doesn’t even need to: perhaps wisdom consists in accepting that there are forms of knowing that can only become manifest when the will recognizes its limits.

——

The diagram was drawn in Inkscape. Other images were taken from Wikipedia.