Monday Musing: The Palm Pilot and the Human Brain, Part II

Part II: How Brains Might Work

Two weeks ago I wrote the first part of this column, in which I attempted to explain how we are able to design very complex machines like computers: we do it by employing a hierarchy of concepts, each layer of which builds upon the layer below it, ultimately allowing computers to perform seemingly miraculous tasks like beating Garry Kasparov at chess at the highest levels of the hierarchy, while all the way down at the lowest layers, the only thing going on is that some electrons are moving about on a tiny wafer of silicon according to simple physical rules. [Photo shows Kasparov in Game 2 of the match.] I also tried to explain what gives computers their programmable flexibility. (Did you know, for example, that Deep Blue, the computer which drove Kasparov to hair-pulling frustration and humiliation in chess, now takes reservations for United Airlines?)

But while there is a difference between understanding something that we ourselves have built (we know what the conceptual layers are because we designed them, one at a time, after all) and trying to understand something like the human brain, designed not by humans but by natural selection, there is also a similarity: brains also do seemingly miraculous things, like the writing of symphonies and sonnets, at the highest levels, while near the bottom we just have a bunch of neurons connected together, digitally firing away (these firings are called action potentials), again according to fairly simple physical rules. (Neuron firings are digital because a neuron either fires or it doesn’t, like a 1 or a 0; there is no such thing as half of a firing or a quarter of one.) And like computers, brains are also very flexible at the highest levels: though they were not designed by natural selection specifically to do so, they can learn to do long division, drive cars, read the National Enquirer, write cookbooks, and even build and operate computers, in addition to a million other things. They can even turn “you” off, as if you were a battery-operated toy, if they feel they are not getting enough oxygen, thereby making you collapse to the ground so that gravity can help feed them more of the oxygen-rich blood they crave (you know this well if you have ever fainted).

To understand how brains do all this, this time we must attempt to impose a conceptual framework on them from the outside, as it were; a kind of reverse-engineering. This is what neuroscience attempts to do, and as I promised last time, today I would like to present a recent and interesting attempt to construct just such a scaffolding of theory on which we might stand while trying to peer inside the brain. This particular model of how the brain works is due to Jeff Hawkins, the inventor of the Palm Pilot and the Treo Smartphone, and a well-respected neuroscientist. It was presented by him in detail in his excellent book On Intelligence, which I highly recommend. What follows here is really just a very simplified account of the book.

Let’s jump right into it then: Hawkins calls his model the “Memory-Prediction” framework, and its core idea is summed up by him in the following four sentences:

The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence. (On Intelligence, p. 6)

Hawkins focuses mainly on the neocortex, which is the part of the brain responsible for most higher-level functions such as vision, hearing, mathematics, music, and language. The neocortex is so densely packed with neurons that no one is exactly sure how many there are, though some neuroscientists estimate the number at about thirty billion. What is astonishing is to realize that:

Those thirty billion cells are you. They contain almost all your memories, knowledge, skills, and accumulated life experience… The warmth of a summer day and the dreams we have for a better world are somehow the creation of these cells… There is nothing else, no magic, no special sauce, only neurons and a dance of information… We need to understand what these thirty billion cells do and how they do it. Fortunately, the cortex is not just an amorphous blob of cells. We can take a deeper look at its structure for ideas about how it gives rise to the human mind. (Ibid., p. 43)

The neocortex is a thin sheet consisting of six layers which envelops the rest of the brain and is folded up in a crumpled way. This is what gives the brain its walnutty appearance. (If completely unfolded, it would be quite thin–only a couple of millimeters–and would cover an area about the size of a large dinner napkin.) Now, while the neocortex looks pretty much the same everywhere with its six layers, different regions of it are functionally specialized. For example, Broca’s area handles the rules of linguistic grammar. Other areas of the neocortex have also been mapped out functionally in quite some detail by techniques such as looking at brains with localized damage (due to stroke or injury) and seeing what functions are lost in the patient. (Antonio Damasio presents many fascinating cases in his groundbreaking book Descartes’ Error.) But while everyone else was looking for differences in the various functional areas of the cortex, a very interesting observation was made in 1978 by a neurophysiologist at Johns Hopkins University named Vernon Mountcastle (I was fortunate enough to attend a brilliant series of lectures by him on basic physiology while I was an undergraduate!): he noticed that all the different regions of the neocortex look pretty much exactly the same and have the same structure, whether they process language or handle touch. And he proposed that since they have the same structure, maybe they are all performing the same basic operation, and that maybe the neocortex uses the same computational tool to do everything. Mountcastle suggested that the only difference among the various areas is how they are connected to each other and to other parts of the nervous system. Now Hawkins says:

Scientists and engineers have for the most part been ignorant of, or have chosen to ignore, Mountcastle’s proposal. When they try to understand vision or make a computer that can “see,” they devise vocabulary and techniques specific to vision. They talk about edges, textures, and three-dimensional representations. If they want to understand spoken language, they build algorithms based on rules of grammar, syntax, and semantics. But if Mountcastle is correct, these approaches are not how the brain solves these problems, and are therefore likely to fail. If Mountcastle is correct, the algorithm of the cortex must be expressed independently of any particular function or sense. The brain uses the same process to see as to hear. The cortex does something universal that can be applied to any type of sensory or motor system. (Ibid., p. 51)

The rest of Hawkins’s project now becomes laying out in detail what this universal algorithm of the cortex is, how it functions in different functional areas, and how the brain implements it. First he tells us that the inputs to various areas of the brain are essentially similar and consist basically of spatial and temporal patterns. For example, the visual cortex receives a bundle of inputs from the optic nerve, which is connected to the retina in your eye. These inputs in raw form represent the image that is being projected onto the retina, in terms of a spatial pattern of light frequencies and amplitudes and how this image (pattern) is changing over time. Similarly, the auditory nerve carries input from the ear to the auditory areas of the cortex, in terms of a spatial pattern of sound frequencies and amplitudes which also varies with time. The main point is that in the brain, input from different senses is treated the same way: as a spatio-temporal pattern. And it is upon these patterns that the cortical algorithm goes to work. This is why spoken and written language are perceived in a remarkably similar way, even though they are presented to us completely differently in simple sensory terms. (You almost hear the words “simple sensory terms” as you read them, don’t you?)
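Since this column began with computers, it may help to see that point in code. Below is a minimal sketch in Python; everything in it is my own illustration (the channel counts, the threshold, and the stand-in "cortical operation" are all made up), but it shows how, once every sense is encoded as the same kind of (channels x time) array of firings, a single piece of code can serve them all:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "retina": 16 spatial channels sampled over 50 time steps.
visual_input = rng.random((16, 50)) > 0.7     # binary firing pattern

# Toy "cochlea": 16 frequency channels over the same 50 time steps.
auditory_input = rng.random((16, 50)) > 0.7

def cortical_operation(pattern):
    """Stand-in for the hypothetical universal operation: it sees only
    a (channels x time) array of firings, so the very same code handles
    'vision' and 'hearing'."""
    return pattern.sum(axis=1)    # e.g., how active each channel was

print(cortical_operation(visual_input))    # works on either sense...
print(cortical_operation(auditory_input))  # ...because the format is the same
```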

Now we get to one of Hawkins’s key ideas: unlike a computer (whether sequential or parallel), the brain does not compute solutions to problems; it retrieves them from memory: “The entire cortex is a memory system. It isn’t a computer at all.” (Ibid., p. 68) To illustrate what he means by this, Hawkins provides an example: imagine, he says, catching a ball thrown at you. If a computer were to try to do this, it would attempt to estimate its initial trajectory and speed and then use some equations to calculate its path, how long it will take to reach you, etc. This is not anything like what your brain does. So how does your brain do it?

When a ball is thrown, three things happen. First, the appropriate memory is automatically recalled by the sight of the ball. Second, the memory actually recalls a temporal sequence of muscle commands. And third, the retrieved memory is adjusted as it is recalled to accommodate the particulars of the moment, such as the ball’s actual path and the position of your body. The memory of how to catch a ball was not programmed into your brain; it was learned over years of repetitive practice, and it is stored, not calculated, in your neurons. (Ibid., p. 69)
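To make the contrast concrete, here is a cartoon of the two strategies in Python. It is entirely my own illustration, not code from the book, and the stored "situations" and "muscle commands" are invented; the point is only the difference between calculating an answer from equations and retrieving a rehearsed one and adjusting it:

```python
import math

g = 9.8  # gravitational acceleration, m/s^2

def computed_catch(speed, angle_deg):
    # The "computer" way: solve the projectile equations explicitly.
    t_flight = 2 * speed * math.sin(math.radians(angle_deg)) / g
    landing_x = speed * math.cos(math.radians(angle_deg)) * t_flight
    return f"run to x = {landing_x:.1f} m"

# The "brain" way: a store of rehearsed situation -> muscle-sequence
# memories, built up over years of practice.
stored_catches = {
    (10, 30): ["step forward", "raise glove", "close hand"],
    (16, 50): ["step back", "raise glove high", "close hand"],
}

def remembered_catch(speed, angle_deg):
    # Retrieve the nearest remembered throw...
    nearest = min(stored_catches,
                  key=lambda s: abs(s[0] - speed) + abs(s[1] - angle_deg))
    plan = list(stored_catches[nearest])
    # ...then adjust it to the particulars of the moment.
    if angle_deg > nearest[1]:
        plan.insert(0, "drift back a little")
    return plan

print(computed_catch(12, 40))    # calculation from first principles
print(remembered_catch(12, 40))  # retrieval plus adjustment
```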

At first blush it may seem that Hawkins is getting away with some kind of sleight of hand here. What does he mean that the memories are just retrieved and adjusted for the particulars of the situation? Wouldn’t that mean that you would need millions of memories for every single scenario like catching a ball, because any two situations of ball-catching can differ in a million little ways? Well, no. Hawkins now introduces a way of getting around this problem, called invariant representation, which we will get to soon. Cortical memories are different from computer memory in four ways, Hawkins tells us:

  1. The neocortex stores sequences of patterns.
  2. The neocortex recalls patterns auto-associatively.
  3. The neocortex stores patterns in an invariant form.
  4. The neocortex stores patterns in a hierarchy.

Let’s go through these one at a time. The first feature is why, when you are telling a story about something that happened to you, you must go in sequence (and why people often include boring details in their stories!) or you may not remember what happened; it is like only being able to remember a song by singing it to yourself in sequence, one note at a time. (You couldn’t recite the notes backward–or even the alphabet backward very fast–while a computer could.) Even very low-level sensory memories work this way: the feel of velvet as you run your hand over it is just the pattern of very quick sequential nerve firings that occurs as your fingers run over the fibers. The sequence is different if you are running your hand over gravel, say, and that is how you recognize each texture. Computers can be made to store memories sequentially, such as a song, but they do not do this automatically, the way the cortex does.
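Here is a toy of that forward-only character in Python (again my own sketch, not Hawkins's circuitry). Notice that the stored links double as predictions: given the current item, the memory says what should come next, which is precisely the role memory plays in the memory-prediction framework:

```python
# All the memory stores is "what came next", so recall can only
# run forward from a cue.
letters = list("ABCDEFGHIJ")
next_item = {cur: nxt for cur, nxt in zip(letters, letters[1:])}

def recall_forward(start):
    out = [start]
    while out[-1] in next_item:      # each item cues (predicts) the next
        out.append(next_item[out[-1]])
    return out

print(recall_forward("A"))   # easy: A through J, one step at a time

def recall_backward(end):
    # No "previous" links were ever stored, so going backward means
    # replaying the sequence forward from the start again and again,
    # slow and laborious, much like a person reciting the alphabet
    # in reverse.
    out = [end]
    while out[-1] != letters[0]:
        item = letters[0]
        while next_item[item] != out[-1]:
            item = next_item[item]
        out.append(item)
    return out

print(recall_backward("J"))  # J back to A, the hard way
```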

Auto-associativity is the second feature of cortical memory and what it means is that patterns are associated with themselves. This makes it possible to retrieve a whole pattern when only a part of it is presented to the system.

…imagine you see a person waiting for a bus but can only see part of her because she is standing partially behind a bush. Your brain is not confused. Your eyes only see parts of a body, but your brain fills in the rest, creating a perception of a whole person that’s so strong you may not even realize you’re only inferring. (Ibid., p. 74)

Temporal patterns are also similarly retrieved and completed. In a noisy environment we often don’t hear every single word that someone is saying to us, but our brain fills in what it expects to have heard. (If Robin calls me on Sunday night on his terrible cell phone and says, “Did you …crackle-pop… your Monday column yet?” my brain will automatically fill in the word “write.”) Sequences of memory patterns recalled auto-associatively essentially constitute thought.
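The classic toy model of this kind of pattern completion is the Hopfield network, a standard textbook device rather than anything specific to Hawkins's proposal. In the minimal sketch below, a few patterns are stored by being associated with themselves, and a half-hidden cue settles onto the nearest stored pattern, the computational equivalent of perceiving the whole person behind the bush:

```python
import numpy as np

# Patterns are +1/-1 vectors; think of them as tiny binary "images".
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])

# Hebbian storage: each pattern is associated with itself.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def complete(cue, steps=5):
    """Settle a partial cue onto the nearest stored pattern."""
    state = cue.astype(float)
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1.0, -1.0)
    return state

# The person behind the bush: half of pattern 0 is hidden (zeroed out).
cue = patterns[0].copy()
cue[4:] = 0
print(complete(cue))   # the full pattern 0 comes back
print(patterns[0])     # the original, for comparison
```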

Now we get to invariant representations, the third feature of cortical memory. Notice that while computer memories are designed for 100% fidelity (every bit of every byte is reproduced flawlessly), our brains do not store information this way. Instead, they abstract out important relationships in the world and store those, leaving out most of the details. Imagine talking to a friend who is sitting right in front of you. As you talk to her, the exact pattern of pixels coming over the optic nerve from your retina to your visual cortex is never the same from one moment to another. In fact, if you sat there for hours, no pattern would ever repeat because both of you are moving slightly, the light is changing, etc. Nevertheless you have a continuous sense of your friend’s face being in front of you. How does that happen? Because your brain’s internal pattern of representation of your friend’s face does not change, even though the raw sensory information coming in over the optic nerve is always changing. That’s invariant representation. And it is implemented in the brain using a hierarchy of processing. Just to give a taste of what that means, every time your friend’s face or your eyes move, a new pattern comes over the optic nerve. In the visual input area of your cortex, called V1, the pattern of activity is also different each time anything in your visual field moves, but several levels up in the hierarchy of the visual system, in your facial recognition area, there are neurons which remain active as long as your friend’s face is in your visual field, at any angle, in any light, and no matter what makeup she’s wearing. And this type of invariant representation is not limited to the visual system but is a property of every sensory and cortical system. So how is this invariant representation accomplished?
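Hawkins's own answer is the subject of Part III, but here is a crude taste of the effect in code, using simple pooling (a common trick in artificial vision systems, and only a stand-in for whatever the cortex actually does). The raw input and the low-level activity change every time the feature moves, yet the top-level response stays put:

```python
import numpy as np

face = np.array([1, 1, -1])    # the "face" pattern we know

def level1(image):
    # Position-specific detectors, one per window (V1-like): their
    # activity changes whenever anything in the image moves.
    return np.array([face @ image[i:i + 3]
                     for i in range(len(image) - 2)])

def level2(responses):
    # A position-invariant unit (face-cell-like): it fires if the
    # feature matched perfectly *anywhere* in the visual field.
    return int(responses.max() == face @ face)

scene_left  = np.array([1, 1, -1, 0, 0, 0, 0])   # face on the left
scene_right = np.array([0, 0, 0, 0, 1, 1, -1])   # face on the right

# Raw inputs differ and level-1 activity differs, but level 2 is stable:
print(level2(level1(scene_left)), level2(level1(scene_right)))   # 1 1
```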

———————–

I’m sorry, but unfortunately I have once again run out of time and space and must continue this column next time. Despite my attempts to present Hawkins’s theory as concisely as possible, it cannot be condensed further without losing essential parts, and there is still quite a bit left. So I must (reluctantly) write a Part III to this column, in which I will present Hawkins’s account of how invariant representations are implemented, how memories are used to make predictions (the essence of intelligence), and how all this is implemented in hierarchical layers in the actual cortex of the brain. Look for it on May 8th. Happy Monday, and have a good week!

NOTE: Part III is here. My other Monday Musing columns can be found here.

Sunday, April 16, 2006

Medicine and Race

Also in the Economist, medicine factors in race:

LAST month researchers from the University of Texas and the University of Mississippi Medical Centre published a paper in the New England Journal of Medicine. They had studied three versions (or alleles, as they are known) of a gene called PCSK9. This gene helps clear the blood of low-density lipoprotein (LDL), one of the chemical packages used to transport cholesterol around the body. Raised levels of LDL are associated with heart disease. The effect of all three types of PCSK9 studied by Jonathan Cohen and his colleagues was to lower the LDL in a person’s bloodstream by between 15% and 28%, and coronary heart disease by between 47% and 88%, compared with people with more common alleles of the gene.

Such studies happen all the time and are normally unremarkable. But this was part of a growing trend to study individuals from different racial groups and to analyse the data separately for each group. The researchers asked the people who took part in the study which race they thought they belonged to and this extra information allowed them to uncover more detail about the risk that PCSK9 poses to everyone.

Yet race and biology are uncomfortable bedfellows. Any suggestion of systematic biological differences between groups of people from different parts of the world—beyond the superficially obvious ones of skin colour and anatomy—is almost certain to raise hackles.

How Women Spur Economic Growth

In the Economist:

[I]t is misleading to talk of women’s “entry” into the workforce. Besides formal employment, women have always worked in the home, looking after children, cleaning or cooking, but because this is unpaid, it is not counted in the official statistics. To some extent, the increase in female paid employment has meant fewer hours of unpaid housework. However, the value of housework has fallen by much less than the time spent on it, because of the increased productivity afforded by dishwashers, washing machines and so forth. Paid nannies and cleaners employed by working women now also do some work that used to belong in the non-market economy.

Nevertheless, most working women are still responsible for the bulk of chores in their homes. In developed economies, women produce just under 40% of official GDP. But if the worth of housework is added (valuing the hours worked at the average wage rates of a home help or a nanny) then women probably produce slightly more than half of total output.

The increase in female employment has also accounted for a big chunk of global growth in recent decades. GDP growth can come from three sources: employing more people; using more capital per worker; or an increase in the productivity of labour and capital due to new technology, say. Since 1970 women have filled two new jobs for every one taken by a man. Back-of-the-envelope calculations suggest that the employment of extra women has not only added more to GDP than new jobs for men but has also chipped in more than either capital investment or increased productivity. Carve up the world’s economic growth a different way and another surprising conclusion emerges: over the past decade or so, the increased employment of women in developed economies has contributed much more to global growth than China has.

A Close Look at Terror and Liberalism

Via Crooked Timber, The Couscous Kid over at Aaronovitch Watch has an extensive review of Paul Berman’s Terror and Liberalism (in 1, 2, 3, 4, 5, 6, 7 posts).

Tracing Berman’s arguments back to his sources isn’t always easy. There’s a “Note to the Reader” at the end that lists a few of the works consulted, but Berman habitually cites books without providing page references, and that irritates. (Terror and Liberalism doesn’t have an index, either, and that also irritates.) Sometimes you don’t need to chase up his references to find fault with the book. He calls Franz Ferdinand the “grand duke of Serbia” on p.32, for example, and he’s become the “Archduke of Serbia” by p.40, when he wasn’t either; Franz Ferdinand was the Archduke of Austria, and Serbia lay outside the Habsburg lands. (Funny, though, that the errors in basic general knowledge should come to light when it comes to dealing with Serbia and Sarajevo, of all places.) But much of the rest of the time, it’s an interesting exercise to compare what Berman says with what his sources say. I haven’t done this comprehensively in what follows (even I’ve got better things to do with my time), and I’m not saying anything in what follows about the two chapters on Sayyid Qutb because I haven’t read any of his works and don’t know much about him, apart from what Berman tells me, and, as will be clear from what follows, I don’t think Berman’s an entirely reliable source. But I have done a bit of checking around with some of the books that I’ve got to hand. How does Berman use his sources? Often carelessly, and not especially fair-mindedly, as we shall see.

battle in the brain


In debates over creationist doctrines, evolutionary biologists often are hard-pressed to explain how nature could make something as intricate as the human brain. Even Alfred Wallace, the 19th century biologist who discovered natural selection with Charles Darwin, could not accept that such a flexible organ of learning and thought could emerge by trial and error.

No two brains are exactly alike, despite their overall anatomical similarity. Each brain changes throughout a lifetime, altered by experience and aging. Even the simplest mental activities, such as watching a moving dot, can involve slightly different areas in different people’s brains, studies show.

Underlying every personal difference in thought, attitude and ability is an astonishing variety of brain cells, scientists have discovered.

more from the LA Times here.

the apology of peter beinart

The neoconservatives now pretty much argue that they’re the new anti-totalitarian liberals. They more or less accepted the principles of the New Deal in the ’50s and ’60s, and largely feel that they’ve carried on the tradition of liberal interventionism. What I’d like to know from you is this: what part of Schlesinger, Truman, and Scoop Jackson’s lunch have the neocons not eaten?

That’s an important purpose of the book, to argue against that idea, and I would say a couple of things. The first is that the recognition of American fallibility is a very critical element of the liberal tradition, very central to Niebuhr’s thinking, which then became an important element in the Truman administration. That idea manifests itself internationally in a sympathy for international institutions, a belief that while it’s possible that the United States can be a force for good—indeed, that America must be a force for good in the world, which is certainly what neocons believe—that America can also be a force for evil. That since America can be corrupted by unrestrained power, America should take certain steps to limit its power and to express it through international institutions. That, I think, is the first element of the liberal tradition that has been lost in neocon thinking.

The second element that’s been lost, I think, is the recognition that America’s ability to be a force for good in the world rests on the economic security of average Americans. The early neocons had a certain sympathy for the labor movement, and the labor movement was a very important part of Cold War liberalism, because the ability of the United States to be generous around the world really depended on the government’s willingness to take responsibility for the economic security of its own people. Of course, that would have to mean something different today than it did in the 1950s. But widespread economic security remains a very important basis upon which the United States can act in the world, because it maintains the support of the American people for that action. I think that has been lost in neocon thinking since they adopted the—as I see it— quite radical economic ideology of the American Right.

more from the interview at the Atlantic Unlimited here.

stage left


On April 17th, to mark the centennial of the birth of the playwright Clifford Odets, Lincoln Center Theatre will open a new production of “Awake and Sing!,” Odets’s first full-length play and the one that made him a literary superstar in 1935, at the age of twenty-eight. In the years that followed, this magazine dubbed Odets “Revolution’s No. 1 Boy”; Time put his face on its cover; Cole Porter rhymed his name in song (twice); and Walter Winchell coined the word “Bravodets!” “Of all people, you Clifford Odets are the nearest to understand or feel this American reality,” his friend the director Harold Clurman wrote in 1938, urging him “to write, write, write—because we need it so much.” “You are the Man,” Clurman told him.

more from The New Yorker here.

goytisolo


On a blazing blue afternoon last winter, I met the Spanish expatriate novelist Juan Goytisolo at an outdoor cafe in Marrakesh. It was easy to spot the 75-year-old writer, sitting beneath an Arabic-language poster of himself taped to the cafe window. He was reading El País, the Spanish newspaper to which he has contributed for decades. Olive-skinned, with a hawk nose and startlingly pale blue eyes, he had wrapped himself against the winter chill in a pullover, suede jacket, checked overcoat and two pairs of socks.

Considered by many to be Spain’s greatest living writer, Goytisolo is in some ways an anachronistic figure in today’s cultural landscape. His ideas can seem deeply unfashionable. For him, writing is a political act, and it is the West, not the Islamic world, that is waging a crusade. He is a homosexual who finds gay identity politics unappealing and who lived for 40 years with a French woman he considers his only love. “I don’t like ghettos,” he informed me. “For me, sexuality is something fluid. I am against all we’s.” The words most commonly used to describe his writing are “transgressive,” “subversive,” “iconoclastic.”

more from the NY Times magazine here.

How Bush’s Bad Ideas May Lead to Good Ones

From The Chronicle of Higher Education:

If, like me, you are in the business of ideas, the presidency of George W. Bush is a dream come true. That is not because the president is fond of the product I produce; on the contrary, he may be the most anti-intellectual president of modern times, a determined opponent of science, a man who values loyalty above debate among his associates. But governance is impossible without ideas, and by basing his foreign and domestic policies on so many bad ones, President Bush may have cleared the ground for the emergence of a few good ones.

Two recent books by writers long identified with conservative points of view — one dealing with foreign policy, the other with domestic concerns — suggest just how bad the ideas associated with the Bush administration have been; America at the Crossroads: Democracy, Power, and the Neoconservative Legacy (Yale University Press, 2006) and Impostor: How George W. Bush Bankrupted America and Betrayed the Reagan Legacy (Doubleday, 2006).

More here.

The Great Escape

From The Loom:

At the Loom we believe that the path to wisdom runs through the Land of Gross. We do not show you pictures of worms crawling out of frog noses merely to ruin your lunch. We do not urge you to check out these freaky videos of worms crawling out of frog mouths and fish gills merely to give you something to talk about at the high school cafeteria table tomorrow (Dude, you totally will not believe what I saw…) These images have something profound to say.

The worm in question is the gordian worm or horsehair worm, Paragordius tricuspidatus. It has become famous in recent months for its powers of manipulation. The gordian worm lives as an adult in the water, where the worms form orgiastic knots. They lay eggs at the edge of the water, which can only mature if they’re ingested by insects such as crickets. The worms feed on the inner juices of the crickets until they fill up the entire body cavity. In order to get back to the water, the gordian worms cause their hosts to hurl themselves into ponds or streams. As the insects die, the worms slither out to find the nearest mating knot.

More here.

Saturday, April 15, 2006

Danto on the Whitney Biennial

Arthur Danto on the 2006 Whitney Biennial, “Day for Night”, in The Nation:

The Biennial 2006 is in one sense exemplary: It gives a very clear sense of what American art is in the early twenty-first century. American art has been increasingly autonomous in recent times, and in large part concerned with the nature of art as such. To be sure, it has explored issues of identity politics and multiculturalism, and sometimes worn its political virtues on its sleeve. But gestures like Serra’s reflect artistic decisions, not something in the culture that the art passively mirrors. Even at its most political, the art here does not project much beyond the conditions of its production.

It would thus be a mistake to look to “Day for Night” for a reflection of the spirit of our time, much less a critique of what is wrong with the state of the world. By raising such expectations, “Day for Night” sets itself up for failure–through no fault of the art on view. Much of the work is smart, innovative, pluralistic, cosmopolitan, self-critical, liberal and humane. It might not aspire to greatness, or take much interest in beauty or in joy. But in general, the art in the Biennial mirrors a better world than our own, assuming, that is, it mirrors anything at all. Indeed, if contemporary art were a mirror in which we could discern the zeitgeist, the overall culture would have a lot going for it. The art doesn’t tell us that it is not morning in America, and we don’t need it to. We know that by watching the evening news.

Bellini’s Portraits of the Ottoman Sultan

In the Guardian, Orhan Pamuk writes about Gentile Bellini’s portraits of the Ottoman Sultan Mehmed II.


…It is Gentile Bellini’s “voyage east” and the 18 months he spent in Istanbul as “cultural ambassador” that is the subject of the small but rich exhibition at the National Gallery. Though it includes many other paintings and drawings by Bellini and his workshop, as well as medals and various other objects that show the eastern and western influences of the day, the centrepiece of the exhibition is, of course, Gentile Bellini’s oil portrait of Mehmed the Conqueror. The portrait has spawned so many copies, variations and adaptations, and the reproductions made from these assorted images have gone on to adorn so many textbooks, book covers, newspapers, posters, bank notes, stamps, educational posters and comic books, that there cannot be a literate Turk who has not seen it hundreds if not thousands of times.

No other sultan from the golden age of the Ottoman empire, not even Suleyman the Magnificent, has a portrait like this one. With its realism, its simple composition, and the perfectly shaded arch giving him the aura of a victorious sultan, it is not only the portrait of Mehmed II, but the icon of an Ottoman sultan, just as the famous poster of Che Guevara is the icon of a revolutionary. At the same time, the highly worked details – the marked protrusion of the upper lip, the drooping eyelids, the fine feminine eyebrows and, most important, the thin, long, hooked nose – make this a portrait of a singular individual who is none the less not very different from the citizens one sees in the crowded streets of Istanbul today. The most famous distinguishing feature is that Ottoman nose, the trademark of a dynasty in a culture without a blood aristocracy.

Darcy’s Secret Society

Darcy James Argue’s Secret Society has been receiving more attention, in Time Out New York and The New York Times. Having seen them a few times, all I can say is if you haven’t, you should. You can find listings of their next gigs, and recordings of some of the pieces, over at the Secret Society blog. From the Times:

DARCY JAMES ARGUE’S SECRET SOCIETY (Tuesday [April 18th]) As the name implies, this 18-piece big band is calibrated for maximum intrigue, with a sound that suggests Steve Reich minimalism as well as orchestral jazz in the lineage of Bob Brookmeyer (one of Mr. Argue’s mentors). The ranks of the band include such hale improvisers as the trumpeter Ingrid Jensen, the tenor saxophonist Donny McCaslin and the trombonist Alan Ferber. 10 p.m., Bowery Poetry Club, 308 Bowery, between Houston and Bleecker Streets, Lower East Side, (212) 614-0505; cover, $12. (Nate Chinen)

big bang

Cosmologists like big ideas. After all, their chosen subject is nothing less than the universe itself – how did it start, what is it made of and how will it end? Some of these big ideas have catchy names, such as the big bang, but others are more prosaic. However, these names can be misleading. Cosmic inflation, for example, might sound dull, but it is actually one of the boldest ideas in the history of physics and astronomy.

In a nutshell, inflation is the term used to describe an extremely short period of turbocharged expansion that happened immediately after the big bang. Moreover, after years of trying, astrophysicists have just reported the first experimental evidence that inflation actually happened. Charles Bennett of Johns Hopkins University in Baltimore and co-workers made the breakthrough after a painstaking analysis of three years of data from the WMAP satellite.

more from The Guardian here.

sorta realism


F. Scott Hess appears in many of his paintings — perhaps most provocatively in The Painter and His Daughter (2003) and Riverbed (2004) — suggesting that they are all self-portraits in principle. In Time, Mind and Fate (all 2005), the mature, hard-eyed Hess appears in softer surrogate form, without his Mephistophelean goatee and moustache, as though he were a callow youth just starting his career as a painter (the clean-shaven painter pictured is, in fact, Hess’ student) and thus innocent as to the ways of the art world rather than a seasoned veteran of its wars, holding his own in it. Hess has described himself as a “reluctant realist,” and realism is not a fashionable position, suggesting that Hess casts himself as an alienated outsider, all the more so because his realism is grounded in Old Master craft and intelligence. And, even more subtly, in an Old Master formalism — more complicated, devious and expressively insinuating than modernist formalism — that informs the narratives which mask it. For Hess is as much a formalist — and a not-so-reluctant one, as I hope to show — as a realist. Like Old Master realism, Hess’ realism speaks in symbolic tongues and formal paradoxes, which is not exactly to speak plainly.

more from Donald Kuspit at artnet magazine here.

population bomb?

For decades, the world has been haunted by ominous and recurrent reports of impending demographic doom. In 1968, Paul Ehrlich’s neo-Malthusian manifesto, The Population Bomb, predicted mass starvation in the 1970s and ’80s. The Limits to Growth, published by the global think tank Club of Rome in 1972, portrayed a computer-model apocalypse of overpopulation. The demographic doom-saying in authoritative and influential circles has steadily continued: from the Carter administration’s grim Global 2000 study in 1980 to the 1992 vision of eco-disaster in Al Gore’s Earth in the Balance to practically any recent publication or pronouncement by the United Nations Population Fund (UNFPA).

What is perhaps most remarkable about the incessant stream of dire—and consistently wrong—predictions of global demographic overshoot is the public’s apparently insatiable demand for it. Unlike the villagers in the fable about the boy who cried wolf, educated American consumers always seem to have the time, the money, and the credulity to pay to hear one more time that we are just about to run out of everything, thanks to population growth. The Population Bomb and the Club of Rome’s disaster tale both sold millions of copies. More recently, journalist Robert D. Kaplan created a stir by trumpeting “the coming anarchy” in a 2000 book of the same name, warning that a combination of demographic and environmental crises was creating world-threatening political maelstroms in a variety of developing countries. Why, of all people, do Americans—who fancy themselves the world’s pragmatic problem-solvers—seem to betray a predilection for such obviously dramatic and unproved visions of the future?

more from The Wilson Quarterly here.

old radicals


Taken together, the works selected by Verso embody the creation and development of a dissenting tradition that set out to question and subvert the established order. Yet while this was once the principal strength of these thinkers, it has become something of an Achilles heel. A collective reading exposes all that has gone wrong with radical thought in the 20th century. Traditions, and intellectual traditions in particular, rapidly ossify and degenerate into obscurantism. They have to be constantly refreshed, renovated and reinvented. It is time that radical thought broke out of its confining structures. It is time to put Adorno’s anxieties about mass culture and media to rest; to move forward from Baudrillard’s and Derrida’s postmodern relativism to some notion of viable social truth; and for criticism to stop messing about with signs and signifiers, and instead confront the increasing tendency of power towards absolutism.

more from the New Statesman here.

The Agony of Defeat

From Science:

The summer Olympics only come around every four years, and for elite athletes vying for a spot on their national teams, failure to qualify can be crushing. Now, researchers have taken a look at how the brain deals with dashed Olympic dreams. Their findings hint at a possible explanation for why athletes who’ve suffered tough losses often have a hard time getting back on top of their game.

Not surprisingly, the swimmers rated their own videos more wrenching to watch. And their brains showed signs of their emotional pain, with heightened activity in the parahippocampus and other emotion-related areas that have been implicated in depression. (None of the swimmers had a prior history of depression.) Moreover, the premotor cortex–a region that plans actions such as the arm and body movements needed to swim–seemed to be inhibited when the swimmers watched their bad race, the researchers reported here 9 April at a meeting of the Cognitive Neuroscience Society. To Davis, this suggests that bummed-out athletes might perform poorly because their premotor cortex isn’t sufficiently fired up.

More here.

The Man Behind Bovary

From The New York Times:

Novelists should thank Gustave Flaubert the way poets thank spring: it begins again with him. He is the originator of the modern novel. Take the following passage, in which Frédéric Moreau, the hero of “Sentimental Education,” wanders through the Latin Quarter, alive to the sights and sounds of Paris: “At the back of deserted cafes, women behind the bars yawned between their untouched bottles; the newspapers lay unopened on the reading-room tables; in the laundresses’ workshops the washing quivered in the warm draughts. Every now and then he stopped at a bookseller’s stall; an omnibus, coming down the street and grazing the pavement, made him turn round; and when he reached the Luxembourg he retraced his steps.” This was published in 1869, but might have appeared in 1969; many, perhaps most, novelists still sound essentially the same. Flaubert scans the streets indifferently, it seems, like a camera. Just as when we watch a film we no longer notice what has been excluded, so we no longer notice what Flaubert chooses not to notice. And we no longer notice that what he has selected is not of course casually scanned but quite savagely chosen, that each detail is almost frozen in its gel of chosenness. How superb and magnificently isolate the details are — the women yawning, the unopened newspapers, the washing quivering in the warm air. Flaubert is the greatest exponent of a technique that is essential to realist narration: the confusing of the habitual with the dynamic.

More here.