Monday, November 25, 2013
Through A Printer Darkly
by James McGirk
James McGirk works as a literary journalist and is a contributing analyst to an online think tank. The following is an imagined itinerary for a tourist vacation twenty years in the future.
Seven days in the PRINTERZONE
June 20, 2033-June 28, 2033
A quick suborbital hop to Iceland courtesy of Virgin Galactic and then it’s all aboard the ScholarShip, a luxurious three-mast schooner powered by that most ecologically palatable of sources: the wind.
Weather permitting, you and twenty of your fellow alumni will set sail for the Printerzone. (The North and Norwegian Seas can be temperamental: in the event of heavy weather we revert to backup biodiesel power.) Our destination has been recognized by UNESCO as a World Heritage Site: it is both a glimpse of what our future might become should government regulation of printers come to an end, and a fantasy of life free from credit and ubiquitous surveillance. Together we’ll spend a week immersed in this unique community, on board an oil rig in international waters, using three-dimensional additive printing to meet our every need.
Joining us on this adventure will be Prof. Orianna Braum, an associate professor of Maker Culture at Stanford University; Alan Reasor, a forty-year veteran of the additive printing industry; and a young man who prefers to refer to himself by displaying a small silver plastic snowflake in his palm.
ITINERARY - DAY ONE
A colorful day spent traversing the Norwegian and North Seas… sublime marine grays and blues stirred by the bracing sea breeze. Keep your eyes peeled for pods of chirping Minke whales! Many are 100 percent natural.
Breakfast and lunch will be served onboard The ScholarShip by our chef Matthias Spork. Selections include: printed cereals and pastas, catch-of-the-day and a refreshing sorbet spatter-printed by his wife, renowned pastry chef Rebecca Spork.
Prof. Braum and Mr. Reasor will debate: Has Three-Dimensional Printing Failed Its Promise? Reasor will argue that in most instances economies of scale and the cost of raw materials make conventional manufacturing a more cost-effective solution than 3D printing. Prof. Braum will counter, describing industries that have been radically reshaped by printing—prosthetics and dentistry, bespoke suiting and fashion, at-home robotics and auto-repair—and suggest instead that government safety regulation and restrictive intellectual property licenses have done more to stifle innovation than costs. There will be time for questions afterwards. And then a brief demonstration of piezoelectric substrates: printed materials that respond to the human touch.
Following a hearty and delicious dinner prepared by the Sporks, we invite you for hot toddy and outdoor stargazing with our First Mate. The Arctic winds can be fierce at night, so you have the option of lighting the hearth in your cabin, and viewing a very special Skype broadcast—The Pink Printer’s Naughty Apprentice—which outlines in a most whimsical and titillating way some of the more adult uses of the three-dimensional printer.
(Please note that cabins containing occupants below the age of consent in their country of residence will not receive this broadcast.)
Drop Anchor in the Printerzone
After a hot breakfast ladled out by the Sporks, join your shipmates on deck for an approach unlike anywhere else on earth: a faint glimmer on the horizon gathers in size and sprouts shapes and colors, until the magnificent muddle that is the Printerzone fills our entire field of vision. Crumpled wrapping paper on stilts, a wag once said. Squint at this glorious mass, and beneath the colorful sprays of plastic and the pieces of flotsam and jetsam the residents have creatively incorporated into their homes, you just might make out the original concrete and steel beneath.
Your daily allowance of printer substrate will be issued to you in bulk so that you may trade it for trinkets. A rope ladder will be lowered from above. One at a time you will be hoisted to the Zone. There, our guide, the man who identifies himself with the silver snowflake (henceforth referred to as [*]) shall greet us. He is an interesting specimen. Ask of him what you will. The tour begins at The Workshop, a vast, enclosed “maker space” where P’Zoners (as they call themselves) exchange goods, plans for new designs and information. Barter your substrate for unique souvenirs. Take a class in creation. Then enjoy a sandwich lunch carefully selected by the Sporks. Food may also be bartered with the natives.
After lunch you may explore the Zone at your leisure or enjoy another spirited debate between Reasor and Braum. Printerzone: Model City or Goofy Aberration? Dinner shall be served in the Workshop, which at night transforms into The Wild Rumpus. Guests in peak physical condition may want to join the carousing. (N.B. Beware of custom-printed entheogens and other libations, which, while they may be legal in the Printerzone, are not necessarily safe.)
Fresh croissants and a mug of coffee are the perfect way to begin a crisp Printerzone morning! Daring types may wish to join [*], don a protective suit printed on the city’s custom printers, and sink beneath the waves for a romp on the seafloor and a look at how the city has evolved below the waterline. Printerzone’s silver suits are said to work as well in orbit as they do submerged beneath the waves. You may examine copies of a Vogue pictorial featuring the suits.
For those who prefer a more relaxed pace in the morning, there will be a bicycle tour of the Zone’s famous hydroponic orchid nursery, its orphanage and its medical clinics (notable for, among other things, performing the first artificial face transplant). There will also be a chance to examine the city’s recycling system up close as it transforms unwanted printer output and even sewage and brine into the raw materials for printing. No stinky smells, we promise!
(All printed foods served aboard the ScholarShip are guaranteed to be free from precursor materials that were made from human waste or potential allergens.)
For lunch, if you’re ready for it, be prepared to break some taboos. Guided by [*], the Sporks, rabbis, halal butchers, vegan chefs, and a number of other experts, you will be given a unique opportunity to eat—among otherwise offensive offerings—a perfect facsimile of human flesh, pork, dolphin steak, non-toxic fugu flesh, endangered sea turtle, and even taste the world’s most potent toxins in perfect moral comfort and safety. Less adventurous offerings will also be available for the squeamish.
During lunch, Braum and Reasor will sound off on the subject of: Whether Full Employment is Possible in a post-3DP World. Braum says printing in three dimensions will kill off the middlemen who camp out in many employment categories (the warehouse managers, the marketing men…); Reasor agrees, but thinks the unfettered labor will be absorbed by innovative new industries. There will be time for questions. Coffee too.
After lunch there will be a demonstration of one of the most potent technologies to emerge from three-dimensional printing: the cheap invisibility cloak. Then you will be joined by some of the city’s most outrageous tailors, haberdashers, wig makers, and costume outfitters. Design a more colorful, eccentric version of yourself and then top off your creation with a freshly printed invisibility cloak, so that you might attend the night’s festivities in absolute comfort. You need only reveal yourself to those you want to. Buffet dinner. Brandy against the chill.
(N.B. Printerzone security forces are equipped with night-vision goggles, so rest assured that you will be safe, but don’t get any antisocial ideas. There are some rules to abide by!)
Pondering the Printerzone
On our fourth day, after a healthy, all-natural breakfast lovingly prepared by the Sporks on the ScholarShip, we delve into the Printerzone’s more pensive side. [*] will lead us on a tour of the Million Memorials, the serene necropolis where the city’s mourners print chalky likenesses of friends and family they’ve lost, and missing objects and abstractions too. A quiet, haunting place. After a pleasing serenade by the P’Zone wailers, we picnic among the monuments, hear [*]’s own story of loss—his young bride who slipped over the railing during a photo session and drowned in the ocean—and gaze at the spun plastic residue of a brief but happy relationship. Afterwards, stroll back to The Workshop for a chance to barter for more amusements.
The subject of the day’s lecture (delivered, of course, by Braum and Reasor) will be: Three-Dimensional Printing in the Developing World. Printing won’t be the panacea we think it will be, because the developing world lacks the infrastructure to sustain itself; but surely the availability of items that would otherwise have been unavailable is valuable—but then what about the cottage industries that would be eradicated by printing? Wouldn’t that snuff out any printing-related development? Drink during the lecture if you like. Gaze longingly at potential mates if you wish to. This is a pleasure cruise.
After a brief question and answer session, a fittingly austere supper will be served, and [*] will introduce us to a non-profit initiative sponsored by the Printerzone: a crisis response team that will race to trouble spots and, without the needless hassle of lines of communication and supply, be able to provide surgical equipment, medicines and shelter at a fraction of the cost… cost? Yes, even this barter-driven economy is soliciting funds. Contribute what you will. The city’s orphans hand out orchids.
Snack before the Wild Rumpus. Serenade. Custom sex surrogates printed for an additional fee. (Please: no printing of lecturers, crewmembers, or fellow travelers without their express permission, and no skin prints using DNA within a 15 percent match of your own.)
At home in the Printerzone
Many travelers wake on their fifth day beside a grim memory, manifest in the form of slightly abused piezoelectric plastic. You may find it cathartic to batter your unwanted surrogate to pieces, or, if you are the showy sort, enter the surrogate into the ring for gladiatorial combat. The festivities begin with a squabble between Braum and Reasor’s creations (one wonders at the tension between them), followed by a battle royal, and a moving speech by [*] about whether or not a surrogate has a soul. Each participant will be allowed to download a copy of Do Androids Dream of Electric Sheep? for later review.
By now you’ve spent nearly a week looking up at the frills wrapped around the upper decks of the rig. Perhaps you’ve wondered what the lives of the residents are like beyond the Wild Rumpus or the Workshop floor. Today you’ll enjoy an intimate glance at their living quarters.
Some might find this disturbing. There are children here, you might say, how could one live like this? But they’re hardly cut off; well, maybe they are cut off from nature and history and dry land but not the ‘net. See the data goggles they wear? The tykes and pubers who strut about the Zone have come to see the boundary between what is virtual and what is not as a thing much more permeable than you or I.
Here the Internet is inside out. People print virtual things. Shudder at the home robots with their suction cup attachments. Are they vacuum cleaners or sexual abominations or both? Much of the home décor won’t make sense unless you’re jacked into the ’net. Too prone to data dropsy to peer through a lens? Ask yourself why this trip appealed to you in the first place, but fear not—there are gentle entheogens that replicate the experience of data being blazed onto your eyeballs.
Nighttime. Rumpus again. Dance and flail until you feel yourself dissolve into the communal flesh. The Sporks have taken the day off. Truth be told they’re disgusted with three-dimensional printing and what it means for their profession. Can you blame them? Who cares, you aren’t hungry. From up high, the Zone looks terraced and circular like a medieval etching of The Inferno. The Rumpus looks like the writhing of the damned. You think you see Braum and Reasor embrace. [*] sits beside you and tells you his given name was Virgil. Has he been drugging you?
Beyond the Printerzone
Someone wakes you up by firing a pistol in the air. That’s right, there are a lot of weapons here. This is a polite society. Ugh, the sunlight streaming into your eyes is sheer agony. Your neurons are crying out. Caffeine! Dopamine! Serotonin! You wobble out on deck. The Sporks are back. Thank God the Sporks are back. They pour you a mug of coffee. They cut you a grapefruit. Crackling bacon, the smell of bread baking.
[*] won’t look you in the eye, the sweaty creep.
Above you the colorful plastic printed houses look chintzy in the light. They hoist you up. Peek below. The ScholarShip is an oasis of sanity and earthtones. Everything else is Technicolor Burp. Can you really face another day of this? The medic gives you something for your throbbing head. A party assembles. Wrapped sandwiches for lunch and shot-glasses of Astronaut Ice Cream. A hardhat. That silver protective garb you’ll have to peel off afterwards. The place stinks of kerosene (that’s jet fuel, someone will say). There are men from NASA, and men from the Air Force, and men with helmets that look like they’re made entirely from mirrorshades. Cyclopses. You want to leave. There’s a faint but unmistakable rumble.
Reasor and Braum waddle to the front of your party. Another debate: Space Exploration is Three-Dimensional Printing’s Killer App. This time they both agree. Reasor thinks the way to reach for the stars is to print a massive cable and haul ourselves up. Braum says that’s great, but what’s better is that you can go anywhere in space and print anything you could possibly need. You can beam plans to the spaceship, plans for things that weren’t invented when the ship took off. Applause. Time for questions. Cups of coffee. Cookies.
Wonder: what if printers were used to print infinite printers?
Clutch your mug. Look around. The top level is cold and metallic. Limp suits hang waiting; rows of silver helmets that look like Belgian glass globes wink in the setting sun. Rockets lie half-assembled from spools of plastic: fins, nose caps, nozzles, streamlined bellies. Dinner is splendid and sober. You remember little of it. There were candles. An ant walked across the table.
Tonight there is no Wild Rumpus. You sleep on the rig, beneath the stars but protected by an infinitesimal layer of plastic. A storm blows in. Electricity rips the Arctic sky. Rain pounds plastic but never touches you. You are woken by a helmeted Cyclops: “Some visitors decide never to leave,” he says, extending a gloved hand. It’s silver. “We’ll nourish you.” Behind the smooth surface you can just make out the blurry face of [*].
Wake to the smell of the Sporks’ cooking. A printed snowflake has been placed beside you. Visitors may opt to extend their stay. Or leave and never, ever come back.
Monday, November 18, 2013
Homo Erectus, or I Married a Ham
by Carol A. Westbrook
My husband loves big erections. Don't get me wrong, I'm not speaking here about Viagra, I'm talking about tall towers made of metal, long wires strung high in the sky, and tall antennas protruding from car roofs. He loves anything that broadcasts or receives those elusive radio waves, the bigger the better. That is because he is a ham, also known as an amateur radio enthusiast, and all hams love antennas.
Amateur radio has been around since the early 1900s, shortly after Marconi's first transatlantic wireless transmission in 1901. Initially, radio amateurs communicated using Morse code, as did commercial radiotelegraphy, but voice transmission quickly gained in popularity. In order to broadcast on the ham radio frequencies, hams must obtain an amateur radio license from the FCC, and a unique call sign, their ham "name." Proficiency in Morse code was long required in order to obtain an amateur radio license, but this requirement was finally dropped (internationally in 2003, and by the FCC for all license classes in 2007), which opened up the field to many more interested radio amateurs, my husband being one of them. As a result, the hobby is becoming popular again. There are local clubs to join, as well as national get-togethers called "hamfests" where there are lectures, demonstrations, equipment swap-meets, and licensing exams.
What do hams do? They communicate by radio. They use everything from a battery-powered hand-held transmitter to a massive collection of specialized radio equipment located in a corner of their home or garage, which they call their "ham shack." (See picture of my husband's ham shack, above, in his library). They talk to other ham radio operators, and participate in conversations that may be local or span the globe, depending on the radio wavelength, the power of their transmitter, and their antenna. And they erect large antennas, perhaps on an outside tower or the roof of their home.
Like Marconi, hams learn early on that it's relatively easy to send out a radio signal, but the distance it travels depends as much on the size and configuration of the antenna as it does on the signal strength. There is an art to constructing an antenna, and hams spend a great deal of effort on it. That is why hams are fascinated by antennas. They are the quintessential "homo erectus."
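The link between frequency and antenna size that hams wrestle with can be sketched with the standard rule of thumb for the simplest wire antenna, the half-wave dipole: its end-to-end length in meters is roughly 143 divided by the frequency in MHz (the free-space half wavelength of 150/f, shortened about five percent for "end effect"; the familiar 468/f gives the answer in feet). A minimal sketch in Python, with illustrative band frequencies chosen by me rather than taken from the essay:

```python
def half_wave_dipole_length_m(freq_mhz: float) -> float:
    """Approximate end-to-end length (meters) of a half-wave dipole.

    Uses the common ham rule of thumb 143 / f(MHz): the free-space
    half wavelength, 150 / f(MHz), shortened about 5% for "end
    effect" (equivalent to the 468 / f(MHz) formula in feet).
    """
    return 143.0 / freq_mhz

# Why hams end up stringing long wires across the house and garden:
for band, freq in [("20 m", 14.2), ("40 m", 7.1), ("80 m", 3.6)]:
    length = half_wave_dipole_length_m(freq)
    print(f"{band} band (~{freq} MHz): dipole of about {length:.1f} m")
```

The arithmetic makes the essay's point concrete: the lower frequencies that carry farthest demand physically larger antennas, which is one reason serious HF work means towers, rooftop wires, and the occasional stealth antenna among the roses.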
My husband's fascination was fueled by his boyhood days. In the 1950's he felt isolated from the outside world because his family's radio and TV could only receive a few stations, living as they did in an a valley surrounded by the Pocono Mountains. He learned that he could receive more stations by stringing long wires throughout the house, or on the roof -- creating his own makeshift antennas. This led to an engineering degree, an interest in telecommunications, and a ham radio license.
Our houses are festooned with antennas. We have long wires strung from roof to garage, a small tower on the hillside, four large parabolic dishes, from 6 to 11 feet in diameter, that receive signals from transmitting satellites... but that's another story. We even have a stealth antenna in our garden which, to the casual observer, appears to be just another garden ornament, nestled among the roses. (See picture) Unlike other "ham widows" I don't mind these antennas -- they are certainly conversation pieces. I do not have a ham license--I didn't pass the exam, but then again I didn't study for it. But I often go along with my husband to hamfests, including the famous Dayton Hamvention, which takes place every May.
What is so appealing about ham radio? Why spend your time and money to buy archaic equipment and erect antennas and mess up your house -- when you can just call on your cell or Skype your friend? The answer is simple -- because you can. As a hobbyist, you cannot easily make a microchip, or build a cell phone, or create your own internet, but you can assemble your own equipment and broadcast your own voice, around the world. Just like Marconi! What a high! What a sense of empowerment! And ham radio is a great hobby for youngsters who want to learn about the electrical and mechanical world, and enjoy the challenge of "getting out of the valley" using their own ingenuity and design. If you would like to learn more, contact the national association for amateur radio, the American Radio Relay League, to learn how to get involved, or visit their headquarters and museum at 225 Main Street, Newington, CT 06111-1494, USA. You might get hooked, too.
Monday, October 14, 2013
The Uses and Disadvantages of History for Ecological Restoration
Context: One of the newer biological conservation strategies, ecological restoration, attempts to reverse the degradation of lands set aside for conservation purposes by reinstating, as closely as possible, the species and environmental conditions that existed before recent and large-scale disturbances by human activities. A newly emerging framework within restoration ecology - the novel ecosystem paradigm - points out that with global change we are moving into an era for which there is no historical analogue. As a consequence land must be managed without excessive regard for the past, which can no longer serve as our guide. This has generated a lot of controversy within the field. I was asked by Irish journalist Paddy Woodworth to speak on a panel on “The historical reference system: critical appraisal of a cornerstone concept in restoration ecology” at a conference of the Society for Ecological Restoration held in Madison, October 6–11, 2013. In recent articles and in his new book, “Our Once and Future Planet: Restoring the World in the Climate Change Century,” Woodworth had been critical of the novel ecosystem paradigm, wondering if it does not undermine the case for restoration. I had not realized how controversial the topic had become. Tensions at the conference were running high, and the room in which this panel convened was over capacity with dozens turned away. What follows is the outline of my remarks at this session.
At first glance the work of Friedrich Nietzsche (1844–1900), the German philosopher, might not seem especially helpful for restoration ecologists or indeed for anyone contemplating our relationship with the natural world. After all, his work supposedly challenges the foundations of Christianity and traditional morality. Nietzsche’s famous locutions concerning the “death of God” and his extensive discussions of nihilism should, however, be seen as his diagnosis rather than his cure. For Nietzsche our real cultural task is to overcome the annihilation of traditional morality, replacing it with something more life-affirming. The failure of our traditional precepts of value stems from the fact that these express what Nietzsche calls the ascetic ideal. This ideal measures the appropriateness of human actions against edicts coming from beyond our natural and earth-bound life. The highest human values, as we traditionally assess them, came from a denial of our natural selves. Nature, in turn, is regarded as having no intrinsic value.
Thus Nietzsche, even when he wrote in areas seemingly distant from traditional environmental concerns, has useful things to say to us environmentalists. At times, in fact, his aphorisms are those of a poetic naturalist. In The Wanderer and His Shadow (1880, collected in Human, All too Human) he wrote “One has still to be as close to flowers, the grass and the butterflies as is a child, who is not so very much bigger than they are. We adults, on the other hand, have grown up high above them and have to condescend to them; I believe the grass hates us when we confess our love for it.” This is not, of course, to claim that Nietzsche is a traditional naturalist. His concerns are primarily about the thriving of human life, though in this he seems less like a traditional wilderness defender and closer to a contemporary sustainability advocate who seeks to locate a promising future for humans while simultaneously solving environmental problems.
A central device in Nietzsche’s work is a type of thought experiment about eternal recurrence of the same: the thought of a pure and perpetual restoration. An early use of the thought is in The Gay Science (1882). There he wrote: “This life, as you now live it and have lived it you will have to live once more and innumerable times again; and there will be nothing new in it, but every pain and every joy and every thought and sigh and everything small or great in your life must return to you, all in the same succession and sequence—even this spider and this moonlight between the trees and even this moment and I myself.” There are those — do you count yourself among them? — who might welcome this. For many of us, however, the prospect of the same sequence playing over and over again would crush us.
In some ways eternal return asks us how much history we can tolerate. In what circumstances does embracing the past testify to our strength: the ways we are disposed to ourselves and to life? And if we cannot take on the entire weight of history, how much of it are we prepared to take on: a little, a lot? The question of what to do with history is considered by Nietzsche in an 1874 essay entitled On the Uses and Disadvantages of History for Life: the essay from which I take my title. In it Nietzsche decries a style of knowledge acquisition for the sake of knowledge alone. This desiccated strategy ends up sapping our vital impulses. But it doesn’t have to be this way. Nietzsche, memorably, wrote that history can be related to the life of a person in three ways: “it pertains to him as a being who acts and strives, as a being who preserves and reveres, as a being who suffers and seeks deliverance.” These are Nietzsche’s “three species” of history: the monumental, the antiquarian and the critical species.
Restoration is always a game that we play with time. Ecology has a history of being overly confident about that which is genuinely perplexing to other disciplines, namely time. There is a long-standing suspicion among philosophers that time, as such, is meaningless. The British philosopher John McTaggart (1866–1925) famously pronounced the unreality of time. The argument, briefly, is that every event is both past and future, and thus there can be no coherent ordering of events. The observation that an event is not simultaneously past and future relies itself on the ordering that it is trying to explain, creating a vicious circle. Restorationists, however, have a refreshing lack of interest in abstractions such as these. We are concerned, rather, with the degree to which we should incorporate the past into our plans for the future — this is the essence of debates about the use of historic reference systems.
The connection between restoration and history is obvious in the case of classical restoration, defined by the SER International Primer on Ecological Restoration as “the process of assisting the recovery of an ecosystem that has been degraded, damaged, or destroyed.” All those “re” and “de” words etymologically reveal their indebtedness to the past. The prefix “re,” for instance, derives from the Latin meaning ‘back’ or ‘backwards’. Ecological restorationists’ concern for the past is not, of course, necessarily about the past for its own sake, but on behalf of a suite of reasons connected with our direct human needs as well as with the discharging of our ethical obligations to the biosphere. As Dave Egan and Evelyn A. Howell phrased it in The Historical Ecology Handbook: A Restorationist's Guide To Reference Ecosystems (2001): “A fundamental aspect of ecosystem restoration is learning how to rediscover the past and bring it forward into the present – to determine what needs to be restored, why it was lost, and how to make it live again.” In William Jordan III’s strict definition of “ecocentric restoration” — “restoration focused on the literal re-creation of previously existing ecosystem, including not just some but all its parts and processes” — this seemingly impossible grappling with the past generates a broad range of values, some of which we will never get by ignoring the past.
In Making Nature Whole: A History of Ecological Restoration (2011) Jordan wrote: “The motives behind this new and in some ways odd enterprise [of ecocentric restoration] were complicated: a mixture of curiosity, scientific, historic, and aesthetic interest, nostalgia, and respect for the old ecosystems, together with the idea that the old ecosystems are ecologically privileged assemblages of organisms, endowed with distinctive qualities of stability, beauty, and self organizing capacity, and so might be useful as models for human habitat.” Jordan’s work invites us to deal with the full blast of history, to endure it for the sake of the “classic ecosystem” which otherwise won’t survive, and by enduring to understand better our current relationship with the rest of the natural world. In Jordan’s work, failure is an option — sometimes indeed, failure may be the very point.
Let us engage in a little Nietzschean thought experiment of our own. If an ecological manager from today were transported to the future and shown three sites: one minimally influenced by human activity (assuming that such a thing exists), one classically restored, and one that had been classified at the time of the manager’s departure as a novel ecosystem, the manager would not be able to distinguish with certainty, based solely upon an inspection of their respective ecological properties, one category of site from the others.
Contemporary ecologists abandoned generations ago any expectation that natural systems, even those uninfluenced by human activity, are static. In the absence of human intervention, ecosystems will change, according to some accounts at least in episodic ways, as one ephemerally stable condition gives way to the next. Each stage will be characterized by species combinations that are largely historically unprecedented, as paleoecologists have documented for systems since the Quaternary and even before. Attempts, therefore, to predict the future of “natural” communities are prone to error. The future is indeterminate. In this, ecologists agree with an emerging philosophical consensus that the past is more real than the future, and the present moment the most real of all.
Nor will the future condition of a restored system be readily identifiable to today’s manager. If our time-traveler has with her the SER Primer on Restoration Ecology, an inspection of the expected properties listed there for identifying a restored system would confirm that this difficulty must be the case. Identifying which species of a future assemblage are indigenous — in restored systems the majority of species should be natives according to our contemporary standards — becomes more difficult the further into the future we project. Over sufficiently long time scales, evolutionary forces come into more pronounced play. Additionally, it is conceivable that species not at present within the biogeographic range of a system may become so in due course without human intervention. Thus naturally altered vegetation patterns may not be easily distinguished from those caused by deliberate or inadvertent human introductions. Ultimately, the difficulty that our time-traveler will have in identifying today’s restoration efforts projected into the future arises because current restoration thinking acknowledges, as it should, that communities are dynamic, and sound contemporary management practice should not seek to curtail this dynamism.
A novel system is defined by Hobbs, Higgs and Hall in Novel Ecosystems: Intervening in the New Ecological World Order (2013) as “a system of abiotic, biotic and social components that, by virtue of human influence, differ from those that prevailed historically, having a tendency to self-organize and manifest novel qualities without intensive human management.” A novel system that is currently under management, no matter how minimal (the absence of intensive management being a defining aspect of novel systems), would likewise be difficult to distinguish from sites under restoration management or merely undergoing long-term successional change. All sites are subject to the vagaries of dynamic but unpredictable change. One manager’s failed restoration project, or natural successional system, is another’s future novel system.
At first glance one might be inclined to say that the novel ecosystem is an ahistorical concept: history in a deficient mode, history conspicuous by its conscious absence. But there is more history involved in the identification of a novel system than might at first be obvious. The identification of novelty depends upon historical analysis. A determination is made, by a historically informed person, that these systems are not classically restorable and have certain emergent properties of value and are therefore worth studying, conserving, and managing, albeit non-intensively. Although, as we noted, novel ecosystems are defined by their lack of need for intensive management, nonetheless when a novel system is providing conservation services and generally functions in a manner that is pleasing, a management regime may be instituted. As soon as this management is enacted the novel ecosystem is thereby governed by a historical reference system, even if the historical moment being referred to is but a few moments in the past.
The conclusion that these systems cannot be identified without context should not be interpreted nihilistically, nor should it demotivate us. The point I am making here is that history matters regardless of which paradigm of restoration prevails. The engagement with history can be done objectively, but it generates important subjective values. That the novel ecosystem is enmeshed in history is acknowledged by its proponents. Richard Hobbs and colleagues wrote: “there is a gravitational pull in our discussions towards historical conditions. In acknowledging novel ecosystems, it is plain that this gravitational pull is sometimes very weak; it remains, however, if only as a reminder that the past matters and has mattered.”
It is turtles all the way down, and those turtles are history!
I want to give the last words to Nietzsche. In his view, stretched between vast forgetfulness and the stultifying horrors of forgetting nothing, is a level of reckoning with history that may be helpful for life and for restoration. Though, as Nietzsche wrote, “Forgetting is essential to action of any kind,” restoration, whether classic or associated with novel system management, is always about history, and must therefore reckon the costs of both deliberate but empowering forgetfulness and value-creating but expensive commemoration. Cows, Nietzsche wrote, “do not know the difference between yesterday and today …and thus [are] neither melancholy or bored.” The downside, one supposes, is that neither do they know joy nor beauty, and when all is said and done, they are, after all, cattle! An oversaturation with history, on the other hand, can be inimical to life. Nietzsche lists many reasons why too much history can be dangerous (I mention only the one that most pertains to us): it implants a belief, harmful at any time, in the old age of mankind, the belief that one is a latecomer and epigone. The past swells behind us, and though it is tempting to think that everything was so much better last week, last year, in previous ages, it would be deadening to think of ourselves as anything but a vernal species with a promising future ahead of us. In some cases we draw strength and value from total recall, but there are times we must know when to forget. Lord, grant us the wisdom to discern when it is best to remember and when best to forget.
Monday, September 30, 2013
Food and Power: An Interview with Rachel Laudan
All photos courtesy of Rachel Laudan
Rachel Laudan is the prize-winning author of The Food of Paradise: Exploring Hawaii’s Culinary Heritage, and a co-editor of the Oxford Companion to the History of Modern Science. In this interview, Rachel and I talk about her new book, Cuisine and Empire: Cooking in World History, and her transition from historian and philosopher of science to historian of food.
Rachel Laudan: I can remember when there was no such discipline as history of science! In fact, moving to history of food was a breeze. After all, the making of food from plant and animal raw materials is one of our oldest technologies, quite likely the oldest, and it continues to be one of the most important. The astonishing transformations that occur when, for example, a grain becomes bread or beer, or (later) perishable sugar cane juice becomes seemingly-eternal sugar have always intrigued thinkers from the earliest philosophers to the alchemists to modern chemists. And the making of cuisines is shaped by philosophical ideas about the state, about virtue, and about growth, life, and death.
A lot of food writing is about how we feel about food, particularly about the good feelings that food induces. I'm more interested in how we think about food. In fact, I put culinary philosophy at the center of my book. Our culinary philosophy is the bridge between food and culture, between what we eat and how we relate to the natural world, including our bodies, to the social world, and to the gods, or to morality.

EH: Your earlier book, The Food of Paradise, necessarily dealt with food politics and food history. So many cultures were blended into local food in Hawaii. I treasure that book -- almost a miniature of what you’re doing in Cuisine and Empire.
RL: Well, thank you. It came as a surprise to me that I had a subject for a book-length treatment of something to do with food or cooking -- as interested in the subject as I certainly was. The only genre I knew was the cookbook, and I am not cut out to write recipes.

The book was prompted by a move to teach at the University of Hawaii in the mid 1980s. I went reluctantly, convinced by the tourist propaganda that the resources of the islands consisted of little more than sandy beaches and grass-skirted dancers doing the hula.
I couldn't have been more wrong. These tiny islands, the most remote inhabited land on earth, have extraordinarily varied peoples and environments. They were an extraordinary laboratory for observing the encounter of three radically different cuisines inspired by totally different culinary philosophies.
EH: It wasn’t all that long ago -- going on 18 years -- but you were a pioneer in the approach you took. It was history, not a compendium of anecdotes. And it was a treatment of culinary philosophies. Was there anything to tell you it would be so well received?
RL: Not at all. Mainland publishers were interested only in a book with exotic tropical recipes. I wanted to use the recipes as illustrations of how three cuisines were merged into a fusion cuisine called Local Food. Readers were welcome to cook from them, but that wasn’t their point.

The University of Hawaii Press, after some anguishing about whether a mainlander could write a book about the politically touchy subject of foods in Hawaii, took the manuscript. So I was bowled over when it won the Jane Grigson/Julia Child prize of the International Association of Culinary Professionals.
EH: Any publisher might have had more confidence, originally, in your cultural sensitivity, if they’d seen how many cultures you had by then participated in. And the list has grown. You’ve really gotten around.
RL: I have had the luck to have been successively immersed in four distinct cultures: those of England, the United States mainland, Hawaii, and Mexico. Growing up in Britain, I ate the way that many foodies today dream about: local food, entirely home cooked, raw milk from the dairy, home preserved produce from the vegetable garden. I never saw the inside of a restaurant until my teens. When I was 18, before I went to college, I spent a year teaching in one of the first girls' high schools in Nigeria, something that I later realized taught me a lot about the food of that part of the world. In addition, I have lived, shopped and cooked for periods of months in France, Germany, Spain, Australia, and Argentina.
EH: Were you always teaching?
RL: Not always. My husband Larry Laudan and I left academia of our own free will when we were in our 50s, thinking it would be exciting to try something different. We thought lots of others would do the same, but no. It turns out that is unusual.
EH: Unusual, I’ll say! How did you make the shift not only to a new field, but to a more independent life as a scholar and writer?
RL: At the time, I decided to put in cold calls to people I thought were doing interesting work: Joyce Toomre, Barbara Wheaton, and Barbara Haber, who were working on Russian, French, and American food history in Cambridge, Mass.; Alan Davidson, founder of the Oxford Symposium on Food and Cookery in England; Gene Anderson, the anthropologist and historian of Chinese cuisine; and the food writer Betty Fussell and the nutritionist Marion Nestle in New York. They could not have been more encouraging, inviting me to speak, asking me to join their groups, calling from England, and introducing me to others, including Elizabeth Andoh, expert on Japanese cuisine, and Ray Sokolov, then working for the Wall Street Journal, who had just published Why We Eat What We Eat, a book that examined long-distance exchanges of food. I was buoyed by this sense of community as I jumped fields and left academia.
EH: You weren’t even thinking whether the history of food was a serious area of study, were you?
RL: Not at all. I’ve always believed that if you can show people you are on to an important problem and have things to say about it, they will listen. Soon after I began working on food I spent a year as a research fellow at the now-defunct Dibner Institute for the History of Science and Technology at MIT. There, to the horror of many, I proposed a seminar on the European culinary revolution of the mid-seventeenth century, when main dishes flavored with spices and sugar, and the acidic, often bread- or nut-thickened sauces of the Middle Ages, were abandoned. They were replaced by a rigid separation of salt and sweet courses and sauces based on fats, as well as by airy drinks and desserts. This was the beginning of high French cuisine.
I argued that this was due to the replacement of Galenic humoral theory by a new theory of physiology and nutrition deriving from the work of Paracelsus and accepted by the physicians in the courts of Europe. Once it became clear that my theory could account very precisely for the change in cuisine, they were all ears. A scholarly version won the Sophie Coe Prize of the Oxford Symposium on Food and Cookery and was published in the pioneering food history journal, Petits Propos Culinaires. And a popular version was later published by Scientific American.
EH: I am moved and impressed that you left academe with a plan. Many people would have just waited by the phone rather than build a new network. Yet your central concerns, as an independent scholar, remained the same as when you were teaching, and have come to full fruition in Cuisine and Empire. Food and technology need to be considered together, do they not?
RL: Indeed they do. Food, after all, is something we make. Plants and animals are simply the raw materials. We don't eat them until we have transformed them into something we regard as edible. Even raw foodists chop, grind, mix, and allow some heating. So I could bring to food history the hard-won conclusions of historians of technology.
EH: What are historians of technology mainly concerned with?
RL: Well, historians of technology are not primarily concerned with inventions. The infamous light bulb was useful only as part of a whole electrical system. Similarly soy sauce, say, or cake, have to be understood as part of whole culinary systems or cuisines. When these are transferred, disseminated, copied, they change the world.
And, perhaps most important, new ideas prompt changes in technology. They cause cooks, for example, to come up with or adopt new techniques. As the shift to French high cuisine shows, if people change their minds about what healthy food is, they will change their cuisine. When they adopt new religious beliefs, Buddhism or Christianity, say, they abandon meat cooked in the sacrificial fire for enlightenment-enhancing foods such as sugar and rice in the case of Buddhism, or for periods of fasting in the case of Christianity. When they reject monarchy as a political system, as happened in republican Rome, the early Dutch republic, and the early United States, they reject the extravagant dining associated with reinforcing kingly or imperial power.
So a large part of the book is dedicated to laying out the culinary philosophy underlying each of the world's great cuisines. When that culinary philosophy is transformed, so is the cuisine.
EH: Ah! Just one reason I am so excited about Cuisine and Empire is that I cannot think of anyone else who could take all this on, even if they thought to.
RL: My background in history of science and technology was a big help. It had become clear that science was not simply one damn experiment and discovery after another but was shaped by great traditions of scientific inquiry such as atomism or Newtonianism or, to turn to my specialty of geology, uniformitarianism. And I had explored the parallels between science and technology as cognitive systems, arguing that technology too was not just one invention after another but was shaped by traditions of knowledge that, for example, specified materials, techniques, and ways of handling them in, say, the evolution of gearing, or interchangeable parts, or jet engines.
My experience in Hawaii had already suggested that there were far reaching traditions in food too. So I asked “If even the history of the foods of Hawaii has to be told in terms of the cross-oceanic, cross-continent expansion of a few great culinary traditions, might not that also be true of world food history?"
Cuisine and Empire answers that with a resounding yes. It's possible to capture most of food history in the last 20,000 years by talking about the expansion of about a dozen different cuisines.
EH: I will be thinking about this book for years and years. I’m already starting to wonder what broad cultural assumptions, that I’ve never thought to identify, much less question, I must bring with me when I cook... These are assumptions about science and technology, too, because science exists within culture. Despite how well prepared -- I want to say uniquely prepared -- you were for writing Cuisine and Empire, it was a tremendously ambitious project, was it not?
RL: It was ridiculously ambitious.
EH: Now, this is a question everyone who writes will understand. Did it ever seem so huge and unwieldy you wanted to chuck it?
RL: More times than I care to admit. What was I writing about? Farming? Cooking? Dining? What were the big turning points? And what about all the regions such as Central Europe and Southeast Asia that got short shrift? On the other hand I had the wonderful gift of time to take on a big project and I didn’t want to fritter it away. So I gritted my teeth, kept re-working my organization, telling myself I was as well prepared as anyone.
EH: How so?
RL: On the practical side, I had grown up on a working farm. And I learned early on that cooking was just as important as farming. One of my earliest memories was the day my father decided he would make bread with the wheat he had grown. At the time, there was no internet to look up how this might be done. He put the grain in a mortar and pounded it with a pestle. Nothing but flattened grains, even though many of the archaeologists in our part of the world assumed without experimenting that that was how it was done. He screwed the meat mincer on to the side of the large kitchen table and put the grains through that. Nothing but little lumps. Finally, he put a handful of grains on the flagstone floor and attacked them with a hammer. Fragments scattered all over the kitchen, but still no flour. With barns full of wheat, we could have starved because we did not know how to turn wheat into flour to make bread.
Later I had the chance to shop and cook in Europe, Australia, the USA and Mexico, so I had a pretty good grip on a variety of cuisines. In Nigeria and Hawaii, I had experienced cuisines based on roots, not grains. At the University of Hawaii, I taught a wildly popular hands-on world history of food, learning a huge amount from my students, almost all of them of Asian ancestry. And in Mexico, women taught me what my father couldn’t, namely how to grind grains into flour.
On the intellectual side, in the course of my academic life I’d also taught social history, an eye-opener about what life, including diet, was like for ordinary people until very recently. And at the University of Hawaii, with its polyglot population, I’d had a chance to talk with many of the pioneers of world history.

EH: Unlike when you were writing The Food of Paradise, was there also a wave to catch? In the form of other like-minded scholars and writers at work?
RL: A wave? If there was, it was more in world history than in food history, which in spite of the efforts of some fine scholars, did not really become mainstream until a few years ago. World historians such as William McNeill, Philip Curtin, Alfred Crosby and Jerry Bentley -- the latter my colleague at Hawaii -- were drawing on decades of detailed historical scholarship to see if they could trace big patterns of disease, warfare, enslavement, ecological change, and religious conversion.
Why shouldn't I jump into the fray and see if there were big patterns to be traced in food? Surely it was just as important in human history as their topics. I'd always loved making sense of masses of complicated data. Now here was a real challenge.
EH: Rachel, I expect lots of readers for your book. Which other books do you think it will be on the night table with? I’m thinking particularly of Michael Pollan and Bee Wilson -- is there a cogent comparison? I note Paul Freedman blurbed your book, by the way -- along with Naomi Duguid, Anne Willan, and Dan Headrick. Gee, good company!
RL: Well, if mine ends up on the night table with these books, I will be tickled pink. And I think it complements them nicely. Michael Pollan's recent book, wonderfully written as always, is a long meditation on contemporary cooking. I differ from him in not drawing a sharp distinction between cooking and processing. Processing (pre and post industrial) and cooking are on a continuum of stages in food preparation. Bee Wilson's delightful book is also about cooking and full of wonderful historical insights as befits a historian. But whereas she treats themes such as knife, fire, and measure, I organize by the origin, spread, and transformation of cuisines. In my wildest dreams, I would like to think of this as the historical counterpart to Harold McGee’s On Food and Cooking.
EH: Readers will be intrigued by your historical treatment of “processing.” It’s become a bad word –- code for turning food into non-food. I regularly read your blog, so I know you mean it a certain way that looks at the very big picture, including labor economics. But the food you personally like is emphatically not processed…
RL: Not if you limit “processed” to what many call junk food. I’ve never acquired a taste for fast-food hamburgers or soft drinks, have never eaten Wonder Bread or its siblings, and cook at home six nights out of seven. Picky is what I am. At the same time though, I think that we hinder our understanding of food if we don’t understand that all our food, with the exception of a few fruits, has been transformed, that is, processed, before we eat it. The foods that humans eat are one of their greatest creations, one of their greatest arts in that dual sense of technique and aesthetics, and we should celebrate that they are artifacts, not bemoan it. Like all human creations, some foods are better than others, and should be judged as such, but they are all creations.
EH: So there! How do cuisines speak to you personally -- as someone who loves food and cooking? If a cuisine does reveal a culture, then would tasting and analyzing it be as telling as listening to a poem or seeing a drama?
RL: Absolutely. Every time you go into the kitchen, you take your culture with you. As you plan a meal for guests, say, you bring to it assumptions about how to mesh their preferences with yours, about how much it is appropriate to spend on the meal, about how to accommodate their religious or ethical food rules, and about what they believe to be healthy and delicious.
I like to play a little game with myself when I go to a different country or meet someone from a different background. Knowing the history of that place or the heritage of that person, can I guess what the cuisine will be like? Or conversely, if presented with a meal, can I read it, dissecting, say, the noodles, the condiments, and the meat to tell a story about how it evolved over the centuries? And the answer is almost always yes.
EH: What holds a cuisine together?
RL: Again it was Hawaii that gave me the clue. It was not the local plants and animals, because Hawaii had almost nothing edible before humans arrived. It was systems of belief or ideas or culture. The Pacific Islanders all valued taro, which had a place in their traditional religion, and they all had a variant of the same herbal medicine. The Asians (apart from the Filipinos) had all been touched by Buddhism, with its veneration of rice, and all subscribed to some form of humoral theory. And the Anglos came from a Christian tradition that placed high importance on raised bread, and they followed modern nutritional theory.
EH: You have empires in the title, but you haven’t mentioned them yet. Where do they fit in?
RL: Empires have been the most widely spread form of political organization and as such the major theater in which cuisines have been created and disseminated. It's not a case of one empire, one cuisine, though. Because aspiring leaders always copy and adapt the customs of what they see as successful rivals, cuisines were copied and adapted from one empire to another. In the ancient world, for example, Persian cuisine was copied and adapted by the Indians and the Greeks, and then the Romans copied and adapted Greek cuisine.
EH: So cuisines spread from empire to empire. Is it a coherent story all around the world?
RL: Amazingly, yes. Beginning with the first states, interlinked barley-wheat cuisines underpin all the early empires. Then in the next phase, Buddhism transforms the cuisines of eastern Asia, followed by the Islamic transformation of cuisines from Southeast Asia in the east to parts of Africa and Spain in the west (and the shaping of the Catholic cuisines of medieval Europe), and Catholic cuisines transform the cuisines of most of the Americas in the sixteenth century. Protestant critiques open the way to modern cuisines in Europe, with the rest of the world quick to make similar changes. Protestant-inspired high French cuisine becomes world high cuisine, and Anglo cuisines create a middle way between high and humble cuisines, a middle way that is copied from Japan to Latin America in the late nineteenth century. Although there are countless wrinkles, exceptions, and idiosyncrasies, at the core is a simple, coherent story of a few big families of cuisine and three major stages.
EH: If empires spread cuisines, does the reverse apply? Does food affect the success of empires, or smaller states? I have read in Jared Diamond about food affecting the success or failure of a whole society – the Norse colony in Greenland, whose people starved rather than eat fish, for instance. What about embracing a culturally new food for political reasons?
RL: Certainly most people in the past believed that food could affect the success or failure of a whole society. At the end of the nineteenth century, for example, leaders around the world looked at what seemed to be the unstoppable expansion of the Anglo world, that is, the British Empire and the United States of America.
One explanation was that Anglo strength derived from a cuisine based on white wheaten bread and beef served at family meals. Unlike alternative explanations such as the special characteristics of Anglos or their upbringing in bracing climates, this offered a strategy for countering this expansion. If you could persuade your subjects or citizens to abandon corn or rice or cassava, and shift to bread or pasta, if you could persuade them to eat more meat, if you could persuade them to eat as families, then they might become stronger.
EH: Well, I’m naïve, then. “Eating as a family” is not a given across cultures? Please tell me more.
RL: The importance of the family meal as the foundation of society and the state is so deeply ingrained in the American tradition that it’s hard to appreciate just how American it is, perhaps inherited from Dutch settlers. Of course many meals were prepared in the home throughout history, though institutional food was more important than we realize. Just think of the courts, the military, the religious orders, as well as prisons, boarding schools, poor houses, and so on. Just think of the pictures of dining in the past and how rarely it is a family that is depicted. Who you ate with reflected rank rather than family ties.
But even when prepared in the home, the meal was often very different from that depicted in Norman Rockwell’s “Freedom from Want.” The children might eat in the nursery, as in nineteenth-century middle class England. Or the father might eat in a different place and at a different time from the wife, as in Japan. Or the father might eat food prepared by different wives on different days, as in Nigeria. Or the meal might include unrelated apprentices and farmhands. So to many societies, the idea of the communal family meal as offering both physical and moral/social nourishment was a novelty.
EH: And the shift to bread, pasta, and meat?
RL: Even in the United States, there were concerted efforts to persuade southerners, particularly in the Appalachians, to abandon corn bread for biscuits of wheat flour. And Brazilians, Mexicans, Venezuelans, Colombians, Indians, and Chinese debated, and often put in place policies to bring about this change. The most successful efforts were in Japan where the diets of the military and of people living in cities were changed to add more meat, more fat, more wheat, and to introduce family meals.
EH: Ah! Taking on the strength of the aggressor, or of the dominant culture! I wonder who’s doing that right now, and with regard to whose food… I’m fascinated with the cover of Cuisine and Empire. I know it’s a Japanese print. I wanted it to be the Jesuits, but that’s centuries off the mark.
RL: It’s a print in the Library of Congress collection by the Japanese artist, Yoshikazu Utagawa, made in 1861 just a few years after the forcible opening of Japan to the West. It shows two Americans, great big fellows, one of them baking bread in a beehive oven and the other preparing a dish over a bench top stove. I chose it because it so nicely illustrates the themes of the book. It puts the kitchen at the center. And it shows the keen interest that societies took in observing, and often copying, the cuisines of rivals.
EH: The kitchen at the center of history -- a beautiful phrase. The book launches very soon.
RL: I believe the official launch date is in November. Copies, though, will be available this week.
EH: Well, mine will arrive today or tomorrow. Thank you so much for this fascinating preview and discussion. I’m already thinking how to incorporate 20,000 years of causality into the book party menu.
A different version of this interview, emphasizing gastronomy in history, is available at The Rambling Epicure.
Read Rachel’s article for Saudi Aramco World on the Islamic influence on Mexican cuisine.
Read Rachel’s personal blog, “A Historian’s Take on Food and Food Politics,” at http://www.rachellaudan.com/

Live in or around Boston? Come with me to a talk by Rachel Laudan the evening of October 28 at BU!
Monday, August 19, 2013
by Jalees Rehman
The "Reclaim Scientism" movement is gaining momentum. In his recent book "The Atheist's Guide to Reality: Enjoying Life without Illusions", the American philosopher Alexander Rosenberg suggests that instead of viewing the word "scientism" as an epithet, atheists should expropriate it and use it as a positive term which describes their worldview. Rosenberg also provides a descriptive explanation of how the term "scientism" is currently used:
Scientism — noun; scientistic — adjective.
Scientism has two related meanings, both of them pejorative. According to one of these meanings, scientism names the improper or mistaken application of scientific methods or findings outside their appropriate domain, especially to questions treated by the humanities. The second meaning is more common: Scientism is the exaggerated confidence in the methods of science as the most (or the only) reliable tools of inquiry, and an equally unfounded belief that at least the most well established of its findings are the only objective truths there are.
Rosenberg's explanation of "scientism" is helpful because it highlights the difference between science and scientism. Science refers to applying scientific methods as tools of inquiry to collect and interpret data, whereas "scientism" refers to cultural and ideological views promoting the primacy or superiority of scientific methods over all other tools of inquiry. Some scientists embrace scientistic views, in part because scientism provides a much-needed counterbalance to aggressive anti-science attitudes that are prevalent on both ends of the political spectrum and among some religious institutions. However, other scientists are concerned about propping up scientism as a bulwark against ideological science-bashing because it smacks of throwing out the baby with the bathwater. Science is characterized by healthy skepticism, the dismantling of dogmatic views and a continuous process of introspection and self-criticism. Infusing science with ideological stances concerning the primacy of the scientific method could undermine the power of science which is rooted in its willingness to oppose ideological posturing.
As a scientist who investigates signaling mechanisms and the metabolic activity of stem cells, I am concerned about the rise of some movements that fall under the "scientism" umbrella, because they have the potential to impede scientific discovery. Scientific progress relies on recognizing the limitations and flaws in existing scientific concepts and refuting scientific views that cannot be reconciled with newer scientific observations. An exaggerated confidence in the validity of scientific findings could stifle such refutations. For example, some of the most widely cited scientific papers in the field of stem cell biology cannot be replicated, but they have had an enormous detrimental impact on science and medicine, in part because of an exaggerated faith in the validity of some initial experiments.
I first began studying the use of stem and progenitor cells to enhance cardiovascular repair and regeneration over a decade ago. At that time, many of my colleagues and I were excited about a recent paper published by a group of scientists based at New York Medical College in the high-profile scientific journal Nature in 2001. The paper suggested that injected adult bone marrow stem cells could be successfully converted into functional heart cells and recover heart function after a heart attack by generating new heart tissue. The usage of adult regenerative cells was a very attractive option because it would allow patients to be treated with their own cells and could circumvent the ethical and political controversies associated with embryonic stem cells. This animal study gained even more traction when supportive experimental and human studies were published by other scientists. Then a German research group under the direction of the cardiologist Bodo Strauer published a paper in 2002 which showed that not only could adult human bone marrow cells be safely injected into heart attack patients but that these adult cells even appeared to improve heart function.
The stir caused by these discoveries was not just confined to scientists. The findings were widely reported in the media and I recall numerous discussions with physicians who claimed that cardiovascular disease would soon be a problem of the past, because patients would receive routine bone marrow injections after heart attacks. One colleague even advised me to reconsider my career choices since the usage of bone marrow cells could address most if not all issues in cardiovascular regeneration.
This excitement was somewhat dampened when a refutation of the 2001 Nature paper was published in 2004, also in the journal Nature. A collaborative effort of two US-based stem cell research groups was not able to replicate the findings of the 2001 paper. The scientists were unable to find any significant conversion of adult bone marrow cells into functional heart cells. However, many physicians, scientists and patients had already adopted an unshakable belief in the validity of the bone marrow cell treatments after heart attacks. Hundreds of heart attack patients were being enrolled in clinical trials involving the injection of bone marrow cells. Clinics in Thailand or Mexico began offering bone marrow injections to heart patients from all around the world, for a hefty price, both in terms of monetary payments and in terms of safety, because they exposed patients to the risks of invasive injections of bone marrow cells into their hearts.
Although the initial clinical studies, which had enrolled only small numbers of patients, showed a beneficial effect of bone marrow cell injections, subsequent trials could not confirm these early successes. It became apparent that even if bone marrow cell injections did exert a therapeutic benefit in heart attack patients, these benefits were rather modest. Scientists increasingly realized that the observed benefits may have been causally unrelated to the small fraction of stem cells contained within the bone marrow. Instead of bone marrow stem cells becoming functional heart cells, some bone marrow cells may have merely released protective proteins, which could explain the slight improvement in heart function without necessarily generating new heart tissue. One of the largest bone marrow cell treatment trials for heart attack patients to date was published in 2013 and showed no evidence of improved heart function following the cell injections.
In hindsight, many of us have wondered why we were not more skeptical of the initial findings. When compared to embryonic stem cells, adult bone marrow stem cells have a very limited ability to differentiate into cell types other than those typically found in the bone marrow. Furthermore, the clinical studies which reported successful treatment of heart attack patients used unpurified bone marrow cells from the patients. The stem cell content of such unpurified preparations is roughly 1% or less, which means that 99% of the injected bone marrow cells were NOT stem cells. For the tiny fraction of bona fide stem cells in the bone marrow to convert into sufficient numbers of beating heart cells and even create new functional heart tissue would have been akin to a miracle.
Critical thinking and healthy skepticism, the scientific peer review process and even common sense should have alerted us to the problems associated with these claims, but they all failed. Perhaps scientists, physicians and patients were so excited by the prospect of creating new heart tissue that they suspended much-needed skepticism. Exaggerated confidence in the validity of the scientific data published in highly regarded scientific journals may have played an important role. Unintentional cognitive biases of scientists who conducted the experiments and a disregard for alternative explanations could also have contributed to the propagation of ideas that would not withstand subsequent testing. Scientific misconduct may have been a factor as well: the cardiologist who conducted the first clinical studies with bone marrow cell infusions in heart attack patients is currently under investigation for massive errors in how the experiments were conducted and reported.
This is just one example to illustrate problems associated with an exaggerated confidence in the validity of scientific findings, a kind of confidence which scientism engenders. Such examples are by no means restricted to stem cell biology. A recent analysis of scientific reproducibility in cancer research claimed that only 11% of published cancer biology papers could be independently validated, and other areas of scientific research may be similarly afflicted by the problem of irreproducibility of published, peer-reviewed scientific papers.
Increasing numbers of scientists are recognizing that current approaches to interpreting and publishing scientific data are severely flawed. Exaggerated confidence in the validity of scientific findings is frequently misplaced and claims that scientific results represent objective truths need to be re-evaluated particularly when a high percentage of experimental results cannot be replicated by fellow scientists. In this particular context, the views of scientists who are trying to learn lessons from the failures of the scientific peer review process are not so different from those of "scientism" critics. However, many scientists, myself included, remain reluctant to use the expression "scientism".
Rosenberg illustrates the problems associated with the word "scientism". Since "scientism" is often used as an epithet, invoking "scientism" may impede constructive discussions about the appropriateness of applying scientific methods. While a question such as "Can issues of morality be answered by scientific experiments?" may be important, introducing the term "scientism" with all its baggage distracts from addressing the question in a rational manner.
The other major issue associated with the term "scientism" is its vagueness. It is difficult to discuss "scientism" if it encompasses a broad range of distinct concepts, such as the notion that science has to remain within certain boundaries as well as a criticism of overweening confidence in the validity of scientific findings. I can easily identify with calling for a realistic reappraisal of whether scientific results obtained by one laboratory constitute an objective, scientific truth, but I am opposed to drawing boundary lines that forbid certain forms of scientific inquiry because they might infringe on the domains of the humanities. Instead of using the diffuse expression "scientism", I have thus introduced the term "science mystique" to criticize the exaggerated, near-mythical confidence in the infallibility of scientific results.
Rosenberg's view that the expression "scientism" and also the culture of "scientism" should be embraced received a big boost when the scientist Steven Pinker published his polemic essay "Science Is Not Your Enemy: An impassioned plea to neglected novelists, embattled professors, and tenure-less historians". Like Rosenberg, Pinker wants to rehabilitate the expression "scientism" and use it to indicate a positive, science-affirming worldview. Unfortunately, instead of engaging in a constructive dialogue about the culture of "scientism", Pinker reveals his condescending attitude towards the humanities throughout the essay. His notion of respect for the humanities consists of pointing out how much better off classical philosophers might have been if they had been aware of modern neuroscience. But Pinker does not comment on the converse proposition: Would scientists be better off if they knew more about philosophy? Pinker goes on to portray scientists as dynamic forward thinkers, while humanities scholars are supposedly weighed down by their intellectual inertia:
"Several university presidents and provosts have lamented to me that when a scientist comes into their office, it's to announce some exciting new research opportunity and demand the resources to pursue it. When a humanities scholar drops by, it's to plead for respect for the way things have always been done."
Pinker glosses over the reproducibility issues in science and reaffirms his faith in the current system of scientific peer review without commenting on the limitations of scientific peer review:
"Scientism, in this good sense, is not the belief that members of the occupational guild called "science" are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable."
The philosopher and scientist Massimo Pigliucci has written an excellent response to Steven Pinker, which discusses the flaws inherent in Pinker's polemic and explains why promoting a culture of scientism or a "science mystique" is not in the interest of science. I also agree with the physicist Sean Carroll who reminds us that we should get rid of the term "scientism"; not because he wants to get rid of a critical evaluation of science, but because he thinks this poorly defined term is not very helpful.
Whether or not we use the word "scientism", it is apparent that the debates between the critics and defenders of the culture of "scientism" are here to stay. It is unlikely that rehabilitating the unhelpful word "scientism" or polemical stances towards the humanities will contribute to this debate in a meaningful manner. The challenge for scientists and non-scientists is to embrace and address the legitimate criticisms of science without promoting the agenda of irrational anti-science bashing.
Monday, July 22, 2013
Three Seconds: Poems, Cubes and the Brain
by Jalees Rehman
A child drops a chocolate chip cookie on the floor, immediately picks it up, looks quizzically at a parental eye-witness and proceeds to munch on it after receiving an approving nod. This is one of the versions of the "three second rule", which suggests that food can be safely consumed if it has had less than three seconds contact with the floor. There is really no scientific basis for this legend, because noxious chemicals or microbial flora do not bide their time, counting "One one thousand, two one thousand, three one thousand,…" before they latch on to a chocolate chip cookie. Food will likely accumulate more bacteria, the longer it is in contact with the floor, but I am not aware of any rigorous scientific study that has measured the impact of food-floor intercourse on a second-to-second basis and identified three seconds as a critical temporal threshold. Basketball connoisseurs occasionally argue about a very different version of the "three second rule", and the Urban Dictionary provides us with yet another set of definitions for the "three second rule", such as the time after which one loses a vacated seat in a public setting. I was not aware of any of these "three second rule" versions until I moved to the USA, but I had come across the elusive "three seconds" time interval in a rather different context when I worked at the Institute of Medical Psychology in Munich: Stimuli or signals that occur within an interval of up to three seconds are processed and integrated by our brain into a "subjective present".
I joined the Institute of Medical Psychology at the University of Munich as a research student in 1992 primarily because of my mentor Till Roenneberg. His intellect, charm and infectious enthusiasm were simply irresistible. I scrapped all my plans to work on HIV, cancer or cardiovascular disease and instead began researching the internal clock of marine algae in Till's laboratory – in an Institute of Medical Psychology. Within weeks of working at the institute, I realized how fortunate I was. Ernst Pöppel, one of Germany's leading neuroscientists and the director of the institute, had created a multidisciplinary research heaven. Ernst assembled a team of remarkably diverse researchers who studied neurobiology, psychology, linguistics, mathematics, philosophy, endocrinology, cell physiology, marine biology, computer science, ecology – all on the same floor. Since I left the institute nearly 20 years ago, I have worked in many academic departments at various institutions, each claiming to value multidisciplinary studies, but I have never again encountered any place that has been able to successfully integrate natural sciences, social sciences and the humanities in the same way as the Munich institute.
The central, unifying theme of the institute was time. Not physical time, but biological and psychological time. How does our brain perceive physical time? What is the structure of perceived time? What regulates biological oscillations in humans, animals and even algae? Can environmental cues modify temporal perception? The close proximity of so many disciplines made for fascinating coffee-break discussions, forcing us to re-evaluate our own research findings in the light of the discoveries made in neighboring labs and inspired us to become more creative in our experimental design.
Some of the most interesting discussions I remember revolved around the concept of the subjective present, i.e. the question of what it is that we perceive as the "now". Our brain continuously receives input from our senses, such as images we see, sounds we hear or sensations of touch. For our brain to process these stimuli appropriately, it creates a temporal structure so that it can tell apart preceding stimuli from subsequent stimuli. But the brain not only assigns a temporal order to the stimuli, it also integrates them and conveys to us a sense of the subjective past and the subjective present. We often use vague phrases such as "living in the moment" and we all have a sense of what is the "now", but we do not always realize what time intervals we are referring to. If we just saw an image or heard a musical note one second ago, physical time would clearly place them in "the past". Decades of research performed by Ernst Pöppel and his colleagues at the institute, as well as several other laboratories around the world, suggest that our brain integrates our subjective temporal reality in chunks of approximately three second duration.
Temporal order can be assessed in a rather straightforward experimental manner. Research subjects can be provided sequential auditory clicks, one to each ear. If the clicks are one second apart, nearly all participants can correctly identify whether or not the click in the right ear came before the one in the left ear. It turns out that this holds true even if the clicks are only 100 milliseconds (0.1 seconds) apart. The threshold for being able to correctly assign a temporal order to such brief stimuli lies around 30 milliseconds for young adults (up to 25 years old) and 60 milliseconds for older adults.
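The logic of such a temporal-order judgment can be illustrated with a small Monte Carlo sketch. This is a toy signal-detection model, not the analysis used in the studies above: each click's perceived arrival time is jittered by Gaussian internal noise, and the noise level is a hypothetical value chosen so that the 75%-correct threshold falls near the ~30 ms figure reported for young adults.

```python
import random

def percent_correct(soa_ms, sigma_ms=31.5, trials=20_000, seed=42):
    """Estimate how often a simulated listener correctly reports
    which ear was stimulated first, for a given stimulus onset
    asynchrony (soa_ms). Each click's perceived time is jittered
    by Gaussian internal noise (sigma_ms is a hypothetical value)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        perceived_first = rng.gauss(0.0, sigma_ms)
        perceived_second = rng.gauss(soa_ms, sigma_ms)
        if perceived_first < perceived_second:
            correct += 1
    return correct / trials

print(percent_correct(100))  # nearly always correct
print(percent_correct(30))   # around the 75%-correct threshold
print(percent_correct(1))    # near chance (0.5)
```

In this toy model the proportion correct is simply the probability that the internal noise does not swap the perceived order, which is why performance collapses toward chance as the separation shrinks to a few milliseconds.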
Temporal integration of stimuli, on the other hand, cannot be directly measured through experiments. It is not possible to ask research subjects "Are these two stimuli part of your now?" and expect a definitive answer, because everyone has a different concept and definition of what constitutes "now". Therefore, researchers such as Ernst Pöppel have had to resort to indirect assessments of temporal integration, ascertaining what interval of time is grasped as a perceptual unit by our brain. An excellent summary of this work can be found in the paper "A hierarchical model of temporal perception". Instead of reviewing the hundreds of experiments that have led researchers to derive the three-second interval, I will just review two studies which I believe are among the most interesting.
In one of the studies, Pöppel partnered up with the American poet Frederick Turner. Turner and Pöppel recorded and measured hundreds of Latin, Greek, English, Chinese, Japanese, French and German poems, analyzing the length of each LINE. They used the expression LINE to describe a "fundamental unit of metered poetry". In many cases, a standard verse or line in a poem did indeed fit the Turner-Pöppel definition of a LINE, but they used the more generic LINE for their analysis because not all languages or orthographic traditions write or print a LINE in a separate space as is common in English or German poems. If a long line in a poem was divided by a caesura into two sections, Turner and Pöppel considered this to be two LINES.
The basic idea behind this analysis was that each unit of a poem (LINE) conveys one integrated idea or thought, and that the reader experiences each LINE as a "now" moment while reading the poem. Turner and Pöppel published their results in the classic essay "The Neural Lyre: Poetic Meter, the Brain, and Time", for which they also received the Levinson Prize in 1983. Their findings were quite remarkable. The peak duration of LINES in poems was between 2.5 seconds and 3.5 seconds, independent of what language the poems were written in. For example, 73% of German poems had a LINE duration between 2 and 3 seconds. Here are some of their other specific findings:
Epic meter (a seven-syllable line followed by a five-syllable one) (average) 3.25 secs.
Waka (average) 2.75 secs.
Tanka (recited much faster than the epic, as 3 LINES of 5, 12, and 14 syllables) (average) 2.70 secs.
Four-syllable line 2.20 secs.
Five-syllable line 3.00 secs.
Seven-syllable line 3.80 secs.
Pentameter 3.30 secs.
Seven-syllable trochaic line 2.50 secs.
Stanzas using different line lengths 3.00 secs., 3.10 secs.
Ballad meter (octosyllabic) 2.40 secs.
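As a quick sanity check, the figures quoted above can be averaged in a few lines of Python (labels abbreviated; the two stanza values are listed separately):

```python
# Average LINE durations in seconds, transcribed from the list above.
line_durations = {
    "Epic meter": 3.25,
    "Waka": 2.75,
    "Tanka": 2.70,
    "Four-syllable line": 2.20,
    "Five-syllable line": 3.00,
    "Seven-syllable line": 3.80,
    "Pentameter": 3.30,
    "Seven-syllable trochaic line": 2.50,
    "Mixed-length stanzas (a)": 3.00,
    "Mixed-length stanzas (b)": 3.10,
    "Ballad meter": 2.40,
}

values = list(line_durations.values())
mean = sum(values) / len(values)
in_band = [v for v in values if 2.5 <= v <= 3.5]

print(f"mean LINE duration: {mean:.2f} s")                         # 2.91 s
print(f"within the 2.5-3.5 s band: {len(in_band)}/{len(values)}")  # 8/11
```

Eight of the eleven listed values fall inside the 2.5-3.5 second band, and the overall mean sits just under three seconds.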
Poets all around the world did not conspire to write three-second LINES. It is more likely that our brains are attuned to processing poetic information in three-second chunks and that poets are subconsciously aware of this. This was not a controlled, rigorous scientific study, but the results are nevertheless fascinating, not only because they point towards the three-second interval that neuroscientists have established in recent decades for temporal integration in the brain, but also because they suggest that the rules for metered poetry may be universal. I strongly advise everyone to read the now classic essay by Turner and Pöppel, and then to read their own favorite poems aloud and see whether the LINES indeed approximate three seconds.
A second approach to probing the temporal integration process in our brain is the use of perceptual reversal experiments, such as those performed with the Necker cube. This cube is a 2-D line drawing which our brain perceives as a cube – or actually as two distinct cubes. Most people who stare at the drawing for a while will notice that their mind creates two distinct cube representations. Once the mind perceives the two different cubes, it becomes very difficult to cling to just one cube representation. Our brain starts flip-flopping between the two cubes, even when we try our best to hang on to just one of the cube representations in our mind. Interestingly, the average time it takes for our mind to automatically shift from one cube representation to the other approximates three seconds.
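Because the flips cannot be observed directly, such experiments typically have viewers press a key at each perceived reversal and then analyze the intervals between key presses. A minimal sketch of that analysis, using made-up key-press times rather than data from any cited study:

```python
import statistics

def reversal_intervals(timestamps):
    """Return the intervals between successive perceptual flips,
    given the times (in seconds) at which a viewer reported them."""
    return [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]

# Hypothetical key-press times for one viewing session (seconds).
presses = [0.0, 3.1, 6.0, 9.4, 12.2, 15.3]
intervals = reversal_intervals(presses)
print(statistics.mean(intervals))  # ~3.06 s per flip, close to the 3 s described above
```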
Nicole von Steinbüchel, a colleague of Ernst Pöppel at the Institute of Medical Psychology, asked a fascinating question. If the oscillatory perceptual shift between the two cube representations is indeed indicative of the "subjective present" and the temporal integration capacity, would brain injury affect the oscillation? She studied patients who had brain lesions (usually due to a stroke) in either the left or right hemisphere of the brain. She and her team of researchers were able to show that while healthy participants reported a three second interval between the automatic shifting of the cube representations in their brain, the average shift time was four seconds in patients with brain damage in the left brain hemisphere and up to six seconds if the damage had occurred in a certain part of the right brain hemisphere. Nicole von Steinbüchel's research demonstrates the clinical relevance of studying temporal integration, but it also suggests that the brain may have designated areas which specialize in creating a temporal structure.
The analysis of poetry and the Necker cube experiments are just two examples of cognitive studies indicating that our brain uses three-second intervals to process information and generate the experience of the "now" or the "subjective present". Taken alone, none of these studies constitutes conclusive proof that our brain uses three-second intervals, but one cannot help but notice a remarkable convergence of data pointing towards a cognitive three-second rule.
Frederick Turner and Ernst Pöppel (1983) "The Neural Lyre: Poetic Meter, the Brain, and Time" Poetry 142(5): 277-309. A reprint also available online here: http://www.cosmoetica.com/B22-FT2.htm
Ernst Pöppel (1997) "A hierarchical model of temporal perception" Trends in Cognitive Sciences 1(2): 56-61.
Nicole von Steinbüchel (1998) "Temporal ranges of central nervous processing: clinical evidence" Experimental Brain Research 123 (1-2): 220-233.
Monday, February 04, 2013
The Science Mystique
by Jalees Rehman
Many of my German high school teachers were intellectual remnants of the “68er” movement. They had either been part of the 1968 anti-authoritarian and left-wing student protests in Germany or they had been deeply influenced by them. The movement gradually fizzled out and the students took on seemingly bourgeois jobs in the 1970s as civil servants, bank accountants or high school teachers, but their muted revolutionary spirit remained on the whole intact. Some high school teachers used the flexibility of the German high school curriculum to infuse us with the revolutionary ideals of the 68ers. For example, instead of delving into Charles Dickens in our English classes, we read excerpts of the book “The Feminine Mystique” written by the American feminist Betty Friedan.
Our high school level discussion of the book barely scratched the surface of the complex issues related to women’s rights and their portrayal by the media, but it introduced me to the concept of a “mystique”. The book pointed out that seemingly positive labels such as “nurturing” were being used to propagate an image of the ideal woman, who could fulfill her life’s goals by being a subservient and loving housewife and mother. She might have superior managerial skills, but they were best suited to running a household and not a company, and she would need to be protected from the aggressive male-dominated business world. Many women bought into this mystique, precisely because it had elements of praise built into it, without realizing how limiting it was to be placed on a pedestal. Even though the feminine mystique has largely been eroded in Europe and North America, I continue to encounter women who cling to it, particularly Muslim women in North America who emphasize that gender segregation and restrictive dress codes for women are a form of “elevation” and honor. They claim these social and personal barriers make them feel unique and precious.
Friedan’s book also made me realize that we were surrounded by so many other similarly captivating mystiques. The oriental mystique was dismantled by Edward Said in his book “Orientalism”, and I have to admit that I myself was transiently trapped in this mystique. Being one of the few visibly “oriental” individuals among my peers in Germany, I liked the idea of being viewed as exotic, intuitive and emotional. After I started medical school, I learned about the “doctor mystique”, which was already on its deathbed. Doctors had previously been seen as infallible saviors who devoted all their time to heroically saving lives and whose actions did not need to be questioned. There is a German expression for doctors which is nowadays predominantly used in an ironic sense: “Halbgötter in Weiß” – Demigods in White.
Through persistent education, books, magazine and newspaper articles, TV shows and movies, many of these mystiques have been gradually demolished. It has become common knowledge that women can be successful as ambitious CEOs or as brilliant engineers. We now know that “Orientals” do not just indulge their intuitive mysticism but can become analytical mathematicians. People readily accept the fact that doctors are human, they make mistakes and their medical decisions can be influenced by pharmaceutical marketing or by spurious squabbles with colleagues. One of my favorite TV shows was the American medical comedy Scrubs, which gave a surprisingly accurate portrayal of what it meant to work in a hospital. It was obviously fictional and contained many exaggerations to increase its comedic impact, but I could relate to many of the core themes presented in the show. The daily frustrations of being a physician-in-training or a senior attending physician, the fact that physicians make mistakes, the petty fights among physicians that can negatively impact their patients, the immense stress of having to deal with patients who cannot be helped, financial incentives, physicians and nurses with substance abuse problems – these were all challenges that either I or my friends and colleagues had experienced.
One lone TV show such as Scrubs cannot be credited with taking down the “doctor mystique”, but it did provide a vehicle for us physicians to talk about the “dark side of medicine”. Speaking about flawed clinical decision-making and how personal emotions can affect our interactions with patients is not easy for physicians, because this form of introspection can lead to paralyzing guilt. All physicians know they make mistakes, and even though we ourselves do not buy into the “doctor mystique”, we may still feel the burden of having to live up to it. I remember how I used to discuss some of the Scrubs episodes with other physicians, and these light-hearted conversations about funny scenes in the TV show sometimes led to deeper discussions about our own personal experiences and the challenges we faced in our profession.
Being placed on a pedestal is a form of confinement. Dismantling mystiques not only liberates the individuals who are being mystified, but it can also benefit society as a whole. In the case of the doctor mystique, patients are now more likely to question the decisions of physicians, thus forcing doctors to explain why they are prescribing certain medications or expensive procedures. The internet enables patients to obtain information about their illnesses and treatment options. Instead of blindly following doctors’ orders, they want to engage their doctor in a discussion and become an integral part of the decision-making process. The recognition that gifts, free dinners and honoraria paid by pharmaceutical companies strongly influence what medications doctors prescribe has led to the establishment of important new rules at universities and academic journals to curb this influence. Many medical schools now strongly restrict interactions between pharmaceutical company representatives and physicians-in-training. Academic journals and presentations at universities or medical conferences require a complete disclosure of all potential financial relationships that could impact the objectivity of the presented data. Some physicians may find these regulations cumbersome and long for the “mystique” days when their intentions were not under such scrutiny, but many of us think that these changes are making us better physicians and improving medical care.
As I watch many of these mystiques crumble, one mystique continues to persist: The Science Mystique. As with other mystiques, it consists of a collage of falsely idealized and idolized notions of what science constitutes. This mystique has many different manifestations, such as the firm belief that reported scientific findings are absolutely true beyond any doubt, that scientific results obtained today are likely to remain true for all eternity and that scientific research will be able to definitively solve all the major problems facing humankind. This science mystique is often paired with an over-simplified and reductionist view of science. Some popular science books, press releases or newspaper articles refer to scientists having discovered the single gene or the molecule that is responsible for highly complex phenomena, such as a disease like cancer or philosophical constructs such as morality. I was recently discussing a paper on wound healing when I came across an intriguing comment in a public comment thread: “When I read an article related to science it puts me in the mindset of perfection and credibility”. This is just one anecdotal comment, but I think that it captures the Science Mystique held by many non-scientists who place science on a pedestal of perfection.
As flattering as it may be, few scientists see science as encapsulating perfection. Even though I am a physician, most of my time is devoted to working as a cell biologist. My laboratory currently studies the biology of stem cells and the role of mitochondrial metabolism in stem cells. In the rather antiquated division of science into “hard” and “soft” sciences, where physics is considered a “hard” science and psychology or sociology are considered “soft” sciences, my field of work would be considered a middle-of-the-road, “firm” science. As cell biologists, we are able to conduct well-defined experiments, falsify hypotheses and directly test cause-effect relationships. Nevertheless, my experience with scientific results is that they are far from perfect, and most good scientific work usually raises more questions than it provides answers. We scientists are motivated by our passion for exploration, and we know that even when we are able to successfully obtain definitive results, these findings usually point out even greater deficiencies and uncertainties in our knowledge. Stuart Firestein’s wonderful book “Ignorance: How It Drives Science” is a sincere and eloquent testimony to the key role of ignorance in scientific work. A thoughtful “I do not know the answer to this” uttered by a scientist is typically seen as a sign of scientific maturity, because it shows the humility of the scientist and indicates a potential new direction for scientific research. On the other hand, when a scientist proudly proclaims to have found the most important gene or to have defined the most important pathway for a certain biological process, it frequently indicates a lack of understanding of the complexity of the matter at hand.
One key problem of science is the issue of reproducibility. Psychology is currently undergoing a soul-searching process because many questions have been raised about why published scientific findings have such poor reproducibility when other psychologists perform the same experiments. One might attribute this to the “soft” nature of psychology, because it deals with variables such as emotions that are difficult to quantify and with heterogeneous humans as their test subjects. Nevertheless, in my work as a cell biologist, I have encountered very similar problems regarding reproducibility of published scientific findings. My experience in recent years is that roughly only half the published findings in stem cell biology can be reproduced when we conduct experiments according to the scientific methods and protocols of the published paper.
This estimate of 50% reproducibility is not a comprehensive analysis. We only attempt to replicate findings which are highly relevant to our work and which are published in a select group of scientific journals. If we tried to replicate every single paper in the field of stem cell biology, the success rate might be even lower. On the other hand, we devote a limited amount of time and resources to replicating results, because there is no funding available for replication experiments. It is possible that if we devoted enough time and resources to replicating a published study, tinkering with the different methods and trying out different batches of stem cells and reagents, we might have a higher likelihood of being able to replicate the results. Since negative studies are difficult to publish, these failed attempts at replication are buried, and the published papers that cannot be replicated are rarely retracted. When scientists meet at conferences, they often informally share their respective experiences with attempts to replicate research findings. These casual exchanges can be very helpful, because they ensure that we do not waste resources building new scientific work on the shaky foundations of papers that cannot be replicated.
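To see how much statistical uncertainty hides behind an informal "roughly half" tally, one can put a confidence interval around a small replication count. The numbers below are made up for illustration; the Wilson score interval is a standard choice for a binomial proportion:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half_width = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half_width, center + half_width

# Hypothetical tally: 10 of 20 replication attempts succeeded.
lo, hi = wilson_interval(10, 20)
print(f"95% CI for the replication rate: {lo:.2f} to {hi:.2f}")  # 0.30 to 0.70
```

With only 20 attempts, the interval spans roughly 30% to 70%, which is consistent with anything from "mostly irreproducible" to "mostly reproducible"; small informal samples simply cannot pin the rate down.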
In addition to knowing that a significant proportion of published scientific findings cannot be replicated, scientists are also aware that scientific knowledge is dynamic. Technologies used to acquire scientific data are continuously changing, and the new scientific data amassed in any single year far outpaces the capacity of scientists to fully understand and analyze it. Most scientists are currently struggling to keep up with the new scientific knowledge in their own field, let alone put it in context with the existing literature. As I have previously pointed out, 30 to 40 or more scientific papers are published on an average day in the field of stem cell biology alone. This overwhelming wealth of scientific information inevitably leads to a short half-life of scientific knowledge, as Samuel Arbesman has explained in his excellent book “The Half-Life of Facts”. What is considered a scientific fact today may be obsolete within five years. The books by Firestein and Arbesman are shining examples among the plethora of recent popular science books, because they explain why scientific knowledge is so ephemeral and yet so important. Hopefully, these books will help deconstruct the Science Mystique.
One aspect of science that receives comparatively little attention in popular science discussions is the human factor. Scientific experiments are conducted by scientists who have human failings, and thus scientific fallibility is entwined with human frailty. Some degree of limited replicability is intrinsic to the subject matter itself. A paper on cancer cells published by one group of researchers may use a different set of cancer cells, obtained from their own patients, than those available to other researchers. At other times, researchers may make unintentional mistakes in interpreting their data or may unknowingly use contaminated samples. One can hardly blame scientists for the heterogeneity of their tested samples or for making honest errors. However, there are far more egregious lapses that have a major impact on how science is conducted. There are cases of outright fraud, in which researchers simply manufacture non-existent data, but these tend to be rare, and when colleagues, scientific journals or scientific organizations become aware of them, the papers are retracted and the scientists face punitive measures. Of the hundred or more scientific colleagues with whom I have personally worked, I do not know of anyone who has committed such fraud. What occurs far more frequently than gross fraud, however, is the gentle fudging of scientific data, consciously or subconsciously, so that desired results are obtained. Statistical outliers are excluded, especially if excluding them helps push the data in the desired direction. Like most humans, scientists have biases and would like to interpret their data in a manner that fits their existing concepts and ideas.
Human fallibility not only affects how scientists interpret and present their data, but can also have a far-reaching impact on which scientific projects receive research funding or which scientific results are published. When manuscripts are submitted to scientific journals or when grant proposals are submitted to funding agencies, they usually undergo review by a panel of scientists who work in the same field and who ultimately decide whether a paper should be published or a grant funded. One would hope that these decisions are primarily based on the scientific merit of the manuscripts or the grant proposals, but anyone who has been involved in these forms of peer review knows that, unfortunately, personal connections or personal grudges can often be decisive factors.
Lack of scientific replicability, knowing about the uncertainties that come with new scientific knowledge, fraud and fudging, biases during peer review – these are all just some of the reasons why scientists rarely believe in the mystique of science. When I discuss this with acquaintances who are non-scientists, they sometimes ask me how I can love science if I have encountered these “ugly” aspects of science. My response is that I love science despite this “ugliness”, and perhaps even because of its “ugliness”. The fact that scientific knowledge is dynamic and ephemeral, the fact that we do not need to feel embarrassed about our ignorance and uncertainties, the fact that science is conducted by humans and is infused with human failings, these are all reasons to love science. When I think of science, I am reminded of the painting “Basket of Fruit” by Caravaggio, which is a still-life of a fruit bowl, but unlike other still-life paintings of fruit, Caravaggio showed discolored and decaying leaves and fruit. The beauty and ingenuity of Caravaggio’s painting lies in its ability to show fruit how it really is, not the idealized fruit baskets that other painters would so often depict.
The challenge that we scientists face is to share our love for science, despite its imperfections, with those around us who do not actively work in the field of science. I remember speaking to a colleague of mine about a wonderful spoof of a Lady Gaga song called “Bad Project”. We both agreed that the spoof was spot on, showing the frustrations of a PhD student unable to get experiments to work, having to base experiments on poorly documented lab notebooks, and the tedious nature of scientific work. My colleague was concerned that if such spoofs ridiculing laboratory work became too common, they would embolden the American anti-science movement, which is already very strong. Anyone who closely follows American science politics knows that creationists and global-warming deniers are constantly looking for flaws in scientific studies and that they use occasional errors as opportunities to suggest that well-established and replicated scientific results or theories should be discarded. In addition to the agenda of these specific anti-science interest groups, there are also many groups lobbying for severe budget cuts, many of which would negatively impact US research funding, which is already at an alarmingly low level.
My response to these concerns is that it is our job as scientists to convince fellow citizens how important science is, despite its limitations and flaws. The fact that scientists recognize the uncertainties and limitations of scientific knowledge is not a weakness, but a strength of the scientific approach and makes it ideally suited to help us understand our world. Enabling a false mystique of science as being definitive and perfect is not going to benefit science or society in the long run. Instead, recognizing our failings and limitations in science and openly discussing them with our fellow citizens is going to help us improve how we conduct science. I think that anyone who carefully looks at Caravaggio’s “imperfect” painting eventually sees its beauty and falls in love with it. I hope that we scientists will be able to share the Caravaggio view of science with the general public.
Image Credits: Painting Basket of Fruit by Caravaggio via Wikimedia Commons
Ecology’s Image Problem
“There are Tories in science who regard imagination as a faculty to be avoided rather than employed. They observe its actions in weak vessels and are unduly impressed by its disasters” —John Tyndall, 1870
In his 1881 essay on Mental Imagery, Francis Galton noted that few Fellows of the Royal Society or members of the French Institute, when asked to do so, could imagine themselves sitting at the breakfast-table from which presumably they had only recently arisen. Members of the general public, women especially, fared much better, being able to conjure up vivid images of themselves enjoying their morning meal. From this Galton, an anthropologist, noted polymath, and eugenicist, concluded that learned men, bookish men, relying as they do on abstract thought, depend on mental images little, if at all.
In this rejection of a scientific role for the imagination, Galton was in disagreement with the Irish physicist John Tyndall, who in an 1870 address to the British Association in Liverpool entitled The Scientific Use of the Imagination claimed that in explaining sensible phenomena, scientists habitually form mental images of that which is beyond the immediately sensible. “Newton’s passage from a falling apple to a falling moon”, Tyndall wrote, “was, at the outset, a leap of the prepared imagination.” The imagination, Tyndall claimed, is both the source of poetic genius and an instrument of discovery in science.
The role of the imagination in chemistry is well enough known. In 1890 the German Chemical Society celebrated the discovery by Friedrich August Kekulé von Stradonitz of the structure of benzene, a ring-shaped aromatic hydrocarbon. At this meeting Kekulé related that the structure of benzene had come to him in a reverie of a snake seizing its own tail (the ancient symbol called the Ouroboros).
Since this is quite a celebrated case of the scientific use of the imagination, I quote Kekulé’s account of the events in full:
“During my stay in Ghent, Belgium, I occupied pleasant bachelor quarters in the main street. My study, however, was in a narrow alleyway and had during the day time no light. For a chemist who spends the hours of daylight in the laboratory this was no disadvantage. I was sitting there engaged in writing my text-book; but it wasn't going very well; my mind was on other things. I turned my chair toward the fireplace and sank into a doze. Again the atoms were flitting before my eyes. Smaller groups now kept modestly in the background. My mind's eye, sharpened by repeated visions of a similar sort, now distinguished larger structures of varying forms. Long rows frequently close together, all, in movement, winding and turning like serpents! And see! What was that? One of the serpents seized its own tail and the form whirled mockingly before my eyes. I came awake like a flash of lightning. This time also [he had had fruitful dreams before] I spent the remainder of the night working out the consequences of the hypothesis. If we learn to dream, gentlemen, then we shall perhaps find truth…” Berichte der deutschen chemischen Gesellsehaft, 1890, 1305-1307 (in Libby 1922).
In supporting his argument about the positive role of the imagination John Tyndall quoted Sir Benjamin Brodie, the chemist, who wrote that the imagination (”that wondrous faculty”) when it is “properly controlled by experience and reflection, becomes the noblest attribute of man”. Brodie cautioned, however, that the imagination when “left to ramble uncontrolled, leads us astray into a wilderness of perplexities and errors…”
The philosopher Virgil Aldrich provided an interesting example of how imagination can be a hindrance to science. Sir Arthur Stanley Eddington, the English astrophysicist, referred frequently, according to Aldrich, to “the world outside us”. Consciousness, in contrast, can be described as being “inside of us.” Using such images, Eddington was, said Aldrich, “under the spell of the telephone-exchange analogy”: where the nerve endings leave off, the world beyond us takes over. If the telephone-exchange image seems ill-chosen, the image, after all, could be worse. One might imagine inner consciousness as a submarine, from our berth within which we come to know the outside world by means of a periscope! Now, Eddington did not use this image (others did), but when we try to make sense of it we can do so only by saying that inner consciousness is like a submarine only when one supposes that it is nothing at all like a submarine. One must “tone down the analogy” to make it useful. If you do otherwise, “the lively imagination begins to protest”. Aldrich speculated that theorists persist with inept picture-making because, when toned down, an image often appears illuminating even when it is not. Moreover, a flashy image is entertaining. Thus one can easily make the “pleasant mistake” of identifying the image with the “real meaning” of an assertion.
A strength of the environmental disciplines is that they bring into proximity bodies of knowledge that are often set apart. Though some quibble with him on this, the historian of ecology Donald Worster places both Charles Darwin, the philosophical scientist, and Henry David Thoreau, the scientific philosopher, at the ground of ecology as a natural scientific discipline. And though it is fair to say that ecology has maintained an identity largely separate from the environmentalisms it has inspired, ecology and environmentalisms have nevertheless been good conversation partners. Both have listened to an admirable degree to their poets, artists and philosophers. A good thing this may be in many ways, but my contention here is that the environmental sciences and the practices associated with them — environmentalisms like sustainability — are prone to taking their most arresting images too literally. I wonder if there is not in environmental thought a pathology of the imagination? Too readily, it seems, we transform a provocative image into a proven hypothesis; we smuggle ancient and baffling worldviews into contemporary conceptions of nature.
I sketch a few examples here to illustrate the case. Perhaps you will have ones that you can add.
Nature as an Organism
You are justified in calling Nature your Mother if you have a mother who wants you dead. A Mother who inculcated both your limitations and your accomplishments. Nature: A Mother who birthed a world equipped with tooth and nail and hungry eye; whose family tie is the ripping of flesh. Why, I wonder, are we quick to demand of God an explanation of evil but incline less to asking that question of Mother Nature?
To call Nature our mother is just one manifestation of the image of the Earth as organism. It is enduring, compelling and surely wrong-footing.
University of Wisconsin historian Frank N. Egerton traces the myth of cosmos as organism back to Plato. Timaeus asked “In the likeness of what animal did the Creator make the world?” He then speculated as follows: “For the Deity, intending to make this world like the fairest and most perfect of intelligible beings, framed one visible animal comprehending within itself all other animals of a kindred nature.” Because of Plato’s fateful influence on the history of western thought, Egerton noted that the implications of this myth have been enduring. According to Egerton the myth is the source of two related concepts “the supraorganismic balance-of-nature concept and the microcosm-macrocosm concept.” The supraorganismic concept views the cosmos as having the attributes of a living thing whereas the microcosm-macrocosm concept takes different parts of the universe to correspond with an organismal body.
Both flavors of the organismal concept get expressed in ecosystem ecology. Natural ecosystems, the influential University of Georgia ecologist Eugene Odum asserted, are integrated wholes that develop in a manner paralleling the development of individual organisms or human societies. The development of natural systems, ecological succession in other words, is orderly, predictable, and directional. It leads, in Odum’s view of things, to a stabilized ecosystem with predictable ratios of biomass, productivity, respiration and so forth. The “strategy” of ecosystem development, as Odum called it, corresponds to the “strategy” of the long-term evolutionary development of the biosphere – “namely, increased control of, or homeostasis with, the physical environment in the sense of achieving maximum protection from its perturbations.” Homeostasis derives etymologically from the Greek for “standing still” and, in the sense that Odum meant to imply, indicates a dynamic and regulated stability. In other words, the stability of the organism.
Odum does not stand here accused of covertly importing the organismal image into his work; he was quite explicit about it. There is much to admire in Odum’s work and the ecology that he inspired, but the sense of design and purpose that it implied in nature (what philosophers call teleology) put Odum's ecosystem ecology at loggerheads with contemporary evolutionary theory which insists on the purposelessness of nature. It has taken quite some time to reconcile ecosystem thought with evolutionary theory.
Another example of the superorganism’s baleful influence can be found in the Gaia hypothesis. In his preface to Gaia: A New Look at Life on Earth (1979) Lovelock wrote:
“The concept of Mother Earth or, as the Greeks called her long ago, Gaia, has been widely held throughout history and has been the basis of a belief which still coexists with the great religions."
If the development of James Lovelock and Lynn Margulis’s Gaia hypothesis is anything to go by, hypotheses about the workings of nature derived from the organismal image of nature have a shelf life of a decade or so. Lovelock’s Gaia: A New Look at Life on Earth was published in 1979, and by 1988, in his book The Ages of Gaia, he had rescinded the teleological claims of the Gaia hypothesis — or at least he had become attentive to the problems that the superorganism concept created. He still maintains that the Earth’s atmosphere is homeostatically regulated, but he admitted to having been led astray by the sirens of the superorganism.
It is a banality of the ecological sciences to state that everything is connected. That ebullient Scot, and eventual stalwart of the American wilderness movement, John Muir, provided the image. He wrote, "When we try to pick out anything by itself, we find it hitched to everything else in the universe."
And if such statements are employed to sponsor the notion that individual organisms cannot be regarded in isolation from those that they consume and those that consume them, or furthermore that, as a consequence of the deep intersections of the living and the never-alive, there can be unforeseen consequences flowing from species additions or removals from ecosystems, then few will argue with this. However, just as the ripples of a stone dropped in a still pond propagate only as far as its edges (though they may entrain delightful patterns in the finest of its marginal sands), not every ecological event has intolerably large costs to exact. True, if the dominoes line up and the circumstances are just so, a butterfly’s wing beat over the Pacific may hurl a typhoon against its shores, but more often than not such lepidopterous catastrophes do not come to pass.
Ecosystems, energized so that matter cycles and conjoins the living with the dead, have their lines of demarcation, borders defined by their internal interactions being more powerful than their external ones. They are therefore buffered against many potentially contagious disasters. This, of course, is the essence of resilience - the capacity of a system to absorb disturbance without disruption to habitual structure and function. Ecology is as much the science investigating the limits of connections as it is the thought that everything is connected.
The Community Concept
Is there a greater 20th Century American environmental thinker than Aldo Leopold? Certainly there are few who provided as many genuinely poetic images: in the eyes of a dying wolf he saw “a fierce green fire”, he exhorted us to “think like a mountain”, he depicted the crane as “wilderness incarnate”. For all of that, has Leopold not led us astray with the images associated with his “ethical sequence”? Leopold’s influential land ethic “enlarges the boundaries of the community concept.” The ethical sequence that he proposed progresses stutteringly from free men, to women, to slaves, to animals, plants, rocks and land. It has a compelling lucidity, though Leopold admitted that it may seem a little too simple. The ethic invites us into community with the land. A person’s self-image will change under a land ethic: “In short,” Leopold writes, “a land ethic changes the role of Homo sapiens from conqueror of the land-community to plain member and citizen of it.”
Now, Leopold is a subtle thinker and knows not to confuse the image with the thing. Certainly he expected this transformation to take quite some time. The land ethic would not emerge without “an internal change in our intellectual emphases, loyalties, affections, and convictions.” I have little problem with the image of extending the ethical circle, other than noting that it makes the task seem easier than it has proven to be. My more serious objection concerns the rather thin notion of community that seems to be implied in Leopold’s image of the plain citizen. As the environmental philosopher William Jordan III has illustrated in his book The Sunflower Forest (2003), missing from Leopold’s account is any acknowledgment of the negative elements of the human experience of community: envy, selfishness, fear, hatred, and shame. As Jordan points out, this leads Leopold and others to “a sentimental, moralizing philosophy that…insists on the naturalness of humans…but that neglects or downplays the radical difficulty of achieving such a sense of self, and also downplays the role of culture and cultural institutions in carrying out this work.” If Leopold’s image of the community and our place within it is an impoverished one, the work of extending the circle becomes impossible.
There are other images that we might have discussed here, ones that have had, at times at least, unfortunate implications for environmental thinking. For instance, in 1864 George Perkins Marsh wrote that mankind is disruptive, not just occasionally, mind you, but “is everywhere a disturbing agent.” One hundred years later the Wilderness Act renewed the image in its definition of wilderness as an area “untrammeled by man.” We might also have considered contemporary accounts of social-ecological systems, in which these systems are posited as a compound substance only for us to tease the components apart again in depicting them.
So, if environmental thought and ecological science has been susceptible to what my colleague and friend Professor David Wise of University of Illinois, Chicago, has called “malicious metaphors”, is there a more productive way to think about the role of the image in developing environmental thought?
The work of the French philosopher Gaston Bachelard (1884–1962) — one of the more lovable of the French phenomenologists, certainly the hairiest — is helpful in sorting out a productive role for the imagination in science. He was renowned for his work on epistemological issues in science as well as for his phenomenological account of the poetic image and his philosophical meditation on reverie. As much as he was a materialist in his approach to science, he was subjective and personal (as a matter of theoretical orientation) in his philosophical work on the imagination.
Bachelard’s work at first glance is so inviting. Chapters in his book The Poetics of Space (1958) have enticing titles like The House from Cellar to Garret, Nests, and Shells. Perhaps this is why the book is a philosophic bestseller; my copy claims “more than 80,000 copies sold”. And though opening a Bachelard book is indeed like relaxing into a warm bath, there is an astringent in those waters. The thought is somewhat obscure, as Bachelard ransacks the lexicons of the various disciplines he brings together in his work: Kantian philosophy, Husserlian phenomenology, Jungian psychoanalysis, and so on. Oftentimes his use of technical terms was novel; reinterpreting them, Bachelard pushed them into new service. Because of this density, I wonder how many of those 80,000 copies have languished on bookshelves? Mine certainly did until the past few weeks.
To enjoy the fruits of Bachelard’s insights we should do at least some of the work of appreciating how he produced them. In the hope that this will embolden you to return to your copy of The Poetics of Space, or other works by Bachelard on the imagination, or pick them up for the first time, I will give a summary, as best I understand it, of what his phenomenology of the image is all about. I am, I should tell you, strictly an amateur Bachelardian.
The poetic image is eruptive for both poet and reader. Bachelard says that for its creation “the flicker of the soul is all that is needed.” So, every great image is its own origin. Famously, Bachelard maintained that the imagination, contrary to the view of many philosophical accounts, is “the faculty of deforming images offered by perception.” The poetic image emerges into consciousness as a direct product of “the heart, soul and being of man.” Elsewhere Bachelard claims that “the imagination [is] a major power of human nature.”
The poetic image is therefore not caught up in a network of causalities. Our first recourse should not be to ask what archetypes an image represents, or what aspects of the poet’s psycho-biography explains it away. In this assertion Bachelard remains true to phenomenology’s maxim of going “back to the things themselves.” In as much as such things are possible, one approaches the poetic image freed from all presuppositions.
So it is of secondary importance to ask where an artistic image comes from; what matters more is to explore what opportunities for freedom an image creates. Instead of cause and effect, at the center point of which we traditionally ask the image to stand, we might rather speak of the “resonances and reverberations” of the image. This is not, I think, just some fanciful softening of language; it is a necessary acknowledgment of the way in which an image does not simply reflect a memory but revives an absent one, and of the way in which an image explodes into further images. When we read the poetic image it resonates; when we communicate it, it reverberates. The repercussions of the image, said Bachelard, “invite us to give greater depth to our own existence.” What bearing does an image have on our freedom? A great piece of art, Bachelard says, “awakens images that have been effaced, at the same time that it confirms the unforeseeable nature of speech. And if we render speech unforeseeable, is this not an apprenticeship to freedom?”
I propose that Gaston Bachelard’s phenomenological account of the poetic image, despite its somewhat unpromising obscurity, is helpful in addressing environmental thought’s special porousness to striking images. In this short sketch I cannot fully substantiate the claim. I will end, however, with an example where an approach such as Bachelard’s seems to have been fruitful.
Tim Morton is one of the most widely read and exciting environmental writers of recent years. As far as I know, he has not cited Bachelard as a methodological inspiration, although his work is phenomenological and existential. [Added: One of Morton's earlier books, on the representation of the spice trade in Romantic literature, was entitled Poetics of Spice (2006) - making him, it would seem, an explicit Bachelardian after all!] Morton is so concerned about the potential of sedimented ideas leading us into Sir Benjamin Brodie’s “wilderness of perplexities and errors” that he elected to drop the term “Nature” altogether. In his book Ecology Without Nature (2007) he explained the problem: “…the idea of nature is getting in the way of properly ecological forms of culture, philosophy, politics, and art.”
The results of Morton’s analysis lead us to strange, perplexing, though ultimately interesting places. Out of this natureless ecology comes a suite of insights on “dark ecology”, an ecology reminding us that we are always already implicated in the ecological. There is no outside from which we get a guilt-free view of the fantastic mess. Deriving also from an ecology developed without a sentimental view of nature comes a fresh analysis of connectedness. Morton revives Muir’s hitching image but this time its resonances are weirder than the oceanic feeling that we are all blissfully in this together. His analysis gives us the queer bestiary of “strange strangers” with which we are stickily intimate, and yet we can never fully get to know. Morton develops this account in The Ecological Thought (2010) which I recommend to you. I am not supposing that this is an adequate summary of Morton’s recent books, but I think that Tim is converging on the idea of resonances and reverberations that Bachelard has written about.
The image, and the imagination, can play a positive role in environmental thinking. Darwin’s image of the “tangled bank” is both a pretty and useful way of thinking about the way in which the organismal profusion developed from a common ancestor. But a misapplied image can be a disaster. Understanding our responsibilities with respect to the image is the work of the future, it is the work that will birth the future.
Walter Libby, "The Scientific Imagination," The Scientific Monthly, Vol. 15, No. 3 (Sep. 1922), pp. 263-270.
Monday, January 07, 2013
A Parched Future: Global Land and Water Grabbing
by Jalees Rehman
“This is the bond of water. We know the rites. A man’s flesh is his own; the water belongs to the tribe.” Frank Herbert - Dune
Land grabbing refers to the large-scale acquisition of comparatively inexpensive agricultural land in foreign countries by foreign governments or corporations. In most cases, the acquired land is located in under-developed countries in Africa, Asia or South America, while the grabbers are investment funds based in Europe, North America and the Middle East. The acquisition can take the form of an outright purchase or a long-term lease, ranging from 25 to 99 years, that gives the grabbing entity extensive control over the acquired land. Proponents of such large-scale acquisitions have criticized the term “land grabbing” because it carries the stigma of illegitimacy and conjures up images of colonialism or other forms of unethical land acquisition that were so common in the not so distant past. They point out that land acquisitions by foreign investors are made in accordance with local laws and that the investments could create jobs and development opportunities in impoverished countries. However, recent reports suggest that these land acquisitions are indeed “land grabs”. NGOs and not-for-profit organizations such as GRAIN, TNI and Oxfam have documented the disastrous consequences of large-scale land acquisitions for local communities. More often than not, the promised jobs are not created, and families that have farmed the land for generations are evicted from their ancestral land and lose their livelihood. The money provided to the government by the investors frequently disappears into the coffers of corrupt officials, while the evicted farmers receive little or no compensation.
One aspect of land grabbing that has received comparatively little attention is the fact that land grabbing is invariably linked to water grabbing. When the newly acquired land is used for growing crops, it requires some combination of rainwater (referred to as “green water”) and irrigation from freshwater resources (referred to as “blue water”). The amount of required blue water depends on the rainfall in the grabbed land. For example, land that is grabbed in a country with heavy rainfall, such as Indonesia, may require very little irrigation and tapping of its blue water resources. The link between land grabbing and water grabbing is very obvious in the case of Saudi Arabia, which used to be a major exporter of wheat in the 1990s, when there were few concerns about the country’s water resources. The kingdom provided water at minimal cost to its heavily subsidized farmers, resulting in very inefficient use of the water. Instead of the global average of roughly 1,000 tons of water per ton of wheat, Saudi farmers used between 3,000 and 6,000 tons of water. Fred Pearce describes the depletion of the Saudi water resources in his book The Land Grabbers:
Saudis thought they had water to waste because, beneath the Arabian sands, lay one of the world’s largest underground reservoirs of water. In the late 1970s, when pumping started, the pores of the sandstone rocks contained around 400 million acre-feet of water, enough to fill Lake Erie. The water had percolated underground during the last ice age, when Arabia was wet. So it was not being replaced. It was fossil water— and like Saudi oil, once it is gone it will be gone for good. And that time is now coming. In recent years, the Saudis have been pumping up the underground reserves of water at a rate of 16 million acre-feet a year. Hydrologists estimate that only a fifth of the reserve remains, and it could be gone before the decade is out.
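The timeline in Pearce's account can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below uses only the figures quoted in the passage (the 400 million acre-feet initial reserve, the estimate that a fifth remains, and the 16 million acre-feet annual pumping rate); the variable names are mine:

```python
# Back-of-the-envelope check of the aquifer depletion timeline in Pearce's
# account. All volumes are in millions of acre-feet, as in the quoted passage.
INITIAL_RESERVE = 400.0      # fossil water beneath Arabia when pumping began
REMAINING_FRACTION = 0.2     # hydrologists' estimate: only a fifth remains
PUMPING_RATE = 16.0          # annual withdrawal in recent years

remaining = INITIAL_RESERVE * REMAINING_FRACTION
years_left = remaining / PUMPING_RATE

print(f"Remaining reserve: {remaining:.0f} million acre-feet")
print(f"Years until exhaustion at the current rate: {years_left:.0f}")
```

At 16 million acre-feet per year, the remaining 80 million acre-feet last about five years, which is consistent with the claim that the water "could be gone before the decade is out".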
Saudi Arabia responded to this depletion of its water resources by deciding to gradually phase out all wheat production. Instead of growing wheat in Saudi Arabia, it would import wheat from African farmlands that were leased and operated by Saudi investors. This way, the kingdom could conserve its own water resources while using African water resources for the production of the wheat that would be consumed by Saudis.
The recent study “Global land and water grabbing” published in the Proceedings of the National Academy of Sciences (2013) by Maria Rulli and colleagues examined how land grabbing leads to water grabbing and can deplete the water resources of a country. The basic idea is that when the grabbed land is irrigated, the use of freshwater resources reduces the availability of irrigation water for neighboring farmland areas, i.e. the areas that have not been grabbed. This in turn can cause widespread water stress and affect the ability of other farmers to grow crops, ultimately leading to poverty and social unrest. Land grabbing is often shrouded in secrecy since local governments do not want to be perceived as selling off valuable land to foreigners, but some details regarding the size of the land grab are eventually made public. The associated water needs of the investors that grab the land are even less clear and very little is publicly divulged about how the land grabbing will affect the water availability for other farmers. In the case of Sudan, for example, grabbed land is often located on the fertile banks of the Blue Nile and while large-scale commercial farmland is expanding as part of the foreign investments, local farmers are losing access to land and water and gradually becoming dependent on food aid, even though Sudan is a major exporter of food produced by the large-scale farms.
Using the global land grabbing database of GRAIN and the Land Matrix Database, Rulli and colleagues analyzed the extent of land grabbing and identified the Democratic Republic of Congo (8.05 million hectares), Indonesia (7.14 million hectares), the Philippines (5.17 million hectares), Sudan (4.69 million hectares) and Australia (4.65 million hectares) as the five countries in which the largest areas of land have been grabbed by foreign investors. The total amount of grabbed land in these five countries is 29.7 million hectares, accounting for nearly 63% of global land grabbing. To put this in perspective, the United Kingdom covers 24.4 million hectares.
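These figures are easy to sanity-check. The short Python sketch below uses only the numbers quoted above; the implied global total is my own back-calculation from the 63% share, not a figure reported in the paper:

```python
# Grabbed land per country, in millions of hectares (figures quoted in the text)
grabbed = {
    "DR Congo": 8.05,
    "Indonesia": 7.14,
    "Philippines": 5.17,
    "Sudan": 4.69,
    "Australia": 4.65,
}

top_five_total = sum(grabbed.values())  # 29.7 million hectares
share_of_global = 0.63                  # "nearly 63%" of all grabbed land

# Back-calculate the implied global extent of land grabbing
global_total = top_five_total / share_of_global

print(f"Top five total: {top_five_total:.2f} million hectares")   # 29.70
print(f"Implied global total: {global_total:.1f} million hectares")  # ~47.1
```

In other words, the global area of grabbed land implied by these numbers is roughly 47 million hectares, nearly twice the area of the UK.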
The researchers calculated the amount of rainfall (green water) falling on the grabbed land, which is the minimum amount of water that would be grabbed with the acquisition of the land. However, since the grabbed land is also used for agriculture and many crops require additional freshwater irrigation (blue water), the researchers also determined a range of predicted blue water grabbing for land irrigation. For the low end of the range, they assumed that the land would be irrigated in the same fashion as other agricultural land in the country. For the high end, they calculated how much blue water would be grabbed if the investors irrigated the land so as to maximize its agricultural production. This is not an unreasonable assumption, since foreign investors probably have the financial resources to irrigate the acquired land in whatever manner maximizes the return on their investment.
Rulli and colleagues estimated that global land grabbing is associated with the grabbing of 308 billion m3 of green water (i.e. rainwater) and an additional grabbing of blue water that can range from 11 billion m3 (current irrigation practices) to 146 billion m3 (maximal irrigation) per year. Again, to put these numbers in perspective, the average daily household consumption of water in the United Kingdom is 150 liters (0.15 m3) per person. This works out to a total annual household consumption of about 3.5 billion m3 (0.15 m3 × 365 days × 63,181,775 UK population) of water in the UK. The total household water consumption of the UK is therefore only a fraction of the predicted blue water usage of the grabbed land, even under very conservative estimates of the required irrigation.
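The UK comparison can be reproduced in a few lines. This sketch, a simple illustration using the figures quoted above, computes the annual UK household consumption and compares it with the low and high ends of the estimated blue water grab:

```python
# Figures quoted in the text above
liters_per_person_per_day = 150          # average UK household consumption
uk_population = 63_181_775

# Annual UK household water consumption in cubic meters
m3_per_person_per_day = liters_per_person_per_day / 1000   # 0.15 m3
uk_annual_m3 = m3_per_person_per_day * 365 * uk_population

blue_water_low = 11e9    # m3/year, assuming current irrigation practices
blue_water_high = 146e9  # m3/year, assuming maximal irrigation

print(f"UK annual household consumption: {uk_annual_m3 / 1e9:.2f} billion m3")  # 3.46
print(f"Low-end blue water grab: {blue_water_low / uk_annual_m3:.1f}x the UK total")
print(f"High-end blue water grab: {blue_water_high / uk_annual_m3:.0f}x the UK total")
```

Even the conservative low-end estimate of the blue water grab is more than three times the entire annual household water consumption of the UK; the high-end estimate is roughly forty times as large.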
The researchers then also list the top 25 countries in which the land- and water-grabbing investors are based. They find that about “60% of the total grabbed water is appropriated, through land grabbing, by the United States, United Arab Emirates, India, United Kingdom, Egypt, China, and Israel”. The researchers gloss over the fact that in many cases, land and associated water resources are grabbed by foreign investment groups and not by foreign governments. Just because certain investment funds are based in Singapore, the UK or the United Arab Emirates does not mean that these countries are “appropriating” the land or water. In fact, many investment groups involved in land grabbing may have multinational investors or investors whose nationality is not disclosed. Nevertheless, there are probably cases in which land and water grabbing are not merely conducted as a form of private investment, but might involve foreign governments. One such example is the above-mentioned case of Saudi Arabia, in which the Saudi government actively encouraged and helped Saudi investors to acquire agricultural land in Africa. While perusing the list of the top 25 countries in which land- and water-grabbing investors are based, one cannot help but notice that it contains a number of Middle Eastern countries that are themselves experiencing severe water stress and scarcity, such as Saudi Arabia, Qatar, the United Arab Emirates or Israel. Transferring their water burden to Africa by acquiring agricultural land would allow them to preserve their own water resources and may indeed be of strategic value to these countries. However, the precise degree of government involvement in these investment decisions often remains unclear.
The paper by Rulli and colleagues is an important reminder of how land grabbing and water grabbing are entwined and that land grabbing could potentially deplete valuable water resources of under-developed countries, especially in Africa, which accounts for more than half of the globally grabbed land. Even villagers who continue to own and farm their own land adjacent to the large-scale farms on grabbed lands could be affected by new forms of water stress, especially if the foreign investors decide to maximally irrigate the acquired land. There are some key limitations to the study, such as the lack of distinction between private foreign investors and foreign governments engaged in land grabbing, and the fact that all the calculations of blue water grabbing are based on very broad estimates without solid data on how much blue water is actually consumed by the grabbed lands. These numbers may be very difficult to obtain, but they should be the focus of future studies in this area.
After reading this study, I have become far more aware of ongoing land and water grabbing. Excessive commodification of our lives was already criticized by Karl Polanyi in 1944, and now that water is also becoming a “fictitious commodity”, we have to be extremely watchful of the consequences. The land grabbing that has already taken place is quite extensive. An interactive map based on the GRAIN database allows us to visualize the areas of the world that have been most affected by land grabbing since 2006, as well as where the foreign investors are located. The map shows that in recent years, Pakistan has emerged as one of the prime targets of land grabbing in Asia, while Sudan, South Sudan, Tanzania and Ethiopia are major targets of recent land grabbing in Africa. The world economic crisis and the recent food price crisis will likely increase the degree of land grabbing and the associated water grabbing. The targets of land grabbing are often countries with fragile economies, widespread poverty and significant malnourishment.
As a global society, we have to ensure that people living in these countries do not suffer as a consequence of land grabbing deals. The recent “Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security” released by the FAO are an important step in the right direction, because they attempt to provide food security for all, even when large-scale land acquisitions occur. However, they do not specify water access and they are, as the title reveals, “voluntary”. It is not clear who will abide by them. Therefore, we also need a complementary approach in which the clients of land-grabbing investment funds ask the fund managers to abide by the FAO guidelines and to do their utmost to ensure food security and water access for the general population of the grabbed lands. One specific example is that of the American retirement fund TIAA-CREF (Teachers Insurance and Annuity Association – College Retirement Equities Fund), which is one of the leading retirement providers for people who work in education, research and medicine. Investment in agriculture and land grabbing appears to be a priority for TIAA-CREF, but American educators and academics who use TIAA-CREF as their retirement fund could use their leverage to ensure socially conscientious investments. Even though land and water grabbing are becoming a major concern, the growing awareness of the problem may also result in solutions that limit its negative impact.
Image Credits: Wikimedia - Drought by Tomas Castelazo / Wikimedia - The Union of Earth and Water by Rubens
Monday, December 10, 2012
There Was No Couch: On Mental Illness and Creativity
by Jalees Rehman
The psychiatrist held the door open for me, and my first thought as I entered the room was “Where is the couch?” Instead of the expected leather couch, I saw a patient lying on a flat operating table surrounded by monitors, devices, electrodes, and a team of physicians and nurses. The psychiatrist had asked me if I wanted to join him during an “ECT” for a patient with severe depression. It was the first day of my psychiatry rotation at the VA (Veterans Affairs Medical Center) in San Diego, and as a German medical student I was not yet used to the acronymophilia of American physicians. I nodded without admitting that I had no clue what “ECT” stood for, hoping that it would become apparent once I sat down with the psychiatrist and the depressed patient.
I had big expectations for this clinical rotation. German medical schools allow students to perform clinical rotations during their final year at academic medical centers overseas, and I had been fortunate enough to arrange a psychiatry rotation in San Diego. The University of California, San Diego (UCSD) and the VA in San Diego were known for their excellent psychiatry program, and there was the added bonus of living in San Diego. Prior to this rotation in 1995, most of my exposure to psychiatry had taken the form of medical school lectures, theoretical textbook knowledge and rather limited contact with actual psychiatric patients. This may have been part of the reason why I had a rather naïve and romanticized view of psychiatry. I thought that the mental anguish of psychiatric patients would foster their creativity and that they were somehow plunging from one existentialist crisis into another. I was hoping to engage in witty repartee with these creative patients and to learn from their philosophical insights about the actual meaning of life. I imagined that interactions with psychiatric patients would be similar to those I had seen in Woody Allen’s movies: a neurotic but intelligent artist or author sitting on a leather couch and sharing his dreams and anxieties with his psychiatrist.
I quietly stood in a corner of the ECT room, eavesdropping on the conversations between the psychiatrist, the patient and the other physicians in the room. I gradually began to understand that “ECT” stood for “Electroconvulsive Therapy”. The patient had severe depression and had failed to respond to multiple antidepressant medications. He would now receive ECT, commonly known as electroshock therapy, a measure reserved for only very severe cases of refractory mental illness. After the patient was sedated, the psychiatrist initiated the electrical charge that induced a small seizure in the patient. I watched the arms and legs of the patient jerk and shake. Instead of participating in a Woody-Allen-style discussion with a patient, I had ended up in a scene reminiscent of “One Flew Over the Cuckoo's Nest”, a silent witness to a method that I thought was both antiquated and barbaric. The ECT procedure did not take very long, and we left the room to let the sedation wear off and give the patient some time to rest and recover. As I walked away, I realized that my ridiculously glamorized image of mental illness was already beginning to fall apart on the first day of my rotation.
During the subsequent weeks, I received an eye-opening crash course in psychiatry. I became acquainted with DSM-IV, the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders, which was the sacred scripture of American psychiatry according to which mental illnesses were diagnosed and classified. I learned that ECT was reserved for the most severe cases, and that a typical patient was usually prescribed medications such as anti-psychotics, mood stabilizers or anti-depressants. I was surprised to see that psychoanalysis had gone out of fashion. Depictions of the USA in German popular culture and Hollywood movies had led me to believe that many, if not most, Americans had their own personal psychoanalysts. My psychiatry rotation at the VA took place in the mid-1990s, the boom time for psychoactive medications such as Prozac and the concomitant demise of psychoanalysis.
I found it exceedingly difficult to work with the DSM-IV and to appropriately diagnose patients. The two biggest obstacles I encountered were a) determining cause-effect relationships in mental illness and b) distinguishing between regular human emotions and true mental illness. The DSM-IV criteria for diagnosing a “Major Depressive Episode” included depressive symptoms such as sadness or guilt which were severe enough to “cause clinically significant distress or impairment in social, occupational, or other important areas of functioning”. I had seen a number of patients who were very sad and had lost their jobs, but I could not determine whether the sadness had impaired their “occupational functioning” or whether they had first lost their job and this had in turn caused profound sadness. Any determination of causality was based on the self-report of patients, and their memories of event sequences were highly subjective.
The distinction between “regular” human emotions and mental illness was another challenge for me, and the criteria in the DSM-IV manual seemed so broad that what I would have considered “sadness” was now being labeled as a Major Depression. A number of patients I saw had severe mental illnesses such as depression, a condition so disabling that they could hardly eat, sleep or work. The patient who had undergone ECT on my first day belonged to that category. However, the majority of patients exhibited only some impairment in their sleep or eating patterns and experienced a degree of sadness or anxiety that I had seen in myself or my friends. I had considered transient episodes of anxiety or unhappiness part of the spectrum of human emotional experience. The problem I saw with the patients in my psychiatry rotation was that they were not only being labeled with a diagnosis such as “Major Depression”, but were then prescribed antidepressant medications without any clear plan to ever take them off the medications. By coincidence, that year I met the forensic psychiatrist Ansar Haroun, who was also on the faculty at UCSD and was able to help me with my concerns. Due to his extensive work in the court system and his rigorous analysis of mental states for legal proceedings, Haroun was an expert on causality in psychiatry as well as on the definition of what constitutes a truly pathological mental state.
Regarding the issue of causality, Haroun explained to me that the complexity of the mind and of mental states makes it extremely difficult to clearly define cause-effect relationships in psychiatry. In infectious diseases, for example, specific bacteria can be identified by laboratory tests as causes of a fever. The fever normally does not precede the bacterial infection, nor does it cause the bacterial infection. The diagnosis of mental illnesses, on the other hand, rests on subjective assessments of patients and is further complicated by the fact that there are no clearly defined biological causes or even objective markers of most mental illnesses. Psychiatric diagnoses are therefore often based on patterns of symptoms and a presumed causality. If a patient exhibits symptoms of a depressed mood and has also lost his or her job during that same time period, psychiatrists have to determine whether the depression was the cause of losing the job or whether the job loss caused the depressive symptoms. In my limited experience with psychiatry and the many discussions I have had with practicing psychiatrists, it appears that the leeway given to psychiatrists to assess cause-effect relationships may result in an over-diagnosis of mental illnesses or an over-estimation of their impact.
I also learnt from Haroun that the question of how to distinguish the spectrum of “regular” human emotions from actual mental illness had given rise to a very active debate in the field of psychiatry. Haroun directed me towards the writings of Thomas Szasz, who was a brilliant psychiatrist but also a critic of psychiatry, repeatedly pointing out the limited scientific evidence for diagnoses of mental illness. Szasz’s book “The Myth of Mental Illness” was first published in 1961 and challenged the foundations of modern psychiatry. One of his core criticisms was that his colleagues had begun to over-diagnose mental illnesses by blurring the boundaries between everyday emotions and true diseases. Every dis-ease (discomfort) was being turned into a disease that required a therapy. The reasons for this overreach were manifold, ranging from society and the state trying to regulate what counted as acceptable or normal behavior to psychiatrists and pharmaceutical companies that would benefit financially from the over-diagnosis of mental illness. An excellent overview of his essays can be found in his book “The Medicalization of Everyday Life”. Even though Szasz passed away earlier this year, psychiatrists and researchers are now increasingly voicing their concerns about the direction that modern psychiatry has taken. Allan Horwitz and Jerome Wakefield, for example, have recently published “The Loss of Sadness: How Psychiatry Transformed Normal Sorrow into Depressive Disorder” and “All We Have to Fear: Psychiatry's Transformation of Natural Anxieties into Mental Disorders”. Unlike Szasz, who went as far as denying the existence of mental illness, Horwitz and Wakefield have taken a more nuanced approach. They accept the existence of true mental illnesses, admit that these illnesses can be disabling and acknowledge that patients who are afflicted by mental illnesses do require psychiatric treatment.
However, Horwitz and Wakefield criticize the massive over-diagnosis of mental illness and point out the need to distinguish true mental illnesses from normal sadness and anxiety.
Before I started my psychiatry rotation in San Diego, I had been convinced that mental illness fostered creativity. I had never really studied the question in much detail, but there were constant references in popular culture, movies, books and TV shows to the creative minds of patients with mental illness. The supposed link between mental illness and creativity was so engrained in my mind that the word “psychotic” automatically evoked images of van Gogh’s paintings and other geniuses whose creative minds were fueled by the bizarreness of their thoughts. Once I began seeing psychiatric patients who truly suffered from severe, disabling mental illnesses, it became very difficult for me to maintain this romanticized view. People who truly suffered from severe depression had difficulty even getting out of bed, getting dressed and meeting their basic needs. It was difficult to envision someone suffering from such a disabling condition writing large volumes of poetry or analyzing the data from ground-breaking experiments. The brilliant book “Creativity and Madness: New Findings and Old Stereotypes” by Albert Rothenberg helped me understand that the supposed link between creativity and mental illness was primarily based on myths, anecdotes and a selection bias in which the creative accomplishments of patients with mental illness were glorified and attributed to the illness itself. Geniuses who suffered from schizophrenia or depression were not creative because of their mental illness but in spite of it.
I began to realize that the over-diagnosis of mental illness and the departure from rigorous causality that had become characteristic of contemporary psychiatry also helped foster the myth that mental illness enhances creativity. Many beautiful pieces of literature or art can be inspired by emotional states such as the sadness of unrequited love or the death of a loved one. Creativity is often a response to a state of discomfort or dis-ease, an attempt to seek out comfort. However, if definitions of mental illness are broadened to the extent that nearly every such dis-ease is considered a disease, one can easily fall into the trap of believing that mental illness indeed begets creativity. With respect to establishing causality, Rothenberg found that, contrary to the prevailing myth, mental illness was actually a disabling condition that prevented creative minds from completing their artistic or scientific tasks. A few years ago, I came across “Poets on Prozac: Mental Illness, Treatment, and the Creative Process”, a collection of essays written by poets who suffer from mental illness. The personal accounts of most of the poets suggest that their mental illnesses did not help them write their poetry, but actually acted as major hindrances. It was only when their illness was adequately treated and they were in a state of remission that they were able to write poems. A recent comprehensive analysis of studies that attempt to link creativity and mental illness can be found in the excellent textbook “Explaining Creativity: The Science of Human Innovation” by Keith Sawyer, who concludes that there is no scientific evidence for the claim that mental illness promotes creativity. He also points to a possible origin of this myth:
The mental illness myth is based in cultural conceptions of creativity that date from the Romantic era, as a pure expression of inner inspiration, an isolated genius, unconstrained by reason and convention.
I assumed that the myth had finally been laid to rest, but, to my surprise, I came across the headline Creativity 'closely entwined with mental illness' on the BBC website in October 2012. The BBC story was referring to the large-scale Swedish study “Mental illness, suicide and creativity: 40-Year prospective total population study” by Simon Kyaga and his colleagues at the Karolinska Institute, published online in the Journal of Psychiatric Research. The BBC news report stated “Creativity is often part of a mental illness, with writers particularly susceptible, according to a study of more than a million people” and continued:
Lead researcher Dr Simon Kyaga said the findings suggested disorders should be viewed in a new light and that certain traits might be beneficial or desirable.
For example, the restrictive and intense interests of someone with autism and the manic drive of a person with bipolar disorder might provide the necessary focus and determination for genius and creativity.
Similarly, the disordered thoughts associated with schizophrenia might spark the all-important originality element of a masterpiece.
These statements went against nearly all the recent scientific literature on the supposed link between creativity and mental illness and once again rehashed the tired, romanticized myth of the mentally ill genius. I was puzzled by these claims and decided to read the original paper. There was the additional benefit of learning more about the mental health of Swedes, because my wife is a Swedish-American. It never hurts to know more about the mental health or the creative potential of one’s spouse.
Kyaga’s study did not measure creativity itself, but merely assessed correlations between self-reported “creative professions” and the diagnoses of mental illness in the Swedish population. Creative professions included scientific professions (primarily scientists and university faculty members) as well as artistic professions such as visual artists, authors, dancers and musicians. The deeply flawed assumption of the study was that if an individual has a “creative profession”, he or she has a higher likelihood of being a creative person. Accountants were used as a “control”, implying that being an accountant does not involve much creativity. This may hold true for Sweden, but the creativity of accountants in the USA has been demonstrated by the recent plethora of financial scandals. The size of the Kyaga study was quite impressive, involving over one million patients and collecting data on the relatives of patients. The fact that Sweden has a total population of about 9.5 million and that more than one million of its adult citizens are registered in a national database as having at least one mental illness is both remarkable and worrisome.
The main outcome was the likelihood that patients with certain mental illnesses such as depression, schizophrenia or anxiety disorders were engaged in a “creative profession”. The results of the study directly contradicted the BBC hyperbole:
We found no positive association between psychopathology and overall creative professions except for bipolar disorder. Rather, individuals holding creative professions had a significantly reduced likelihood of being diagnosed with schizophrenia, schizoaffective disorder, unipolar depression, anxiety disorders, alcohol abuse, drug abuse, autism, ADHD, or of committing suicide.
Not only did the authors fail to find a positive correlation between creative professions and mental illnesses (with the exception of bipolar disorder), they actually found the opposite of what they had suspected: Patients with mental illnesses were less likely to engage in a creative profession.
Their findings do not come as a surprise to anyone who has been following the scientific literature on this topic. After all, the disabling features of mental illness make it very difficult to maintain a creative profession. Kyaga and colleagues also presented a contrived subgroup analysis to test whether there was any group within the “creative professions” that showed a positive correlation with mental illness. It appears contrived because they only broke down the artistic professions and did not perform a similar analysis for the scientific professions. Among all these subgroup analyses, the researchers found a positive correlation between the self-reported profession ‘author’ and a number of mental illnesses. However, they also found that the other artistic professions did not show such a positive correlation.
How the results of this study gave rise to the blatant misinterpretation reported by the BBC that “the disordered thoughts associated with schizophrenia might spark the all-important originality element of a masterpiece” is a mystery in itself. It shows the power of the myth of the mad genius and how myths and convictions can tempt us to misinterpret data in a way that maintains the mythic narrative. The myth may also be an important component in the attempt to medicalize everyday emotions. The notion that mental illness fosters creativity could make the diagnosis more palatable. You may be mentally ill, but don’t worry, because it might inspire you to paint like van Gogh or write poems like Sylvia Plath.
A study of the prevalence of mental illness published in the Archives of General Psychiatry in 2005 estimated that roughly half of all Americans will have been diagnosed with a mental illness by the time they reach the age of 75. This estimate was based on the DSM-IV criteria for mental illness, but the newer DSM-V manual will be released in 2013 and is likely to further expand the diagnosis of mental illness. The DSM-IV criteria had made an allowance for bereavement, to avoid diagnosing people who were profoundly sad after the loss of a loved one with the mental illness depression. This bereavement exemption will likely be removed from the new DSM-V criteria, so that the diagnosis of major depression can be made even during the grieving period. The small group of patients who are afflicted with disabling mental illness do not find their suffering glamorous. A large number of patients are experiencing normal sadness or anxiety and end up being inappropriately diagnosed with mental illness using broad and lax criteria of what constitutes an illness. Are these patients comforted by romanticized myths about mental illness? The continuing over-reach of psychiatry in its attempt to medicalize emotions, supported by a pharmaceutical industry that reaps large profits from this over-reach, should be of great concern to all of society. We need to wade through the fog of pseudoscience and myths to consider the difference between dis-ease and disease and the cost of medicalizing human emotions.
Image Credit: Wikimedia Commons Public Domain ECT machine (1960s) by Nasko and Self-Portrait of van Gogh.
Monday, August 20, 2012
The Rats of War: Konrad Lorenz and the Anthropic Shift
What we might remember most about the London 2012 Olympics are the medal ceremonies. The proud, the tearful, the exhausted, the awestruck, the lip-syncing, and occasionally the unimpressed. We might also call to mind the relative equanimity with which silver and bronze medalists tolerated the national anthems of the winning nation. Nobel laureate Konrad Lorenz (1903-1989), an Austrian zoologist and co-founder with Niko Tinbergen of the field of ethology – the biology of behavior – remarked in his popular book On Aggression (1966) that the Olympic Games are the only occasion when the playing of the anthem of another nation does not arouse hostility. Athletic ideals of fair play and chivalry, he said, balance out national enthusiasm. Olympic sports, you see, have all the virtues of war without all that unpleasant killing and plundering and, importantly, without aggravating international hatred. To serve as a surrogate for war, Olympic sports should be as dangerous as possible and should call for a measure of self-sacrifice. This being the case, one wonders why jousting is not an Olympic sport. Perhaps NBC simply chose not to screen it.
The destructive intensity of the aggressive drive that propels us to war is mankind’s hereditary evil, as Lorenz termed it, and its evolutionary origins can be sought in tribal conflict. In the early Stone Age, inter-tribal skirmishes would have paid out some evolutionary dividends: dispersion of the population, selection of the strong and, especially, defense of the brood. But in more contemporary times, having overcome our most immediate environmental limitations – that is, not for the most part starving or being prey items – and now that we are equipped with weapons, a more dangerous, indeed an “evil”, intra-specific selection prevails. What was once healthy for the species in the form of an instinctive behavior called “militant enthusiasm” has now turned pathological.
Lorenz’s analysis was based upon a lifetime spent studying a variety of animals, though he is especially known for his work on birds. Together with Tinbergen and other classical ethologists he proposed several important hypotheses: behaviors come in constellations of instinctive activities called fixed action patterns; these are released by specific stimuli; the behaviors should be regarded as adaptive responses shaped by evolutionary forces; and the adoption of certain behaviors can be phase-specific, occurring at certain life stages – for instance, imprinting, where young Graylag goslings instinctively follow their parents, even if the parent is substituted by Lorenz himself! When in 1973 Konrad Lorenz, Niko Tinbergen and Karl von Frisch were awarded the Nobel Prize in Physiology or Medicine for the development of ethology, it was recognized that they had created a new science. However, in addition to shedding light on the behavior of lower animals, it had implications for “social medicine, psychiatry, and psychosomatic medicine”. If this new discipline had no conceivable bearing on an understanding of the human condition, it is unlikely that the ethologists would have won a Nobel Prize.
Ethology’s shift from a basic zoological discipline to an applied one was not without controversy among its practitioners, some of whom wanted to restrict it to fundamentals for a more extended period. However, there is, it seems, a special, apparently inevitable, moment in works on animal behavior where the author switches from their account of chimps, bees, fishes, geese, rats or another favored organism and tells us what it means to be human. I call this the anthropic shift. The behavior of the human animal need not be an area of particular expertise for the author; the switch is presumed to be validated by the evolutionary continuity of humans with other animals.
An inclination toward an anthropic shift is anticipated in the work of Charles Darwin. Although the implications of natural selection for humans occupied Darwin for some time before the publication of On the Origin of Species (1859), humans are scarcely mentioned in that volume. It took Darwin more than a decade to publish his own version of the anthropic shift, which he eventually did in The Descent of Man, and Selection in Relation to Sex (1871) and in The Expression of the Emotions in Man and Animals (1872). One could call this the classic anthropic shift – the author waits a respectful period of time before pronouncing on human affairs.
There are some early attempts in Lorenz’s work to make the implications of his studies of the specific behavior of specific organisms apparent for humans, including, infamously, his attempts to reconcile his science with the aims of National Socialism (which I discuss here). It is in On Aggression, the work of his maturity, that there is a full flowering of his thoughts on human behavior and misbehavior. Although this book is dominated by observations of other animals, Lorenz reserves the final chapters of On Aggression for his assessment of human affairs. This version of writing the anthropic shift – the succinct but confident summary of the implications of the study of other animals for human affairs – is characteristic of our age, where the scientist has lost all bashfulness in opining on human nature.
In what follows I summarize Lorenz’s diagnosis of the human condition, our current predicament and the remedies he suggested grounded in ethological principles. In the Lorenzian anthropic shift he is attentive to our aggressive tendencies especially the instinctive behavior that he calls militant enthusiasm. If the lessons learned from an ethological inspection of lower animals are correctly applied we might just be able to avert a global catastrophe. Some time soon, no doubt.
An unbiased observer from another planet, reflecting on human behavior from a perch close enough to capture the broad strokes of human conduct but far enough away not to sweat the details of our separate behaviors, would surmise that we are rats. Or so Lorenz concluded in On Aggression. The extraterrestrial would infer this based upon the observation that both rats and humans are “social and peaceful beings within their clans, but veritable devils towards all fellow-members of their species not belonging to their own communities.” Our Martian would have more optimism about the future of rats than humans, says Lorenz, since rats stop reproducing when a state of overcrowding is reached. We do not.
Lorenz provided an edifying, if somewhat chilling, account of rat group-on-group violence, much of which seemingly was worked out in experimental arenas. The work is mainly from one F. Steiniger and summarized by Lorenz. Steiniger found that when rats were introduced into an enclosure, aggression grew incrementally after a period of wariness. Once pair formation between male and female rats occurred, violence escalated, and within a couple of weeks a mated couple typically killed all other residents. Death often came to a rat in the form of peritoneal sepsis – a rat dies of a multitude of suppurating cuts. That being said, a skilled rat can deftly inflict a nip on the carotid artery. Exhaustion and nervous overstimulation leading to adrenal gland disruption were another leading cause of death among beleaguered rats.
The basis of most groups of rats is the genetically related family – rat mothers, rat fathers, rat grandparents, rat siblings and rat cousins all getting along in mutual accord. Tender and considerate are rats to members of their family group. Larger animals will, for example, “good humouredly allow smaller one to take pieces of food away from them.” In matters of reproduction they’ll generously step aside and let “half- and three-quarter grown animals…take precedence of the adults.” An intruder, however, is not treated so solicitously; intruders are routed rapidly and killed by bites. Since rats identify family members by smell, the experimenter can manipulate the odor of an animal and turn a beloved family member into a threatening intruder. Grandpa had never been so bewildered. In one such experiment Lorenz assured the reader – though with a note of apology to the biologist who, one supposes, will want to view the spectacle to its ghastly end – that the experimental animal was spared its fate and removed into protective custody.
On viewing humans and rats, Lorenz’s extraterrestrial may find these species indistinguishable because aspects of their social behavior are so head-scratchingly difficult to fathom. Group hatred between rat-clans and the human appetite for war seem inexplicable viewed functionally. Because of the difficulty of deriving an evolutionary explanation for rat-on-rat attacks from the perspective of natural selection, Lorenz obliquely speculated that rat-clan gang fights are the outcome of sexual selection (selection based on differential mating success), where there is “grave danger that members of a species may in demented competition drive each other into the most stupid blind alley of evolution.” But Lorenz is equivocal here, conceding that unknown external factors may still be at work. “It is quite possible”, he concluded, that “group hate between rat-clans is really a diabolical invention which serves no good purpose.” That being said, he seems more confident that human group loyalty and generosity arose from tribal conflict. That rat and human tribes evolved cooperative tactics in the face of inter-group conflict – a group selection argument – has fallen out of favor with evolutionary biologists and is the basis for some of the criticism leveled at Lorenz. “The trouble with these books [the books of Lorenz and some other ethologists]”, Richard Dawkins fulminated in The Selfish Gene (1976), “is that their authors got it totally and utterly wrong because they misunderstood how evolution works”.
Humanity’s greatest paradox is that those gifts which we treasure above all others, our braininess and our capacity for speech, are the very ones which may bring about our extinction. We have, says Lorenz, been driven “out of the paradise in which [we] could follow [our] instincts with impunity.” Our evolutionarily derived capacity for culture confers on humans a facility for rapid change. What we gained with this capacity outstripped the limited injunctions we have against employing it in those circumstances when we should not. Our aggravated competence in mayhem – aggression against others and destruction of the environment – is not sufficiently kept in check. A centerpiece of Lorenz’s claim, one that he repeats in several books, is that species which in the ordinary course of matters have a limited capacity to inflict damage on conspecifics have a correspondingly feeble inhibition against killing. When a dove is trapped with another dove it has no phylogenetically derived compunction against gouging its peaceful neighbor to death. So it is with humans and their rapidly evolving capacity for mischief. We are like a dove that “suddenly acquired the beak of a raven”. We don’t know how to turn the killer off, because we’ve never really had to before.
Lorenz may not have been the first to formulate the thesis that although we are certainly of nature, subject to the same evolutionary laws as other species, we have nonetheless been spat out of nature as a consequence of the forces of cultural flexibility. Paul Sears, the American ecologist, wrote in a similar vein in the late 1950s: “With the cultural devices of fire, clothing, shelter, and tools [Man] was able to do what no other organism could do without changing its original character. Cultural change was, for the first time, substituted for biological evolution as a means of adapting an organism to new habitats in a widening range that eventually came to include the whole earth.”
Now, the human aptitude for carnage may have swollen beyond the easy reach of our inhibitions, but that does not mean that such moral inhibitions do not exist. Nor does it mean that we cannot amplify them. Balancing our aggression against others is our capacity for love and forbearance within the clan. What Lorenz has in mind is not the coolly rational morality of a Kantian categorical imperative. (Lorenz was, by the by, one of the inheritors of Kant’s professorial chair at the University of Königsberg.) The love of which Lorenz speaks is a phylogenetically inherited moral regard for one another. The fate of humanity, Lorenz said, rests on whether this instinct can cope with “its growing burden.”
Manning the defensive walls alongside moral responsibility is our “phylogenetically programmed” love of custom. Institutionalized ritual and custom act like a skeleton around which a culture develops. Specific rituals are passed from generation to generation. Of course, custom can be irrational and may misfire, as it does in the case of “jeering at a fat boy” (Lorenz’s example). Grosser errors still can arise from customs associated with warrior culture, adaptive at one time but obsolete in present ecological and sociological circumstances.
Lorenz cautioned against the unconsidered elimination of cultural components, even in the case of “mild reciprocal head hunting” (apparently Margaret Mead’s term). This is because culture develops as an integrated whole: what assembles together sunders together – so goes the theory. A possible source of cultural unraveling comes from the mixing of cultures. This was an argument that Lorenz had insisted upon since the 1930s, when he first pronounced it in a publication calculated to show a resonance between his work and National Socialism. At the time of receiving the Nobel Prize he apologized for his naivety, an apology that satisfied some colleagues but certainly not all. The argument remained intact in On Aggression. But in addition to the temptation to deliberately remove unfortunate cultural attributes, elements of culture were unraveling, as Lorenz saw it, under the influence of a break in the traditional intergenerational transmission of information. He dates an especially major shift to about 1900. After this, kids stopped listening to parents and teachers.
A detailed examination of the case of militant enthusiasm is the centerpiece of Lorenz’s anthropic shift. Enthusiasm, for short, is “a specialized form of communal aggression”, but this behavior interacts with culturally ritualized activities and thus may be controlled by rational insight. In other words, while there is nothing we can do to ablate enthusiasm from our behavioral repertoire, the eye may still mist during the National Anthem yet Olympians are disinclined to jump one another. In fact, this is the nub of the matter: aggression is rooted so deep that it attaches to those things most dear to us. The conclusion from this is that man (Lorenz wrote at a time when “man” stood in unblushingly for all of humankind) is Janus-headed, with an evolutionarily endowed potential to commit to all sorts of noble things, yet meanwhile ready to dispatch his brother for the sake of these same values.
Lorenz’s solutions to the problems of aggression, set out so elaborately in On Aggression, are disarmingly simple; banal, in fact, is his word for them. So simple that one senses he worried that one might not, after all, have needed all that ethological labor to propose them. There are four solutions: know thyself, ethologically; cathartically sublimate the aggressive (and libidinous) drives; promote international friendship; and, most importantly, channel militant enthusiasm into just causes. En passant, he advises against the mere suppression of instincts, since aggression builds up hydraulically (an analogy in Lorenz that links him to Sigmund Freud); it cannot long be controlled. You may be glad to learn that eugenic planning is excluded as highly inadvisable. He is also enthusiastic about the role of humor in puncturing the pretensions of those who might lead us along false paths (“we do not as yet take humour seriously enough”).
In his roster of solutions international sport figures prominently as an opportunity to discharge aggressive instincts. The discharge of that particular form of aggression, militant enthusiasm, can be achieved by redeploying it to causes as diverse as civil rights, the prevention of war (though not, admittedly, as appealing as war itself), and the “three great enterprises” of art, science, and medicine.
Lorenz ended On Aggression on a note of optimism. “I believe”, he wrote, “that reason can and will exert a selection pressure in the right direction. I believe that this, in the not too distant future, will endow our descendants with the faculty of fulfilling the greatest and most beautiful of all commandments.”
In 1975, when E. O. Wilson published his groundbreaking and controversial book Sociobiology: The New Synthesis, he predicted that ethology would simply be subsumed by sociobiology, behavioral ecology, neurophysiology, and psychology. In fact, by the time the ethologists won their Nobel Prize in 1973 the phase of classical ethology was over. So many of the foundational concepts of Lorenz and Tinbergen had fallen into disuse that later in his life a note of exasperation crept into Lorenz’s writing. Thus the apparatus with which Lorenz reached his conclusions was considered largely unnecessary by contemporary students of human behavior.
This does not mean that Lorenz was wrong. Few biologists might contradict a conclusion that aggression has an instinctive component and that an evolutionary understanding of aggression can contribute to solutions. Nor might many be averse to learning about the nature of war from rats. Nevertheless, extending ethology to humans with a confidence seen in Lorenz’s work might strike many as hubristic. Indeed, it is clear that Niko Tinbergen thought so, and he remained more modest in his claims. But at the end of the day all anthropic shifts may be hubristic, even if such claims are accompanied by that most charming cousin of hubris: unbounded optimism.
It may be apparent to some readers of this piece that there exists an extravagant parallel between Lorenz’s On Aggression and E. O. Wilson’s new book The Social Conquest of Earth (2012). Like many writers of the anthropic shift, both have an expertise in “lower organisms” (Wilson famously is an ant guy); both invoke a group selection hypothesis to explain altruism and loyalty within human tribes; both think that the aggression that leads to war is our hereditary curse (Wilson) or evil (Lorenz); both think that the better and lesser aspects of our natures are at war with one another; both have invoked the wrath of Richard Dawkins in almost identical fashion; both have unbridled optimism about the future, if only we listen to them. This is not the place to explore these similarities, though I encourage you to read both books and, if you care to, to join us in conversation about them (see here).
The anthropic shift, the compulsion to draw upon evolutionary insights from other organisms and bring them to bear on the human condition, is solid, it seems to me, and both Lorenz and Wilson have important things to say. Nevertheless, the zoological approach taken alone – without insights from the humanistic disciplines, from the social sciences that are committed directly to the study of humans, or from the arts – offers us quite little. After all, global events since Lorenz wrote On Aggression suggest that his formula was either unheeded or unworkable on a scale that matches the immensity of our problems. Wilson seems to acknowledge this, and makes enthusiastic noises about interdisciplinarity while also noting that pure philosophy has “abandoned the foundational questions about human existence.” The responses from both within and beyond his academic discipline nevertheless seem aggressively hostile to his latest attempt to save humankind. Jousting never looked more lethal.
[Note: I was given a copy of On Aggression by my mother as a requested Christmas gift when I was 19. It has, therefore, taken me 30 years to write about it. At this rate I’ll have a piece of writing on Infinite Jest in 2042.]
It’s been pointed out that doves do not in fact behave as Lorenz repeatedly asserted they do, that is, torture a neighbor to death when that unfortunate neighbor cannot escape.
Sears, P. B. 1957. The Ecology of Man. [Oregon State System of Higher Education, Condon Lectures.] Eugene, OR: University of Oregon Press.
Monday, May 28, 2012
When the Fruit Ripens Seed Scatters: Notes towards a History of Motility
Quum fructus maturus semina dispergat. Linnæus, Philosophia Botanica, 1751
1. In The Beginning Was the Verb
In the beginning was the Verb, and the Verb was with God, and the Verb set all things in motion. More than just any Word (Latin verbum, word) the God who is, was, and shall be a Verb commuted motion of an Absolute form to Relative Motion. In the universe created of the Verb everything moves; absolutes have no meaning.
And some things rose and other things fell. Those which rose remained in constant motion until impeded and of those which fell some acquired spontaneous motion. These self-moved movers, called motile, include some cells, spores, the quadrupeds, and the bipeds. The Philosopher studied the motile keenly, since the prime mover and all that had risen remained less accessible to knowledge. Since the self-moved require the unmoving for motion they must themselves be, he concluded, comprised of a series of both fixed and moving parts at the seat of which is an unmoved mover – the animal soul. In this way the motile mimic the first mover.
Living things move, and they share this characteristic with every other thing; stasis, that is, can only ever be relative stasis. Movement differs from motility in as much as the latter, in its most fully expressed form, is movement where a purpose that goads, a desire that compels, and a body that advances converge.
2. Arise and Be Bipedal
Humans possess an unusual form of bipedality technically called walking. Walking emerged earlier than did a brain large enough to befuddle us regarding our destination or pensive enough to cogitate on walking’s origins. It is the oldest of our peculiarities, and the process and its origins remain fruitfully perplexing. As engineer Tad McGeer, designer of passive walking machines, wrote more than a couple of decades ago: “Today we can build machines to travel beyond the other planets, yet we do not really understand how we move about on our own two legs.” But there is no shortage of bright ideas about the phenomenon. Like other bipedalisms (that, for instance, of dinosaurs, birds, lizards, kangaroos, ostriches, and even cockroaches when one provokes them appropriately) walking merits examination from an energetics perspective. Energy spent on slower movement (compared to running, that is) is reimbursed by the energetics of pendular action: a leg swings out from the hips, followed by the succeeding leg, as the first leg performs an inverted pendular motion from heel to toe. All accompanied by arm swinging. Sporting a jaunty hat remains a human innovation. Thus a series of fixed and moving parts propels the animal along with relatively little energy wasted. All bipeds are Aristotelian, though for the most part unwittingly so.
Of certain squabbles it can be said that they are productive without being settled; of others that they are unsettling without being productive. Questions concerning human origins remain both unsettled and unsettling. While considerations of energetic efficiency, especially over longer distances, point to a selective advantage for walking, nevertheless there is little agreement on what the most parsimonious explanation might be. Walking frees up the hands for foraging, for carrying the children, it provides the tropical sun with a diminished target and thus may be thermodynamically recommended and so forth.
Hominins have walked the earth for four million years or so. Four million years of ambulating with purpose. Since things did not come to us, we marched off to them. That is, human mobility, however it was achieved, and to whatever selective pressure it was a response, was always a walking to. Food goaded, human appetites compelled, and an erect body complied.
3. Let Them (foodstuffs) Come Onto Me
Though a person might well walk and chew gum at the same time, it’s unlikely that she will walk and write at the same time. Nietzsche’s aphorisms may be the closest we have to mobilography – writing born on the hoof. Writing may overcome space and time but it also, with consequences, impedes movement. History, therefore, is a report by the sedentary (Latin sedēre, to sit) written for the stationary. Not surprisingly, academic disquisitions prioritize fixity over mobility. Even the lives of nomads have typically been characterized as fanning out from an immobile sacred center.
Sedentarism is a plant’s revenge. The late Peter Wilson, the New Zealand anthropologist, in his now classic account of the origins of architecture, The Domestication of the Human Species, pointed out that while we were busy domesticating plants and animals, they were reciprocating by domesticating us. We fumbled around with their edible reproductive parts; they conferred upon us their rootedness. So, permanent architectural structures and the Neolithic revolution coincide in their origins. Both the domestication of creatures and the setting up of a domicile called for a settling down – a cessation of movement that, though not absolute, was decisive. Agnostic though one might be about the progressive nature of the agricultural revolution, nonetheless the implications are such that civilization can be seen as a pimple on that revolution’s ample rump. On the basis of an agricultural productivity beyond the threshold of mere subsistence, the accoutrements of civilization emerged: a high degree of occupational specialization, writing, the growth of cities and so on. We traded mobility in the larger landscape for access to a larder. And even though our scholarly sensibilities may rail against so simple a dichotomy as nomadic versus sedentary lifestyles (and the correlates attendant to each), nonetheless one must resist being so refined as to reject a real discontinuity when we stumble across it.
Humans and their domesticated plants and animals have their place. In fact they make their place. Place, as the human geographers have told us, is space made personal. Proust’s madeleine – ten thousand years of post-agricultural history clarified and made delicious – conjured up an instant and a place, and not merely space-time co-ordinates (though it does that too). If the primordial ecology of our species was fashioned by traversing to things, the reversal involved in agriculture is that we are now bound to things in a place.
4. Though I Scattered Them Among the Nations
The sound of dehiscence is a barely audible pop. It is the process by which anthers, follicles, some fruits, spherules, pods and other biological capsules explode and release their mature contents. Less gloriously, the term is also reserved for the rupturing of a surgical wound, either superficially or completely, releasing the infected flesh from the strain of the suture. Whether the Great Dehiscence of the human population during the Age of Discovery can be considered a triumph or a calamity – the scattering of the matured human seed or a gangrenous discharge from an exploded wound – will, I suppose, depend on one’s perspective.
In the view of prehistorian Grahame Clark a distinctive attribute of humans is that they perceive the spatial and temporal dimensions of their environment more consciously and decisively than other animals. In freeing ourselves of some of our more immediate telluric constraints we extend a conception of space over progressively larger territory. Thus, Henry the Navigator (1394–1460), a Portuguese prince, exemplifies the esprit of early modern exploration. His achievements were more cerebral than swashbuckling. He recruited Arab scholars, Jewish merchants and mariners from around Europe to create maps that collated the most precise geographical information of the age. He encouraged changes in on-board instrumentation for calculating latitude. His fame, therefore, in some circles is more for his cerebrations concerning space than for his acumen in personally navigating it. Although he accumulated great wealth from West Africa for the Portuguese, he himself never joined in on an expedition there.
Less perfervidly, however, one might rename the Age of Discovery as the Age of Invasion, Conquest, and Occupation. Evaluated from this perspective Prince Henry appears more savage than savant. For example, he commissioned the design of the caravel, a vessel better equipped than the more traditional barca for traversing the treacherous waters of the West African coast. It was, of course, a craft perfectly suited to the task of plunder. The Portuguese made it as far as Cabo Branco (now, Ras Nouadhibou, Mauritania) in 1441. Within two years of this they were shipping back slaves to Portugal, a task for which the caravel was coincidentally well equipped. This was a defining early moment in the modern Atlantic slave trade.
The dehiscence of early modern Europe is thus a threshold event in the history of human motility. On the basis of the stored energy from domesticated plants and animals, and the subsequent accumulation of cultural ingenuity, social stratification, and the attrition of resources and landscapes, the merchant countries of Europe were ready by the 15th Century to teem across the globe.
Humans overcome the fear of being touched when they form a crowd, said Elias Canetti in Crowds and Power. An important moment in the genesis of a crowd comes when differences are discharged and all members are placed on an equal footing. But that happy moment is just an illusion – they are not equal. The thousands of years of human sedentary life were a lengthy gestation of the multitude, or a swarm. Now, in a bee swarm, apparently, the insects take off for a new nest site with only a few individuals knowing its location, yet these few guide the swarm to their new home. So it is with humans. The human swarm in the days of European exploration represented the migration of the many at the behest of the few. In this manner, contemporary migrations differ strikingly from the peregrinations of early bipedal hominins.
5. Take up your Gadgets Daily…
Three themes of contemporary life are the compression of space and time and the miniaturization of the object. The agricultural revolution compressed space by bringing the necessities of lives to our door; while also, it must be said, creating the door. The age of exploration and exploitation (which I term the European dehiscence) compressed time (and space) by making of our globe a more easily traversable marketplace. Finally, Steve Jobs compressed the object making gadgets that can flit around the now tinier globe in our hip pockets. And when I say Steve Jobs here, I naturally mean to perch him on the shoulders of the giants of miniaturization.
The miniaturization of technology and the portability of objects are part of an evolutionary progression, according to the Italian-born architect Paolo Soleri, whereby complexity increased over time, a trend which in turn, he thinks, should be linked to miniaturization. Arcology, Soleri’s name for his combination of architecture, urban planning, and ecology, is based upon the notion that large systems dissipate energy, but small ones conserve it. Arcosanti, the town being built (slowly, very slowly) according to Soleri’s designs, will occupy only two percent of the footprint of conventional towns of comparable size.
Miniaturization thus has two dominant flavors. One is consistent with environmental concerns, where we scale back some dimensions of the human enterprise. Since the global footprint of the 7 billion of us is now greater than the biocapacity of the globe (that is, we are living by drawing down natural capital), miniaturization is an ultimate objective of Soleri’s designs. The other trend provisions us with portable devices. If the physical plant is the symbol of industrial times, the iPod is the fruit of these…let’s call them post-industrial times – both terms have pleasing references to vegetation: the plant rooted, the pod prepared to dehisce and disperse.
Though one might think that the nanofication of devices gets us back to some sort of ur-technology – the tune-packed iPod as equivalent to the chipped flint in the hands of a hunter – the portable device is typically hiding its significant mass elsewhere (the entailments of production and waste). The conflicting trends in miniaturization can take us in two directions: the first is an environmentally motivated reduction that pulls us back within the limits of the planet; the second is a miniaturization that gets us off this planet. Interestingly, though, Elon Musk, a co-founder of SpaceX, whose craft, the Dragon, just docked with the International Space Station, stresses environmental concerns in touting multiplanetary life as a plan for guaranteeing human survival.
In his book The Invisible Pyramid (1970), written right after the first biped stepped onto the moon, Loren Eiseley contemplated the inner and outer space of humanity. In a chapter called “The Spore Bearers” he compares us to the fungus Pilobolus, whose countless spores are hurtled away from the capsule in which they matured. Though the story of humans in space may not have progressed as rapidly as some in 1970 may have predicted, it may yet be the case that our most unbridled motility is just ahead of us.
All things move; some things are motile; motile humans rose up and peregrinated across Pliocene savannas; a complicity with plants ended our peripatetic ways, and plant and man settled down; the relatively vast populations of the Old World dehisced and pullulated across the globe; contemporary humans conferred mobility on things that they formerly left behind; the human enterprise marched to the limits of the globe; some urge curtailment, while others watched optimistically as the SpaceX Dragon connected to the International Space Station
….and there shall be no night there; and they need no candle…
Photo Credit: The photograph of running legs is by Randall Honold. The editor generously donated the sperm. The idea for this piece came up during a conversation with my DePaul University Human Impacts on the Environment Class - those kids are the best!
Monday, February 06, 2012
The Human Peacock’s Ghastly Tail
“He was violent?”
She exhaled. “I don’t know. What’s ‘violent’ anymore? He was a teenage guy. Then, a guy in his twenties."
—Richard Powers, The Echo Maker
Once upon a time, there was an editor of a short-lived academic journal called Evolutiona Pathologica who was fired in disgrace. In an interview published after his dismissal, the editor, a notoriously fastidious man, reported that papers in his journal often had a pronounced impact on the field primarily because they were unsound; unsound in their conception, imperfect in their analysis, defective in their conclusions drawn from meager data, and inflated in the claims they made about their practical implications. The papers were often wide of the mark, he conceded, and even occasionally bonkers. Yet, many papers were masterpieces precisely because refuting the claims strengthened the subdiscipline of evolutionary pathology. Or so he said.
Recently, while archiving the material from the defunct journal, I reread the manuscript whose publication resulted in the editor’s dismissal. I also discovered an internal report on the dismissal that shed light on the case.
Before reproducing the offending paper – some of you, of course, will remember it well – I’ll remind you of some of the other mildly controversial pieces that appeared in the journal. For instance, in a rather famous special issue on the pathological origins and implications of bipedality, Professor J. P. X deRossa-Ellman made the celebrated claim that upright walking evolved to reduce the overstimulation of reflexology points on the hands and to intensify the quality of the massage on the feet. “As hominins shifted from an arboreal habitat,” deRossa-Ellman opined, “pressure on the hands, especially on the zones associated with the small intestines, inclined Australopithecines to a frightful gassiness. In contrast, the laudatory effects of passively massaging the feet by walking on the dewy grasses of the East African savannah produced a sense of well-being that disposed our primitive forebears to recreational coitus. Those more upright proto-humans joyously copulated, thus leading to increased fitness.” To the embarrassment of the journal it was later discovered that deRossa-Ellman ran a specialized massage parlor on the near North Side called “Strange Beginnings/Happy Endings”. He also did a brisk business selling “genuine savannah grass”. Apparently you could also smoke the stuff.
In another issue, on evolutionary patterns in the peoples of Ireland, a rather tartly written article appeared in which Dr. Quentin Yeatly-Bawn claimed that the evolution of the mesmerizingly large cranium of Irish men was an adaptation designed to distract the colonizing usurpers of that island nation from what an Irish man was doing to them with his hands. A response, which ran under the title “Q. Yeatly-Bawn Is Out of His Tiny Mind”, pointed out that though Irish foxes, stoats, and otters have large heads, Irish men were moderate in this respect at least.
These small skirmishes provoked only mildly negative responses compared to the more controversial piece, the one that triggered the editor’s removal, which I reproduce in full, although it reads in a fragmentary way. In the archived box of material associated with the journal I found several of the reviewers’ comments on the piece; I also provide excerpts of these. The paper was published anonymously, which may have been part of the problem.
Evolutiona Pathologica 5: 12-17 NOTES AND OPINIONS
Scary Bastards and Sexy Wreckers: A Short Note on the Sexual Selection of Environmental Destructiveness
Evolution is a reality-based game where the score is tabulated exclusively by the number of extra lives a player accumulates. Evolution occurs not because organisms desire to play but because successful players sire those who incline to continue the game under the same rules as their progenitor. Technically this is captured by the term “fitness” in evolutionary biology – a measure of a genotype’s reproductive success (rated by surviving progeny) compared with that of competing genotypes.
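The bookkeeping behind “fitness” can be made explicit. The note itself gives no formulas; the following is the standard textbook convention (not the author’s own), which normalizes reproductive success against the most successful competing genotype:

```latex
w_i = \frac{n_i}{n_{\max}}, \qquad s_i = 1 - w_i
```

Here \(n_i\) is the count of surviving progeny of genotype \(i\), \(n_{\max}\) is that of the most successful genotype, \(w_i\) is the relative fitness, and \(s_i\) is the selection coefficient against genotype \(i\).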
In addition to those characteristics of organisms that increase their ability to survive and reproduce, many organisms sport features and behaviors that may appear detrimental to their survival. The peacock’s tail is emblematic here – tail feathers so extravagantly developed that they largely confine the bird to the ground, increasing predation risk. To explain such seemingly paradoxical characteristics, as well as to explain the advantages that some individuals have in relation to reproduction, Charles Darwin proposed his theory of sexual selection. The theory can be helpfully applied in explaining a range of phenomena including pronounced showy plumage, sexual dimorphism, insect and bird song and so forth.
The mechanisms driving sexual selection include competition within sexes (intrasexual selection) and mate choice between the sexes (intersexual selection). Competition for access to mates is generally more prevalent in males, whereas choice of mates is more prevalent in females. This is because of the differential costs involved in reproductive success in the sexes. Sperm is cheap and copious; ova and investment in child-rearing are expensive. Therefore solving the evolutionary arithmetic of enhancing fitness produces strategies that are markedly different in males and females. As Darwin concluded: “the greater size, strength, courage, pugnacity, and even energy of man, in comparison with the same qualities in woman, were acquired during primeval times, and augmented, chiefly through the contests of rival males…” Complicating these quite simple distinctions between male and female reproductive strategies is the observation that males may increase their fitness by investing in childrearing, and women can increase their fitness by extra-pair matings with males of high quality. Mating strategy will vary with sex, with age, and even with stage in the menstrual cycle. The way in which individuals “play” the game of enhancing their fitness continues to surprise those who study mating behavior, by which I mean it should surprise all of us.
In this note I propose that environmental destructiveness, the patterns of which have evaded the attention of evolutionary thinkers, is largely, but not exclusively, driven by intrasexual male contest competition for access to mating opportunities. When beard volume, voice timbre, penis size, and physical blows have insufficiently cowed the competition, then throwing a maladaptive spanner in the works of nature serves as an evolutionary escalation in the struggle for mating opportunities. Male contest competition leading to environmental despoliation may be strengthened by intersexual selection whereby women read environmental power as a signal of genetic quality. Thus environmental destruction escalated under the combined influence of competition between men, and mate selection by women. When male destructive behavior is fostered by the former process I label these men scary bastards. When fostered by the latter I label them sexy wreckers.
This hypothesis builds upon the following observations:
1. Men are more inclined to aggression and violence than women. This inclination is evolutionarily derived from contest competition for access to and monopolization of potential mates.
2. The ability to impose environmental destruction is correlated with other dominant male attributes and can similarly be interpreted as depriving rivals of mating opportunities. Unlike other expressions of male dominance, the ability to inflict environmental violence may peak much later in life than other indicators. Environmental vandalism, like the accumulation of wealth, may be an old man’s game. As such it is as likely to be strongly influenced by female choice as by contest competition.
3. Women may not necessarily find environmental destructiveness attractive. As Darwin noted females may accept “not the male that is most attractive to her, but the one which is least distasteful.” Unlike some male attributes, like muscularity and “bad boy” indicators, which are valued in short-term partners, environmental destructiveness may however be valued in long-term relationships if it signifies power and status. Environmental despoilers tend to be married but are, presumably, frequently cuckolded.
4. Since destructiveness is more common than creativity in men, one can conjecture that in prehistoric times vandalism was a more successful strategy than artistic production.
5. Environmental destruction has increased in contemporary times when most other forms of male contest competition have been minimized. This suggests a remedy which I will discuss below.
The Sexual Selection of Greater Male Direct Aggression
Men are more aggressive than women in the categories of physical aggression, verbal aggression, and hostility. Women, apparently, are just as angry. When the assessment of aggression is extended to include so-called manipulative forms of aggression the differences between men and women become less apparent. That is, women are proficient at gossiping, spreading rumors and so forth, and this may be a more successful strategy for social exclusion when the cost of direct aggression is high. Since many of the more pronounced physical differences between men and women, including greater male mandibular strength, greater muscle mass, etc., relate to the ability to both inflict and absorb aggressive blows, it seems reasonable to conclude that for men the cost of escalating violence paid some evolutionary dividends, but for women it did not.
The differences between male and female levels of direct aggression, as well as the relatively greater female fear of aggression, are read as evidence for the sexual selection of male aggression. When this data on direct aggression is put alongside data on the greater male than female variance in reproductive success, the existence of several male display characteristics, both vocal and visual, and the relatively greater mass and strength of men over women, the case for sexual selection as the explanatory process appears convincing.
Summary: Environmental destructiveness is a special category of aggression directed extra-somatically and depends not upon the ability to trade physical blows but rather upon the ability of males to extend the contest to the broader environment. Men who can inflict the most reckless damage on their environment (a part of their inclusive phenotype) are scarier and thus intimidate less destructive men, who then concede mating opportunities to these dominant males (= scary bastards).
The Problem of Older Men
Males of polygynous species (where males have multiple mates simultaneously) will typically avoid encounters with older males till they are sufficiently mature to physically compete. If humans can be regarded as having polygynous tendencies then young adulthood is a risky time for males – enough testosterone to dull the fear of violence, but insufficient physical strength to compete reliably with mature males. Older males are also at risk in physical encounters as they enter a physical decline, when they are in danger of being deposed by younger competitors. Since peak physical condition is predictive of success in contest competition, one might expect pair bonding between men and women in the full bloom of young adulthood to be the norm. A discrepancy in age of mates in monogamous pair bonds is typical though. The reason for the discrepancy is that wealth and status in males denote a capacity to provision mates and offspring with resources and should be a selection criterion applied by females to potential mates. A fifteen-year age difference is optimal.
Summary: Environmentally destructive tendencies provide a conspicuous metric of male wealth and power and therefore destructive men (sexy wreckers) have clearly been driven by the sexual appetites of women.
Creation and Destruction
Suggestions that male creativity, quick-wittedness, brain size, and intelligence result from female mating selectivity have been challenged on a number of grounds. Evidence for the mild heritability of intelligence and a correlation between intelligence and sperm quality are presented as evidence for this. The hypothesis that male braininess is sexually selected by female choice seems to be contradicted by the number of feeble-minded men that appear to be successfully mated, and perhaps more glaringly by a lack of pronounced difference in male and female intelligence. From the perspective of defending the mating-mind hypothesis, women are frustratingly brainy.
Summary: In contrast to inconsistent evidence for the emergence of male creativity and humor as a result of female mate selection, the evidence, at first glance, is better that environmental destruction is sexually selected. Males are more directly environmentally destructive. Some of this may build upon traditional roles. For instance, hunting and managing lands to improve hunting opportunities imposed significant damage. I speculate that ethnographic evidence will support the view that men are more recreationally aggressive with the environment.
Conclusion and Remedy
Darwin noted that the difficulty in regard to sexual selection “lies in understanding how it is that the males which conquer other males, or those which prove the most attractive to the females, leave a greater number of offspring to inherit their superiority than the beaten and less attractive males.” Incontrovertibly, environmentally destructive tendencies, like other male displays (for example, outsized penises), seem largely unnecessary, objectively unlovely, undeniably destructive, but, for all of that, fearsome to other men and preferred by the ladies. That is, both are subject to intra- and intersexual selection.
In prehistoric times opportunities for environmental destructiveness beyond that necessary to meet basic needs were limited. In contemporary times environmental destruction can be conducted on planetary scales. This is clearly the result of runaway selection and is exacerbated by legal curbs on male-male aggressive competition, other than in the athletic arena. Since environmental destruction is both expensive and risky, it both increases the quality of the fitness signal and exacerbates the risk that none of us will be around to enjoy the other pleasures that being a sexually reproducing species brings. The remedy is simple: we need to invite men to resolve their contest competition in lower-risk situations (e.g., fight clubs) rather than at a global scale in war and destruction, and furthermore, request that women forgo the dubious pleasure of mating with men who are not committed to environmental sustainability.
1. Buss, A.H.; Perry, M., The aggression questionnaire. J. Pers. Soc. Psychol. 1992, 63, 452-459.
2. Archer, J.; Coyne, S.M., An integrated review of indirect, relational, and social aggression. Personality and Social Psychology Review 2005, 9, 212-230.
3. Archer, J., Does sexual selection explain human sex differences in aggression? Behav. Brain Sci. 2009, 32, 249-266.
4. Loeber, R.; Hay, D., Key issues in the development of aggression and violence from childhood to early adulthood. Annual Review of Psychology 1997, 48, 371-410.
5. Nettle, D.; Pollet, T.V., Natural selection on male wealth in humans. Am. Nat. 2008, 172, 658-666.
6. Helle, S.; Lummaa, V.; Jokela, J., Marrying women 15 years younger maximized men's evolutionary fitness in historical sami. Biol. Lett. 2008, 4, 75-77.
7. Miller, G.F., The mating mind: How sexual choice shaped the evolution of human nature. Anchor: 2001; p 528.
8. Arden, R.; Gottfredson, L.S.; Miller, G.; Pierce, A., Intelligence and semen quality are positively correlated. Intelligence 2009, 37, 277-282.
The reviewers’ comments on the paper, with the exception of one laudatory set of remarks, were negative. “This author knows next to nothing about the field of sexual selection or environmental psychology. In addition to displaying a poor command of the literature, the writing is second rate, the development of the argument third rate, and the conclusions trivial.” “This contribution is made moot by the widely acknowledged demolition of the field by Professor Joan Roughgarden.” “Good luck with the review board getting approval to test any of these trite conjectures.” The one positive reviewer wrote: “A breakthrough…testable hypothesis…real solutions….” and so on.
Presumably it was this reviewer’s comment that the editor relied upon in making his final decision to publish the paper.
Within a week the journal received negative comments from a couple of dozen scientists who complained that the published note had no merit. The journal recorded that the editor had stepped down after an internal enquiry concluded that he had ignored the advice of most reviewers of the manuscript.
In addition to the material I have already reproduced, a report by the journal’s board on the dismissal case came to light in my investigations.
The inquiry revealed that Scary Bastards and Sexy Wreckers had, in fact, been written by the editor himself. The laudatory review may also have been penned by his hand. When asked for comment, the editor stated that though the “all-male board” may question the ethics of his conduct, nonetheless his wife had simply loved the article. And that, he concluded, “is the name of the game.” In turn the board chose not to reveal the identity of the writer.
The editor, the board and all their progeny lived happily ever after.
The following review was very useful in preparing this tale: “Beauty and the beast: mechanisms of sexual selection in human” by David A Puts from Evolution and Human Behavior (2010) Volume: 31, Issue: 3, Publisher: Elsevier Inc., Pages: 157-175
Photo of Kaveri River by Randall Honold.
Monday, January 09, 2012
A Tiny Dying Such as This – Is There an Ongoing Mini Mass Extinction of Soil Invertebrates in the Midwest?
A short note in which I conjecture on a potentially vast local extinction event of Midwestern soil organisms especially of those inhabiting the leaf litter of woodlands.
In our evolutionary progression humans scrambled from the leafy treetops about half way down the length of the trunk. We now live perched between treetop and root ball on that convenient platform we call the soil. If physicists can give themselves vertiginous shivers by imagining those empty atomic spaces that constitute the seeming sturdiness of ordinary things then it is surprising that soil ecologists ever leave their homes knowing as they do how vastly crenulated, fissured, fractured and porous is the soil.
Ours is the exceptional ecological enterprise since more organisms live in the soil in those porous and interstitial lodgings than on the soil. We are not directly equipped for flight, we rarely burrow, we are condemned to walk upon the dirt until at last we may complete our descent into the ground, toppling into that large furrow excavated for our remains. A soil pore will have us after all.
If we had been just a little smaller and had migrated just a little further down the length of that primordial tree we’d be living in one of the most biologically diverse and ecologically active compartments of the biosphere. The upper ten centimeters or so of soil teems with living things. The organisms living in Earth’s thin and hyperactive rind are phylogenetically diverse, trophically heterogeneous, functionally assorted, highly variable in size, dissimilar in longevity, variegated in morphology, behaviorally divergent, adapted to different soil horizons, disparately pigmented, but are united in their reliance on death. Specifically, soil organisms are all similar in that they feed on detritus (i.e., dead organic matter). As I discussed in a recent column, collectively the action of these organisms within detrital-based food webs results in the breakdown of dead organic matter and the mineralization of organic compounds that makes key nutrients available to the living.
Examine your foot a moment. If it is like mine when shod it measures roughly 30 cm in length (yes, a foot) by about 9 cm wide (your foot, of course, may not be quite so rectangular!). A pair of feet such as these out for a stroll treads minimally upon the bodies of 270,000 protozoa, 135 mites, 3 springtails, and one or more large earthworms with each footfall. In places of high animal density the injury toll would be higher by several orders of magnitude. If you were sallying along a woodland path in the temperate zone these crushed critters would be representative of about 30 distinct species, of which up to half may be previously undescribed by taxonomists. Scaled up, there can be as many as 200 species of soil insects and 1000 species of soil animals in total in every 1 m2 of soil.
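The arithmetic above is a simple scaling exercise, which can be sketched as follows. The per-square-meter densities used here are illustrative values reverse-engineered from the per-footfall counts in the text (a 30 × 9 cm footprint is 0.027 m²); they are not field measurements:

```python
# Back-of-envelope estimate of soil organisms crushed per footfall.
# Densities are illustrative, inferred from the counts quoted in the text.

FOOT_LENGTH_CM = 30
FOOT_WIDTH_CM = 9

# assumed organisms per square meter of soil surface
DENSITY_PER_M2 = {
    "protozoa": 10_000_000,
    "mites": 5_000,
    "springtails": 111,  # roughly 3 per 270 cm^2 footprint
}

def organisms_per_step(length_cm: float, width_cm: float) -> dict:
    """Scale per-square-meter densities down to a single footprint."""
    area_m2 = (length_cm / 100) * (width_cm / 100)  # 0.027 m^2 here
    return {name: round(d * area_m2) for name, d in DENSITY_PER_M2.items()}

counts = organisms_per_step(FOOT_LENGTH_CM, FOOT_WIDTH_CM)
print(counts)  # {'protozoa': 270000, 'mites': 135, 'springtails': 3}
```

The same function makes plain why places of high animal density raise the toll by orders of magnitude: the footprint area is fixed, so the count scales linearly with density.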
These soil animals are drawn from many taxonomic groups: protozoa, nematodes, rotifers, tardigrades, springtails, mites, the preposterously adorable pseudoscorpions, insects from many orders, centipedes, millipedes, and on and on.
Conservationists need to pay more attention to soil organisms because they are a very large component of the biological diversity at many sites set aside for the conservation of species. They also play a role in the regulation of nutrient availability and this in turn exerts a large influence on a site’s biological diversity. So even if one were not as charmed by a soil mite as by, let’s say, a Northern Hairy-nosed Wombat (one of the rarest of our larger mammals), nevertheless, the functional significance of the soil mite should persuade you that it deserves a little of your attention. Soil critters are examples of what biodiversity guru E. O. Wilson once described as the “little things that run the world”.
In the last couple of years my lab has initiated investigations on the diversity of soil organisms and their significance in regional conservation efforts. We are addressing these questions in ongoing restoration projects designed to conserve biodiversity in and around Chicago (see the map below of the 100 one-hectare sites we are examining in collaboration with managers in 4 counties surrounding Chicago). These sites, in woodland, savanna and prairie habitats, are heir to the typical problems associated with open space in a major metropolitan setting – they are highly disturbed, heavily invaded, eutrophied, and fragmented. We are especially interested in learning how our current best conservation practices influence the composition of these below-ground communities and, assuming such practices are altering these biotic communities, we want to know the influence these soil critters have on ecosystem processes.
Our studies are still in their early stages. One thing is clear to us though: there is a high probability that soil organisms are going locally extinct in woodlands around Chicago at rates faster than we can study them comprehensively. This may be true especially of those living in the litter layer of partially decomposed plant material.
If soil animals hummed as they ambled, like Winnie the Pooh, the sound of their productive murmurs would have noticeably dimmed in recent years. More silent than a birdless spring is the silence of a habitat from which inconspicuous creatures have imperceptibly slipped away.
This vast dying of tiny things in Midwestern woodlands is a conjecture at this point. We simply do not have enough information on the issue to state it definitively. But the conjecture is nonetheless backed up with some evidence. I review a few relevant points to clarify what is at stake and what the major threats to Midwestern soil biota might be.
Temperate zone soil biodiversity is “The Poor Man’s Tropical Rainforest”
We know very little about soil organismal diversity in the Midwestern United States. Taxonomic experts admit, for instance, that only a fraction of soil arthropods have been described. For mites it may be as few as 5% of all species globally, and less than 50% in the temperate zone. Most groups of organisms increase in diversity from poles to tropics where life flourishes best. To put it as did Jim White from National University of Ireland to us biology students in the 1980s: life is a tropical affair. We do not know much about these so-called latitudinal gradients of soil animals, though the evidence is coming in that for many groups species diversity peaks in the temperate zone (for example, the diversity of soil mites and free-living nematodes appears to peak at mid-latitudes). The density of many soil critters also peaks in temperate regions. For this reason the community of soil organisms in the temperate zone has been referred to by Michael Usher as the “poor man’s tropical rainforest”. The significance of this is that conservationists working in the mid-latitudes have a special responsibility for the conservation of these species. In Chicago, where extensive tracts of open space are set aside for conservation and restoration purposes, we need to be confident that our conservation management is protecting cryptic biota below ground.
Dominant invasive plant species and the creation of Interspersed Denuded Zones (IDZs) in the Forest Preserves
There are many stressors in Midwestern environments that may have a negative impact on the diversity of soil biota. These include fragmentation of habitat, anthropogenic nitrogen deposition from the atmosphere, elevated heavy metal concentration in soils, and altered soil hydrology to name a few. In particular I have been interested in one aspect of change in the woodlands of the Chicago region: many of the dominant invasive species in lands of conservation concern close to the city can have very high decomposition rates and this, for readily understandable reasons, can have a disproportionate influence on species loss. For example European buckthorn (Rhamnus cathartica), a rarity in its native range, has become the dominant woody plant in Chicago’s Forest Preserves. The leaf of this handsome shrub is easily decomposed and, unlike the litter of many of the native species that it replaces, this litter is fully decomposed before it is replenished in autumn. As a consequence a series of interspersed denuded zones (IDZs) open up intermittently in woodlands. From the perspective of litter-dwelling arthropods this is like the mass clearing of a housing project. Leaf litter provides habitat for a vast diversity of species. In addition, the litter modulates the physical conditions of the upper layers of the soil, which also harbor a large diversity of organisms. Several years ago undergraduate researcher Brad Bernau examined the abundance and diversity of soil microarthropods (mites and springtails) in standardized samples of litter (255 cm2 grabs) in several woodlands and found that diversity and abundance were lower in IDZs; moreover, diversity stayed low even after litter was replenished. Bernau’s study needs to be conducted on a much grander scale to assess this phenomenon.
In recent years PhD candidate Basil Iannone (University of Illinois, Chicago) has been developing the most comprehensive observational database yet on buckthorn and although he is not looking at soil arthropods, his work will give us unprecedented insight into the impact of this species on the environment of woodlands in our region.
Invasive earthworms accelerate breakdown of woodland floor
In addition to changes in the dynamics of the woodland floor as a consequence of shrubby invasion, these woodlands are also invaded by non-native earthworms. Worms are titans in the kingdom of decay and they contribute to the breakdown of the woodland floor and to the creation of denuded zones. The significance of worm-work is accented when one recalls that the ecological systems of the midwest developed in the absence of these animals.
Loss of litter dwelling species
Putting this together we can say that conservationists in the US Midwest have a global responsibility for protecting the diversity of soil animals whose numbers peak in the temperate zone. Areas set aside for protecting nature need to be designed and managed in ways that achieve this aim alongside other priority species and processes. Although the evidence that the vast diversity of Midwestern soil critters is undergoing a mini local extinction event is indirect, it is enough to warrant serious investigation.
A thought that haunts me: In the 1990s I worked on the diversity of soil arthropods in Costa Rica, Puerto Rico, Hawaii and in the Southern Appalachians. The leaf litter at Coweeta Hydrologic Laboratory in North Carolina was thick and was home to an almost unimaginably vast diversity – larger than at the tropical sites. Small samples of the litter in a single 100 m2 patch of forest floor at Coweeta yielded well over a hundred species of soil mites alone. In contrast, graduate student Claire Gilmore from DePaul, who recently surveyed mites at 11 sites throughout the Chicago region as part of our 100 Sites project, found about half that number. Though the studies are not directly comparable they should give us pause.
Humans migrated from ancient canopies, a habitat of unparalleled species diversity, to the soil surface. Now below our feet is what Belgian taxonomist Henri André called the “other last biotic frontier”. Assemblages of soil arthropods are exceptionally diverse, functionally significant and vastly understudied. For those of us who see the challenge of biodiversity conservation as saving all the pieces, our challenge has become a little muddier than before. Our new motto: Ad terram – to the soil!
Thanks to Vassia Pavlogianis who collaborated in coining the term Interspersed Denuded Zones. Funding for some of our work on soil biodiversity comes from The Gaylord and Dorothy Donnelley Foundation, Chicago.
Photo Credit: Soil microarthropods (from http://www.fao.org/ag/agl/agll/soilbiod/soilbtxt.stm), Interspersed Denuded Zone under buckthorn, Locations from our 100 Sites for 100 years project (manager Lauren Umek, Alex Ulp GIS assistant).
Monday, December 12, 2011
In The Kingdom of Decay: How a Motley Team of Subterranean Dwellers Ransacks the Dead and Liberates Nutrients for the Living
The recently dead rot much like money accumulates in banks (until recently, at least), only, of course, in reverse. A sage great-great-ancestor who had, for instance, set aside a few shillings for a distant descendant would, through the plausible alchemy of compound interest, have made that great-great-offspring a wealthy person indeed. In contrast, after death a body-heft of matter accumulated over the course of a lifetime is hustled away, rapidly at first, but leaving increasingly minute scraps of the carcass to linger on nature’s banquet table. It is as if Zeno had not shot an arrow but instead had ghoulishly slobbered down upon the departed, progressively diminishing the cadavers but never quite finishing his noisome meal. The soils of the world contain in tiny form, scraps of formerly living things going back many thousands of years. Perhaps these are the ghosts we sense when we are alone in the woods.
Before you rake away the final leaves of the autumn season, hold one up to the early winter light. Those patches where you see sky rather than leaf are the parts that had been consumed live, nibbled away by insects or occasionally browsed by mammals. But you may have to pick up several leaves to see any consumption at all! The eating of live plant material is rarer than one might suspect. It is almost as if most creatures, unlike us of course, have the decency to wait for other beings to die before they consume them. Ecologists have wondered why this is the case, asking in one formulation of the problem “why is the world green?” At the peak of the summer season the world is mysteriously like a large bowl of uneaten salad. The world it turns out is green for many reasons but a compelling one is that plants generally defend themselves quite resourcefully. The thorn upon the rose provides more than a pretty metaphor – this shrub knows exactly what to do with its aggressive pricks. And if one can neither run nor hide nor protrude a thorn, you might manufacture chemical weapons. Crush a cherry laurel leaf in your hand, wait a moment or so, and then inhale that aroma like toasted almond. It’s hydrogen cyanide, of course. “Don’t fuck with me” is one of the shrubbery’s less lovely messages.
Gravity tugs upon the dead. Those things not already in the soil when death arrests them tend soilwards upon their demise. If this were a world where the dead remained unconsumed an unwholesome detrital pile would have accumulated upon the bottom of ancient seas until the world’s usable matter had been exhausted and life on earth would have faltered. The dead must be moved along for the living to keep moving at all. Why this must be so is pretty obvious but precisely how post-mortem remains get disarticulated and converted into forms usable for the living is still being investigated. Professionally, I am a student of death and decay, which is an accurate way of saying that I am a student of life. The world is as brown as it is green.
From this point on I will primarily consider the decay of plant material since this comprises the bulk of terrestrial biomass. Concentrating on the breakdown of leaves rather than bodies makes the story less gruesome but the processes are much the same. The consumption of the formerly living and the transmutation of organic into inorganic constituents is the ecological business of a diverse community of saprophytic organisms (etymologically derived from sapro = putrid, and phyte = plant) and of an accompanying host of small animals that feed directly upon the decay or that nibble on the saprophytic microbes involved in decomposition. The outcome of all this caliginous toil is the liberation of carbon, nitrogen, phosphorus, and other elements otherwise trapped in death’s charmless chambers. The carbon burbles through the soil and back into the atmosphere, the nutrients spill into the soil and are scrambled over by microbes and plants all obeying life’s blind will to amplify.
Earthworms, millipedes, woodlice and so forth fragment dead leaves, breaking them into smaller pieces and exposing fresh surfaces to colonization by microorganisms. Earthworms, like mobile and mucousy tubes of toothpaste open on both ends, squirt their way through the world’s putrefaction. What they squeeze out may not be minty fresh but it has its own charisma. An earthworm’s body surface, its internal workings, and its copious soil-full egesta glisten with a snotty discharge that microbes simply die for. Or rather live for since these easily degraded substances prime the decomposer microbes whose micro-feeding frenzy continues the assault on dead organic matter. Earthworms inside and out are maestros of putrescence. In their poetic moments earthwormologists (a freshly coined term) have referred to their beast of interest as “Prince Charming”, its mucus as a “Kiss”, and those microbes that get whipped up into a digestive frenzy as “sleeping beauties”.
Fungi and bacteria are royalty in the kingdom of decay. They satisfy their nutritional needs by regally exuding extracellular enzymes upon their putrescent foodstuff and absorbing the rot. The soil is a trickle-down economy of the most literal form. The bulk of global decomposition is performed in this macerating way. A bacterium, from the perspective of putrescence, is a single-celled sack of carnage constrained within a robust peptidoglycan wall. Not only can bacteria break down some extraordinarily tough materials (including cement); some also produce powerful fungicides and thus dispatch and then consume the competition. If it were not for one small design limitation, this world of ours would host little other than bacteria consuming bacteria. A scientific madman indeed would be he who genetically engineered tiny legs for bacteria. For this is their structural drawback: bacteria are relatively immobile, and like sea anemones or corals they wait for their food to come to them or for some biddable creature to transport them to their comestibles. For this reason a majority of bacterial cells in the soil are physiologically inactive, waiting, waiting, waiting for some moist dead thing to enliven them and unleash a digestive maelstrom.
One should not be deceived by the daintiness of an intermittently protruding mushroom or toadstool. These are merely wardrobe malfunctions in the great show of mouldering – unseemly exposed tips of a grand underground organism whose digestively capable filaments (called hyphae) can extend as a network over many miles… yes, miles. Fungi, in fact, are celebrated as being among the world’s largest organisms. The strategy is that the organism can glean a portion of its nutritional requirement in one place and other portions elsewhere, and in theory can distribute the ambrosial broth across the entire cytoplasmic web. Their sheer size has led to debate about what precisely constitutes an individual organism (genetic identity is clearly not enough), but for our purposes the significant point is that more or less everywhere below us a fungus toils, relieving the dead of the elements they have little use for anymore.
The community of soil animals supported by decay is profligately diverse – enigmatically diverse, in fact, since many occupy themselves with the consumption of similar morsels. The application of one of ecology’s few implacable laws, competitive exclusion, should dictate that this richness be diminished. There are predators down there of course – monstrous feeders, some of which are sheathed in chitin and furnished with pincers beyond the extravagance of ordinary phantasms. On predators’ menus: nematodes, protozoa, rotifers, mites, springtails, diplurans, termites, woodlice, and amphipods. All with their distinct gustatory charms, one supposes; no-one is sharing recipes. The cupboards of non-predatory soil animals are rarely bare, and you’d not go hungry down there as long as your appetite runs to fungus or bacteria all your days. And this, therefore, is the enigma of soil diversity: so many animals living on the same diet with little specialization of feeding habits. How can this be so?
Energetically, soil animals, other than worms, directly contribute little to the decay of the dead. Functionally, however, they are tremendously important. The problem with the unrefracted dead, as you will recall, is that they harbor essential matter required by the living; the problem with microbes is that as quickly as they liberate these essential ingredients they immobilize them again in their own burgeoning biomass. Soil animals disrupt and facilitate in equal measure. They help things along by champing down upon microbes, liberating their nourishing juices in a form available to plants. Now, one may wonder why consumption by the animals doesn’t simply lead to accumulation in the biomass of those microbivores. If this were the case it might make it difficult for plants to get the elements necessary for their growth – all in all an unfortunate thing, since it is primarily dead plant material keeping the whole thing going. Here’s what happens, then. The composition of microbial cytoplasm differs from that of soil animals in one important respect: there is more nitrogen relative to carbon in microorganisms. Animals feed upon microbes to get at their carbon fix and in doing so take in more nitrogen than they can process. To deal with this, animals excrete the excess. The bottom line: the piss of armies of small animals sustains this green earth. Nitrogen gets into soils in other ways, of course, and soil critters perform other functions, but it is hard to overestimate the influence of tiny soil animals – mites and springtails (primitive wingless insect-like critters) – in orchestrating rot.
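The carbon-for-nitrogen bookkeeping above can be sketched in a few lines of arithmetic. The C:N ratios below are illustrative assumptions chosen for the sake of the example, not measured values.

```python
# A back-of-the-envelope sketch of the C:N argument above.
# The ratios are illustrative assumptions, not measured values.

def excess_nitrogen(carbon_assimilated, prey_cn, consumer_cn):
    """Nitrogen a microbivore must excrete, per meal.

    prey_cn:     C:N ratio of the microbial food (microbes are N-rich, so low)
    consumer_cn: C:N ratio of the animal's own tissue (higher)
    """
    n_ingested = carbon_assimilated / prey_cn      # nitrogen taken in with the meal
    n_retained = carbon_assimilated / consumer_cn  # nitrogen needed to build tissue
    return n_ingested - n_retained                 # the surplus, excreted into the soil

# A mite assimilating 100 units of carbon from bacteria (C:N of 5)
# while building its own tissue at a C:N of 8 must excrete the difference:
print(excess_nitrogen(100, 5, 8))  # 7.5 units of nitrogen returned to the soil
```

Whatever the real numbers, the sign of the result is the point: feed on nitrogen-rich food while building less nitrogen-rich tissue, and a surplus must be excreted — which is exactly the sustaining flow of animal waste the paragraph describes.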
The nitrogen and all the other essential soil nutrients liberated during the decomposition of the dead ensures that plants can respond to the sun’s energy and live for a while, to sustain the living of others, such as us, for a while, to animate matter for a while, and all that while preparing matter for its lengthy sojourn in the kingdom of decay.
In its broad strokes the story of decay has been known for some time. Darwin famously contributed to that understanding. His book The Formation of Vegetable Mould through the Action of Worms, with Observations on their Habits (1881) culminated a lifelong interest in worms. Nothing escaped his attention: the density of worms in soil, their taste preferences, and even their unusual sexual habits (their “passion,” he said, “is strong enough to overcome for a time their dread of light”). In particular, though, he meticulously quantified the rate at which worms convert leaves into soil, thereby increasing its fertility. In the intervening century and a third the details have been worked out. The critical role of tiny soil animals in determining the rates of decay and in liberating soil nutrients emerged from the work of the last generation of researchers. I have contributed in a very modest way to this research literature over the last couple of decades.
Big questions remain unanswered. What might the significance be of the loss of below-ground diversity for the functioning of ecosystems? Can soil communities be restored if they are damaged? Can individual plant species manipulate soil decomposers to ensure a rate of decay that favors their own growth? What are the implications of global change for decomposition? If decomposition rates increase in bogs or in the tundra, as they are expected to in most models of climate change, will the additional carbon released into the atmosphere in turn exacerbate global temperature increases? (Some speculate that soil carbon release will contribute to the breaching of a critical transition.)
Perhaps it is just “cowards who die many times before their deaths”, but the matter that constitutes each and every one of us has experienced death so often that we should all be able to face our end languidly. We are all shuffling along the waiting line into the Kingdom of Decay. The workings of the upper five centimeters of the Earth’s surface may repay the considerable effort it takes to learn about them. The payoff may be felt not only in contemplating our collective environmental future but in contemplating our personal demise.
All photos by Liam Heneghan except photo of soil mite (Oppiella nova) by Claire Gilmore and Liam Heneghan.
Monday, October 31, 2011
Airplanes, Asparagus, and Mirrors, Oh My!
by Meghan D. Rosen
Last month, I asked you to submit a science-y question that you'd like to have answered in simple terms. You asked about light, and mirrors, and spices and space— I was delighted by the scope of the questions posed.
This month my fellow SciCom classmates tackled three. Steve Tung glides through the mechanics of flight; Beth Mole spouts off about asparagus pee; and Tanya Lewis reflects on mirrors.
If you have more burning science questions, just post them in the comments. We'll be back next month with more answers.
And if you don't have a science question, but do have a thought or a picture to share, check out www.sharingamomentofscience.tumblr.com
How can an airplane fly upside down?
Daredevil pilots execute stunning aerobatic maneuvers― loops, rolls, spins, and more― sometimes flying upside down for long stretches. How do they do it? It might seem that the force keeping a right-side-up plane aloft would push a flipped plane down.
The trick is how the plane is angled in the air. Pilots can adjust the tilt to lift the plane, even when it is upside down.
You may have stuck your hand outside of a moving car and felt the rushing air push it up or down. Tilt your hand more, and that force is stronger. Turn your hand upside down and it still happens, though it might not be as powerful.
Plane wings, flipped or not, work the same way― tilt them up more, and air lifts the plane more. There are drawbacks and limitations, however. Higher angles cause more drag, slowing the plane. Tilt too far and the airflow separates from the wing (a stall), and the plane falls like a rock.
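The tilt-and-lift relationship described above can be sketched with the standard lift equation, using the thin-airfoil rule of thumb that the lift coefficient grows in proportion to the angle of attack. All the numbers here (speed, wing area, stall angle) are made-up illustrative values, not figures from the article.

```python
import math

def lift_newtons(tilt_deg, speed_ms=60.0, wing_area_m2=15.0, air_density=1.225):
    """Approximate lift for a small plane, via L = 0.5 * rho * v^2 * S * CL."""
    if abs(tilt_deg) > 15:
        return 0.0                             # past the stall angle the flow separates
    cl = 2 * math.pi * math.radians(tilt_deg)  # thin-airfoil rule: CL grows with tilt
    return 0.5 * air_density * speed_ms**2 * wing_area_m2 * cl

print(lift_newtons(2.0) < lift_newtons(8.0))  # True: more tilt, more lift
print(lift_newtons(-8.0) < 0)                 # True: tilt the other way, lift reverses
print(lift_newtons(20.0))                     # 0.0: tilted too far, the wing stalls
```

This is why an inverted pilot pulls the nose toward the sky: the geometry is flipped, but the right tilt relative to the oncoming air still produces upward lift.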
But not all airplanes can fly upside down. Some depend on gravity to fuel the engines; some would break under the different stresses of flying inverted. Stunt airplanes use specially designed wings, bodies, and engines to be more agile, more durable, and more versatile.
Steve Tung once dreamed of designing airplanes and rockets. He now dreams of pithy, memorable prose. (He received a bachelor's degree in mechanical engineering with a concentration in fluid mechanics from Cornell University.) Twitter: @SteveTungWrites
Many years ago Mel Brooks asked the one question which had haunted him all these years: "Why, after I eat a few stalks of asparagus, does my pee pee smell so funny?"
It wasn’t until recently that scientists started to unravel this odorous riddle. The answer lies with both the whizzer and the whiffer.
When we digest asparagus, its sulfur-containing compounds can break down into stinky subunits that strike as early as 15 minutes after eating. Although the culprit behind the smelly bathroom visits hasn’t been caught, the most likely suspect is methanethiol.
But in bathroom exit surveys, only some asparagus eaters say they can smell the excreted evidence.
In 2010, scientists went digging through a database that linked genetic data with survey data including answers to questions like ‘Have you ever noticed that your pee smells funny after you eat asparagus?’
They found that people who have particular DNA changes around a set of genes responsible for olfactory receptors—molecular smell detectors in your nose—are more likely to be able to smell asparagus pee.
So if you can’t smell asparagus pee, that doesn’t necessarily mean you can’t make it.
Last year a different set of scientists waved pee vials under people’s snouts to sniff out who could make asparagus pee and who could smell it.
They confirmed that some schnozzles can’t smell asparagus evidence. But they also found that some people don’t seem to make it either, at least not in detectable amounts.
Since scientists haven’t pinned down the stinky subunit responsible, they can’t say for certain if it’s not there at all or just at really low levels that we can’t smell.
For now, it seems likely that our abilities to make and smell asparagus pee probably exist on sliding scales, and whether or not you can smell it seems unrelated to whether or not you can make it—so, continue to ponder in the potty.
Beth Mole earned her PhD in microbiology at UNC Chapel Hill studying a potato pathogen and did postdoctoral research on antibiotic resistant bugs at UNC's Eshelman School of Pharmacy. She started writing about science in 2008 for Endeavors magazine and is currently enrolled in the science communication program at UC Santa Cruz.
When you look in the mirror and point your right arm out to the side, your reflection in the mirror points its left arm. But when you point up above your head, your reflection doesn’t point to its feet. Even if you lie on your side and point your arm out, the mirror seems to “know” to switch which arm your reflection points, even though that’s now up or down relative to the ground.
What’s going on? Actually, mirrors don’t reverse things left-and-right; they reverse them in-and-out. Imagine casting a rubber mold of yourself, then turning the mold inside-out. Your reflection would face you, but your arms would appear to switch sides.
Another way to think about it is this: write something on a piece of semi-transparent paper and hold it up to the mirror. The reflected writing is, of course, a mirror image. But now turn the paper around so the writing faces you, and look at the reflection in the mirror. The writing is the right way round again. The reflection is like a stamp, making a “light print” of the writing on the page.
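The in-and-out claim can be checked with a one-line piece of coordinate geometry. Place the mirror in the x–y plane, so that x runs sideways, y runs up, and z runs toward the glass; the coordinates below are invented purely for illustration.

```python
# A mirror in the x-y plane reflects only the z (in-out) coordinate.
def reflect(point):
    x, y, z = point
    return (x, y, -z)    # sideways (x) and up-down (y) are untouched

# Your right hand, half a metre to your right, one metre from the glass:
right_hand = (0.5, 0.0, 1.0)
print(reflect(right_hand))  # (0.5, 0.0, -1.0)
```

The reflected hand keeps the same sideways coordinate; left and right only appear to swap because the reflection faces you. The mirror itself never exchanged anything but in and out.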
Tanya is a graduate student in the science communication program at UC Santa Cruz. She is an incurable science geek with a penchant for storytelling. She can be reached at tanlewis (at) gmail (dot) com or on twitter @tanyalewis314
Monday, October 10, 2011
The Quintessential North American Reptile
Article and photos by Wayne Ferrier
I had that unmistakable feeling of being watched. It was a sunny autumn afternoon, and I was helping my father dig up an old drainage ditch at their Central Pennsylvania home. I was pretty far down in the ditch, pitching gravel over my shoulder onto the bank above me. I paused and looked around.
It didn’t take long to find out who was spying on me. A common garter snake, Thamnophis sirtalis, lay curled up on the bank, watching me with an intensity that I would have to say bordered on fascination.
A curious thing about the encounter was that the snake was half buried in gravel. She was too enchanted watching me work to worry much about being buried in stones.
No doubt I was excavating a favorite hunting ground. Digging up and replacing the old drainage system, I was uncovering a lot of salamanders (Eurycea bislineata), most certainly a staple in this particular garter snake’s diet.
I do not know how long she had been there, inches from my head. For a moment we remained motionless, eyeing one another, but eventually she lost her nerve and darted off towards the stone wall. Slick yellow and brown lateral stripes proved to be excellent camouflage gliding through a background of burnt grass and autumn leaves, and she quickly disappeared from view.
I put down my shovel and took a coffee break. This encounter sure brought back memories. There have always been garter snakes in my parents’ yard, and as preschoolers we used to play here at this very same wall. My friends and I would often get that eerie feeling that someone was watching us. Often it turned out to be one particularly curious garter, and in our ignorance we would chase him back to the stone wall screaming, wailing, and hurling rocks. But within an hour he would be back to continue his espionage; we’d get that feeling and in panic try to unload more punishment on the poor creature.
Actually he was too fast for us and we never did catch him. What makes garters act like that? Most snakes go out of their way to avoid people, so one that actually chooses to live amongst us, hang out during daylight hours, and even engage in people-watching exhibits rather oddball behavior for a snake.
A partial explanation is that garter snakes rely heavily on their vision when they hunt. This does not mean, however, that their vision is that great—a garter snake cannot see very well unless what it is looking at is moving. If the prey stays perfectly still, the snake may not detect it. The slightest movement, however, can give the prey away. Garter snakes are hypersensitive to the very slightest twitch. So things that move fascinate them.
In addition, it seems that garter snakes like to eat all the time, at least compared to many snakes that spend long periods between meals, and the thought of food sometimes overpowers their flight mechanism. I once caught and temporarily kept a fully mature garter at my country home in upstate New York and had her feeding out of my hand within twenty minutes. I used to play a game with her where I’d wiggle my finger as if it were a worm and she’d get all excited, flick her tongue, and start pursuit. That is, until she figured out that my finger was attached to me; that ended the game, and she wouldn’t fall for the trick anymore.
She had no interest in leaving captivity until the autumn leaves started falling and she knew she had to go hibernate. That’s when it was her time to put one over on me. In order to feed her I had to open the lid at the top of her terrarium. If she wanted to be fed, which was several times a day, she’d rise, balancing her belly on the glass and propping up on the tip of her tail. I’d hover an earthworm just above her head. She’d check it out for a few minutes, and then quickly snatch it from my hand. One October day she took a particularly long time making the strike and I was getting bored; eventually my attention started to drift. She took the opportunity and darted past me and out of the terrarium! Was that planned? Sure seemed like it, but I’ll leave that to evolutionary psychology to debate. I will say this, though: she tried the trick several more times, but when I wouldn’t fall for it again, the ruse ended.
Garter snakes seem to be smart. Most snakes detect prey primarily by olfaction using their Jacobson’s organs. Pit vipers, such as rattlesnakes and copperheads, also have heat sensors. Compared to these snakes, garters are more visual. If movement is sensed overhead (e.g., a hawk), it is to be avoided; but if the movement is perceived at or below eye level (e.g., a frog), it may be pursued, analyzed, and perhaps eaten—unless it’s my finger. My pet snake once accidentally bit my finger during a sloppy strike aimed at a worm. She knew immediately she had missed and hit the wrong target. I may be over-anthropomorphizing a bit, but she actually appeared to be embarrassed, genuinely sorry, and she ran and hid. When she saw I wasn’t the least perturbed by the mishap, she came back and finished the worm. She never missed after that and I was never bitten again.
Back to the story of digging the ditch and the garter at the stone wall: I’m sure watching me work, that it didn’t take her long to figure out that I was way too big to eat. Down in the ditch below her, I wasn’t particularly threatening, but when I noticed her and stood straight up, it was a different matter, and that was when she decided that the show was over, and maybe the best thing to do was skedaddle.
Hands down the garter snake is the dominant reptile in North America. It has the widest range and is the most common reptile found on the continent. The genus Thamnophis (garter and ribbon snakes) can be found anywhere from southern Alaska to the Maritime Provinces. The common garter, Thamnophis sirtalis, has the most northerly range of all North American reptiles, going as far as the border between Alberta and the Northwest Territories. Every one of the lower 48 states has at least one species of Thamnophis, and a few species live as far south as Costa Rica. Mountains, plains, deserts, swamps, even cities—it doesn’t matter—as long as there is suitable food around, garter snakes can usually be found.
Ideal habitats can accommodate as many as 10 snakes per acre, and several species can coexist in the same area by hunting different prey and being active at different temperatures. Add to this their habits of frequent feeding and daytime activity, and it is not surprising that a garter snake is the first—and perhaps only—snake that many North Americans may ever encounter in the wild.
Most Thamnophis are opportunistic, varying prey according to what is available. Earthworms are their favorite food, amphibians are their second choice. Sometimes they eat smaller snakes, and may even resort to cannibalism. Insects are consumed when abundant in the fall. Occasionally Thamnophis kill and eat rodents (e.g., voles, mice, chipmunks) or nestling birds. Their versatile diet may also include fish and crustaceans, and even carrion.
Perhaps the most extreme example of the garter snake’s love for food is their taste for dangerous delicacies. In some high-end sushi restaurants you may order fugu, a dish prepared from the extremely poisonous pufferfish or blowfish. A skilled chef knows how to prepare the dish by removing the poison; and if you buy it you must have a lot of trust in the skill and integrity of the chef. If a garter snake were to be fed a bad batch of fugu, it might not notice. The snakes have evolved resistance to blowfish poison (tetrodotoxin), because they regularly eat rough-skinned newts (Taricha granulosa), which also secrete the toxin. The newts and snakes have been engaged in an evolutionary arms race, and the last time I checked, it seems that the newts are losing. They just can’t make themselves toxic enough to dissuade the garters from eating them.
Generally, Thamnophis capture prey with their mouths and swallow it alive, to slowly suffocate in the snake’s digestive tract. It is thought that the saliva of some species of Thamnophis may be mildly toxic. Some garters may resort to constriction when subduing rodents—western plains garters have been seen doing this.
When a garter snake first ventures out to hunt, like any other snake, it flicks its forked tongue trying to locate prey. A young snake analyzes the scent substances given off by potential prey before it will strike, but an experienced snake relies more on visual cues. This skill is especially useful when hunting frogs and toads. Normally diurnal, many species prowl at night during the anuran breeding season. At this time the smell of frogs may be ubiquitous, and relying on scent alone would not be productive for the snake. So it lies in wait, and when it sees a frog or toad move it strikes. A snake may be right on top of a frog, but as long as the frog remains motionless, the frog will go undetected. Now the frogs are well aware of this phenomenon—they themselves can’t see their own prey very well unless it is moving—so when the snake is around they keep still. Western ribbon snakes (Thamnophis proximus) have been seen solving this problem by systematically striking the vegetation, obviously smelling the frogs but unable to see them. Striking the grass disturbs the frogs to the point that they lose their nerve and make a break for it. The snakes were actually flushing the frogs out.
Many Thamnophis species are as comfortable in the water as they are on land. Sometimes they maneuver through the shoreline brush or climb trees and overlook the water from that vantage point. Ribbon snakes are frequently found among the reeds and cattails in shallow water—an ideal ecological niche, where land, water, and sky come together. The reeds offer good cover and usually abound with insects, fish, frogs, snails, and leeches. But the reeds also offer death. Those who like to eat aquatic serpents also hang out in the reeds; wading birds, predatory fish, and ophiophagous snakes (cottonmouths, for example) are among the ribbon snake’s worst enemies. When a ribbon snake comes across the scent of a predatory snake, it leaves the area immediately. If pursued, ribbons will sometimes dive into the water and submerge.
Many Thamnophis are generalists making use of a variety of habitats and prey.
Generalists are more versatile and less susceptible to starvation if one food source is scarce, and this is a primary reason for the success of this reptile. But they have numerous enemies—mostly other snakes, large birds, and mammals such as opossum, fox, mink, and skunk. Young snakes may also have to contend with large toads and frogs. When faced with danger, the snake either tries to flee or conceal itself. Which it chooses may depend partly on the weather. On chilly days a garter snake cannot move very fast and may not even attempt to flee, knowing full well that it would probably lose the chase. On warmer days a garter may even become aggressive, but may quickly switch to passive measures if touched. Thus aggressiveness may be only a bluff—but don’t take this for granted! Sometimes if handled they emit a strong musky odor, which makes you want to put them down.
In the autumn and early spring you often encounter a garter snake en route to or from its winter hibernaculum. In some areas this may be as far as 15 km from where you find them. They often find other snakes to hibernate with. I’ve learned a lot about snakes since my original experiences with garter snakes when I was a kid. My autumn encounter with the spy might have been a strange experience had it been any other kind of snake, but it was a Thamnophis—the quintessential North American reptile. It was not so strange; there are a lot of them here. Perhaps the old stone wall has been a Thamnophis hibernaculum since I was a kid. I finished my coffee and my reminiscing, went back to work in the ditch, and awaited her return. But not this time: the last I saw of her had been her stripes blending into the leaves and grass as she slipped into the recesses of the stone wall.
Monday, October 03, 2011
Ask a Scientist
by Meghan D. Rosen
Each year, the Science Communication program at the University of California, Santa Cruz accepts 10 students and, for nine writing-intensive months, teaches them how to become better science journalists. This year, I am happy to say that I am one of the 10. My nine fellow classmates come from a wide variety of scientific backgrounds (from marine biology to mechanical engineering to neuroscience). We have a self-proclaimed ‘fish guts scientist,’ a potato pathologist, a reality TV star with survival skills (from the Discovery Channel’s ‘The Colony’), a raptor surveyor (aka ‘hawk lady’), and an agricultural writer who grew up on a dairy farm.
It’s a diverse bunch of people, with a broad set of experiences, and the best part is: they all like to talk about science. I think I’m in heaven.
One of our recent assignments was to answer a classmate’s question that was about (or loosely connected to) our field of study. The constraints: we couldn’t use any jargon in the answer, it had to be clear to a non-scientist, and we had to do it in 200 words or less. Here are some of the question ideas we kicked around: Why does a golf ball have dimples? How does a submarine judge depth? Why do tarantulas migrate? How does the brain form memories?
I liked the challenge – answering a could-be-complicated question with clarity – and the idea of directly connecting scientists with people looking for answers to life’s curiosities.
So, this month, I’m trying an experiment for the readers of 3QD. Do you have any burning science-based questions that you’d like answered? Do you want to know how something works? Is there anything that you wish was just explained more clearly? If so, leave a question in the comments. I’ll solicit answers from my classmates and get back to you next month. To help get us started, I’ve included my own question and answer below (and yes, I stuck to the word limit – I even had two words to spare!).
Question: Why are doctors now recommending fewer screenings for breast cancer?
The idea behind breast cancer screening is simple: the sooner you find a lump, the sooner you can fight it. Until two years ago, the standard of care was frequent screening and aggressive treatment. We were constantly on guard (yearly mammograms) and ever ready to wage surgical war (lump or breast removal). Intuitively, it made sense – root out the cancerous seed before it sprouts. Early detection should save lives, right? Not necessarily.
In 2009, an independent panel of experts appointed by the U.S. Department of Health and Human Services found that mammograms didn’t actually cut the breast cancer death rate by much: only about 15 percent. But we were screening more women than ever. So why were so many people still dying?
The problem isn’t detection: mammograms are pretty good at pinpointing the location of an abnormal cell cluster in the breast. But not all abnormal cells are cancerous, and mammograms can’t tell the harmless ones from the dangerous ones. In other words, a lump is not a lump is not a lump.
Today, doctors are divided. Some think excessive screening forces thousands of women to undergo unnecessary surgeries. Others think one life saved is worth the cost.
Monday, September 05, 2011
A Gut Feeling
by Meghan Rosen
Are you in the market for a healthy, stable, long-term relationship? Turns out you may not have to look further than your gut. Or, more specifically, the trillions of microbes that inhabit your gut. Yes, you and a few trillion life-partners are currently involved in a devoted, mutually beneficial relationship that has endured the test of time. Don’t worry though, they’ve already met your mother.
We’re exposed first to our mother’s microbial flora during birth; these are the pioneering settlers of our gastro-intestinal (GI) tract. In the following weeks our gut becomes fully colonized with a diverse array of bacteria, viruses, and fungi. Although our gut microbes are generally about an order of magnitude smaller in size than human cells, when counted by the trillions, they add up.
In fact, these intestinal interlopers (along with their fellow skin, genital and glandular neighbors) can account for up to 2% of a person’s total body mass. That’s right, a 175lb man could be carrying more than 3 pounds of microbes in and on his body. Most of these microbial tenants, however, are crowded together in the lower part of his large intestine: the colon.
If we travel up the GI tract a bit and inspect the contents of the small intestine, the concentration of microbes drops nearly a billion-fold; compared to the colon, it’s practically germ free. (Although these germs are harmless when living in the gut, if the intestinal lining is breached, they won’t pass up an opportunity to spread to and wreak havoc in other areas of the body.)
While it’s easy to see the lifestyle advantages for a colon-dwelling bacterium (warm food, cozy housing, nearby relatives), the benefits and health implications for humans are not as well understood. Do we gain anything from toting around these vast microbial populations or are we merely a free meal ticket?
We know from studies in mice that gut microbes can influence health and metabolism. In fact, mice that have been delivered by cesarean section into sterile environments (and therefore lack the usual complement of intestinal microflora) are not as healthy as siblings that are birthed normally. These germ-free rodents have defective GI and immune systems compared to their microbe-ridden brothers and sisters.
While it’s clear that an animal’s gut microbes are a valuable part of a healthy intestine, their role in human metabolism and body weight remains ambiguous. We do know, however, that these microbes can enhance digestion. Normally, anything a mammal cannot digest passes through the GI tract unscathed; the energy present in this food is ‘locked up’, and therefore excreted. Obese mice, however, hold a few extra keys to calorie consumption.
The gut microbes of obese mice contain a vast array of genes that encode uncommon digestive enzymes. These enzymes help break down an expanded set of caloric compounds, and allow the mice to extract nutrients from otherwise indigestible food substances. Consequently, obese mice have fewer calories remaining in their feces than their slimmer relatives.
If obese mice have a different cohort of intestinal bacteria with super-digestive abilities, is the same true of obese humans? Is there a link between different body types and different gut microbial communities? Researchers at the Center for Genome Sciences at the Washington University School of Medicine in St. Louis, Missouri are attempting to answer these questions by comparing the identity of these gut community members, or the ‘gut microbiome’, in groups of differently sized people. Jeffrey Gordon’s lab examined fecal samples from 54 sets of adult female twins and sequenced the DNA of each and every microbe that passed through the volunteers’ intestines.
Although the majority of the twins selected for the study were identical, nearly every pair of sisters had one drastic physical difference: their body mass index. Gordon’s team of researchers specifically chose twin sets with one obese and one lean member to help understand the role of the gut microbiome in human obesity.
Although most gut microbial genes were shared between all volunteers, a significant portion of microbial genes varied from person-to-person, particularly among the obese and the lean. For instance, the obese member of a twin set generally had a gut microbiome loaded with extra genes involved in fat, carbohydrate, and protein metabolism. Are these mighty microbial metabolizers so efficient at squeezing calories from food that they actually contribute to their landlord’s obesity? Maybe, but we can’t say for sure just yet.
We do know that our gut is a kind of multi-species digestive super-organ, and that changes in the intestinal microbiome are associated with vastly different body types. In fact, Gordon’s lab has shown that you can actually fatten up a lean mouse by feeding it microbes from the guts of an obese peer. Although it’s still unclear exactly how the organisms in our intestines contribute to obesity, this research provides something for follow-up studies to chew on. Is it possible then to lose weight by dining on the gut bacteria of a skinny friend? Perhaps. Just don’t try it at home.
1. Bajzer, M., & Seeley, R. J. (2006). Obesity and gut flora. Nature, 444(7122), 1009-1010.
2. Hord, N. G. (2008). Eukaryotic-Microbiota crosstalk: Potential mechanisms for health benefits of prebiotics and probiotics. Annual Review of Nutrition, 28, 215-31.
3. Ley, R. E., Turnbaugh, P. J., Klein, S., & Gordon, J. I. (2006). Microbial ecology: Human gut microbes associated with obesity. Nature, 444(7122), 1022-3.
4. Othman, M., Agüero, R., & Lin, H. C. (2008). Alterations in intestinal microbial flora and human disease. Current Opinion in Gastroenterology, 24(1), 11-6.
5. Sekirov, I., & Finlay, B. B. (2006). Human and microbe: United we stand. Nature Medicine, 12(7), 736-737.
6. Turnbaugh, P. J., Hamady, M., Yatsunenko, T., Cantarel, B. L., Duncan, A., Ley, R. E., et al. (2009). A core gut microbiome in obese and lean twins. Nature, 457(7228), 480-4.
7. Turnbaugh, P. J., Ley, R. E., Mahowald, M. A., Magrini, V., Mardis, E. R., & Gordon, J. I. (2006). An obesity-associated gut microbiome with increased capacity for energy harvest. Nature, 444(7122), 1027-31.
Monday, August 22, 2011
The Existential Equation – The Irish Pre-famine Population and the Dilemmas of a 7 billion person world
The Irish Famine of 1846 killed more than 1,000,000 people, but it killed poor devils only. --Karl Marx, Capital Volume 1 (1867)
Behold the potato chip! It’s the perfect substrate for immersing in delicious oils, an adroit vehicle for conveying toothsome flavors to the mouth. If one eschews the oils and the suspicious flavorings, the potato is almost a complete meal in itself. Mashed along with a little buttermilk it fueled, as is claimed with some hyperbole of course, the construction of a British empire. Viewed with a squint, it is as if the Irishman, spade in hand, were the subterranean potato tuber’s extended phenotype – another starchy being anxiously grubbing back into the dirt. Hundreds of thousands of potato-fed and buttery Irishmen left for Britain during the 19th century to find employment as navvies, and there they dug ditches and canals and built a railroad system. And during and after the Great Potato Famine (1845-1849) millions more left for North America and elsewhere.
For me this is personal. Because of the enormous productivity of the potato – an acre of potatoes producing more calories than thrice that acreage of grain – I am now living in the US. I am, if my assessment is correct, the very last of the post-potato-famine migrants from Ireland. As soon as I left (in 1994), the exiles commenced their return, and though migration out of Ireland has begun again it is no longer, it seems to me, the same demographic pattern initiated by the failure of the potato crop.
My principal concern here is not the potato nor the Irishman nor the empire: I am interested in revisiting the demographic implications of events surrounding the Irish Potato Famine, examining the way in which economic and social historians have assessed the population growth running up to the famine, before the horrible consequences of the potato failure unfolded. Let me make my main point here: nothing could seem simpler to come to grips with than the pattern of population growth in the century leading to the Irish famine, the increasing reliance of the poor on a single crop, and the subsequent crash of the population after the failure of that crop. And yet despite the beguiling but horrifying simplicity of the pattern, almost no aspect of the story is as easy to explain as it may seem. To keep this post to a modest length I am discussing only the debates over the causes of population growth before the famine; I will post follow-up comments on my blog in the coming months about the population disaster that followed the potato failure – another complicated story.
Before assessing the pre-famine population patterns, a word or two on the potato itself. The potato (Solanum tuberosum) is an annual herbaceous dicotyledonous plant that produces a carbohydrate- and protein-rich edible tuber (an underground storage stem). As an annual herb, the potato has much in common with several weedy species. The plant is a member of the family Solanaceae and thus is related to several other cultivated plants: tomatoes and peppers, for instance. Indeed, an Irish person outside a pub with a potato chip (or “crisp” as it is called in Ireland) in one hand, and a cigarette in the other, is enjoying the dubious benefits of two members of the Solanaceae. Potatoes were first domesticated in the highlands of Bolivia and Peru and were introduced into Europe by Spanish explorers in the late sixteenth century. The potato made the return journey to the New World in 1791, being supposedly introduced to the US from Ireland.
The climatic conditions that make Ireland a slight misery to live in permit potatoes to thrive – cool temperatures, overcast skies and perpetually moist soils are ideal for the crop.
The potato follows rice, wheat, and corn in supplying calories to the human population. Besides being scrumptious, a potato supplies a good balance of the essential amino acids. Potatoes are also a source of B vitamins and vitamin C, and contain a host of micronutrients, most of which are found close below the skin – I encourage you all to eat your spuds with their jackets on. If you do peel them, the skins can be fed to the pig that you might be fattening up to sell for rent (or at least in pre-famine Ireland this would have been the recommendation, and was the standard practice). The high productivity of potatoes on tiny plots of land contributed to the crop’s rapid adoption into Irish agricultural practice and diet. At a time of rising population the potato was the perfect crop – the higher the population the greater the dependence on the potato, and the potato in turn facilitated a further rise in population. Each species contributed to the other’s success. And the collapse of one led to the collapse of the other.
There is little in dispute about the proximate cause of the Irish post-famine population decline – the almost exclusive dependence of a relatively vast Irish population on a single crop whose failure resulted in starvation, death, and emigration. Beyond these horrifying and indisputable generalizations there is little agreement on other issues associated with the Great Famine. The exact contribution of flawed land policy and landlordism in the run-up to the famine, the degree to which the political response exacerbated or relieved the famine, even, to some extent, the estimates of deaths (ranging from half a million to well over 1 million), are all still contentiously debated. The rise of the Irish population before the Great Famine, the main concern of this little piece, has also attracted scholarly attention, and though the pattern seems comparatively straightforward, the theories explaining the demographic situation are also contentious.
So, the population of Ireland in the year 1800 was 3.8 million. The data are not completely reliable, but the patterns are very clear. On the eve of the famine it had risen to an incredible 8.1 million! The accompanying graph, based upon the census returns of 1821, ’31 and ’41, illustrates just how rapid this rise was (I reconstructed these figures from the census returns for Ireland that can be found at www.histpop.org). Irish growth rates were in fact the highest in Europe at the time, though just before the famine the growth rate seems to have declined to 0.9% per annum. It was as if the population bow had been drawn to its limits and the arrow of disaster was poised for release.
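The arithmetic behind these figures is easy to check. A short Python sketch, using the 3.8 million and 8.1 million figures quoted above (the 45-year span between 1800 and the eve of the famine is my assumption about the interval), computes the implied compound annual growth rate:

```python
import math

# Population figures quoted above (millions)
pop_1800 = 3.8
pop_1845 = 8.1  # on the eve of the famine
years = 45

# Compound annual growth rate: P(t) = P(0) * (1 + r)^t
rate = (pop_1845 / pop_1800) ** (1 / years) - 1
print(f"Implied annual growth rate: {rate:.2%}")  # roughly 1.7% per annum

# Doubling time at that rate
doubling = math.log(2) / math.log(1 + rate)
print(f"Doubling time: {doubling:.0f} years")  # roughly 41 years
```

A sustained average near 1.7% per annum over the period also puts the 0.9% figure just before the famine in context: the growth rate had already slackened considerably from its earlier pace.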
This rapid period of population growth was not just an Irish phenomenon; it had occurred throughout Europe, though at comparatively slower rates. To represent such growth mathematically requires little in the way of computational finesse: populations grow when birth rates exceed death rates. Despite the delicious tractability of the basic population model – after all, it can be expressed as ∆P = B − D (change in population = births – deaths) – the genius of the human is to transform the simple factors B and D into everything that gives our lives meaning. All that’s beautiful and terrifying is embedded in this most existential of equations. A population grows when any combination of events results in birth being more prevalent than death; so even if mortality rates increase, as long as more kids are born into the misery, the population continues to grow.
So what was going on in Europe in the 18th and 19th centuries that resulted in rapid population increases? This period of rapid growth, though it was not the first in human population history, is significant in being the one that marked the beginning of the modern population spurt whose outcome is today’s global population. Between the end of the 18th century, when the global population was 1 billion, and today, the world’s population has ballooned to 7 billion. The most plausible hypothesis concerning the origins of this contemporary growth spasm is that during the period mortality rates declined, and though in some cases birth rates also declined, mortality rates, crucially, declined at a faster rate than birth rates. This difference opened a “gap” between births and deaths, and the population as a consequence increased. To be clear, postponing death, which is inarguably happy news, has consequences.
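The “gap” mechanism can be made concrete with a toy model (the rates below are illustrative numbers of my own invention, not historical estimates): hold the birth rate so that it declines only slowly while the death rate falls faster, and the population grows even though both rates are falling.

```python
# Toy model of the demographic "gap": each year, P grows by (B - D) * P.
# All rates are illustrative assumptions, not historical estimates.
pop = 1.0            # population in arbitrary units
birth_rate = 0.040   # 40 births per 1,000 per year, declining slowly
death_rate = 0.038   # 38 deaths per 1,000 per year, declining faster

for year in range(100):
    pop *= 1 + (birth_rate - death_rate)        # per-capita form of ∆P = B − D
    birth_rate = max(birth_rate - 0.0001, 0.0)  # fertility falls slowly
    death_rate = max(death_rate - 0.0003, 0.0)  # mortality falls faster

# Both rates fell over the century, yet the widening gap
# between them made the population more than triple.
print(f"Population after a century: {pop:.2f}x its starting size")
```

The point of the sketch is exactly the one made above: growth does not require rising fertility, only that death rates fall faster than birth rates.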
The reduced prevalence of infectious diseases was a main contributor to the decline in mortality. Contributing to the decline in infectious diseases were improved diets, sanitary reform, and an altered relation between infectious agents and the human host. Many demographers are adamant that the declining mortality during this period was not related to medical genius: medical knowledge at the time did not extend to a comprehensive understanding of the major infectious killers of the day. There is evidence for a spontaneous decline in some of the historical mass killers – scarlet fever, for instance – but this is not enough to explain the sharp decline in mortality.
The evidence for a role for improved nutrition in the mortality decline is solidly founded. The better fed and healthier Europeans of the 19th century derived their good fortune from newly emerged agricultural technologies, ones based upon better conservation of soil fertility and more sophisticated ecological knowledge of crop diversity. The diversification of crops was important – besides averting the sort of disaster awaiting Ireland in the 1840s, new crops in Europe ensured a reliable supply of food year round. The potato was the absolute king of the root crops, but turnips, beets, carrots and parsnips were also planted. These roots also provided feed for livestock in the winter, increasing the amount of meat available for consumption or sale. It seems a little obvious to underscore it, but improved food quality and greater availability of calories are crucial to sustaining a population – and even if these factors don’t inexorably lead to population growth, they are necessary for it. Put more precisely, the role of food quality and availability in reducing mortality rates contributes to population growth as long as birth rates are relatively unaffected.
Now, speculation about the factors contributing to the growth of populations during the 18th century was developed primarily through a detailed examination of the records of births and deaths in England and Wales, but do the patterns hold for the remainder of Europe? Peter Razzell, a noted population historian, remarked that the Irish population lagged for almost a century after the potato became a commonplace crop in that country, and thus cautions us not to expect the generalizations to hold true outside of Britain. The case that something quite different was going on in Ireland from a population perspective was systematically made by Professor Ken Connell, of Queen’s University Belfast, over 60 years ago. Life in Ireland was so different from Britain that surely it could not be generated by the same demographic mechanisms. Since several of the factors that reduced mortality in Britain may not apply in Ireland, Connell argued that Ireland’s population grew the only other way populations can – through increased fertility. Prior to the 19th century marriage was postponed until the death of the father, by which time “the son was no longer a stripling” – thus later marriages were the norm. As the population grew in Britain, the incentive for Irish farmers to provide food for the British market grew, and this, along with other more local Irish factors, provided an incentive for the further subdivision of land holdings, which did indeed become more prevalent. All of this was fueled by the productivity of the potato! A postage-stamp-sized farm worked by a manling, his child-bride, and their growing brood could be sustained by potatoes. Since they were married longer, Irish women were exposed for a longer period to childbirth – though the evidence is equivocal on whether this did in fact translate into higher fertility among Irish women.
From this perspective the potato’s main crop was that of healthy cheap labour, and this inexpensively produced Irish laborer allowed landlords to subdivide their properties and maximize their rents.
Professor Connell’s case for Irish exceptionalism seems less secure these days than it did back when he was writing. Connell had been a pioneer of Irish social and economic history and chaired his department at Queen's for a while. A querulous sort, he apparently did not get along well with his colleagues and was removed from his leadership role. He died on 26 September 1973, aged fifty-six, embroiled in a number of controversies and “exhausted and dispirited”. Michael Drake’s paper, Marriage and Population Growth in Ireland, 1750-1845, published in 1963, challenged the statistical basis of Connell’s account, and though Connell’s thesis remained frequently cited by other scholars, it was often to caution against, or at the very least complicate, his conclusions. In 1974 Drake wrote an obituary for Connell in which he praised him for writing “the first major study of the determinants of population growth in pre-industrial societies to emerge since the 1920s”, and credited him with initiating a much closer scrutiny of this phenomenon. The major criticism, he said, was that Connell “generalised too widely”. Drake concluded on this sad note: “Certainly in all the years I knew him he budged but little on any issue. Perhaps if he could have done so on those often seemingly trivial non-academic issues which troubled him so much, especially in recent years, he would be with us still.” On a cheerier note, Joel Mokyr of Northwestern University (whose office is a few blocks from where I write) and Cormac Ó Gráda of University College Dublin (whose office was a few buildings away from the lab where I worked in the late 1980s) concluded a more recent review of population studies with the comment: “Post-famine demographic patterns have fascinated and puzzled researchers too, but it must be said that as yet they have not produced a Connell.
As for the period surveyed here, three decades of debate have not exhausted the questions raised by Connell.”
In more recent analyses the point is conceded that, despite anecdotal evidence to the contrary, the age of marriage in Ireland was not impressively early and was closer to the norm for Europe. There is, however, some evidence that marital fertility was greater in Ireland than in Britain. Though there is little hard data to base it upon, the Irish seem not to have inclined towards the use of any contraceptive strategies even when they knew about them. Charmingly, Irish women of that time were complimented for their chastity and marital fidelity. To add to the growing thicket of factors contributing to the rapid growth of the Irish population before the famine, Jona Schellekens, of the Hebrew University of Jerusalem, suggested that higher marital fertility may have been caused by improved nutrition, and also that changes in “the pattern of breastfeeding linked with potato cultivation provide a plausible hypothesis.”
Can the Irish Great Famine be used as a microcosm for contemplating the potential fate of the world’s population as it surges past 7 billion in the months ahead? After all, as was true in Ireland before the famine, the world has run up its population impressively since the early 1800s and will, a mere couple of centuries later, reach 7 billion this autumn. Are we heading, as many environmental thinkers have implied, for a collapse? Was Ireland's famine a predictable Malthusian disaster, as some have claimed – a case of a population outstripping its resources? I leave these as open questions for now, as I suspect in the months ahead we will be encouraged to reflect upon them. There is a cottage industry of speculation about the degree to which the Irish situation was a Malthusian disaster (I’ll review some of this on my blog). For now, all I want to say is this: despite the seeming tractability of population issues (growth = births – deaths), when we dissect the particulars of any one story – in this instance, the simple pattern of population growth on a small damp island before a major famine – it is rarely possible to fully understand the mechanisms driving the pattern. This is precisely because growth models embed such existential matters; motivations lofty and iniquitous, deliberate and capricious, contribute to the births and deaths of humans. And we are a long way from understanding the human condition, or its reflection in the patterns of our births and deaths.
A final thought: Quite a few years ago I invited some close friends over to watch Jude, Michael Winterbottom’s version of Hardy’s novel Jude the Obscure. I had read the book with enormous relish as a teenager in Dublin and had remembered it for its compelling tale of Jude’s desire to be a classics scholar, thinking it in some ways to reflect my own situation. I urged this tale of scholarly ambition on some dear friends. In my callowness I had forgotten a central scene where Jude’s disturbed son murders the two children of Sue (Jude’s beloved) and then hangs himself. The note he leaves for Jude reads, “Done because we are too menny” [sic]. As this horrifying scene unfolded on the TV one of our guests started to quietly sob, and after a while her husband was obliged to carry his inconsolable wife off to their car. All I could say in pitiable defense was that I had forgotten.
Not to be too melodramatic, but in the months ahead, when the now staggering size of the global population is discussed and we are again invited to contemplate whether we are globally too “menny”, recall that though populations are stabilizing in some regions, they are not in other, generally poorer, countries, and that the patterns of population growth and decline are only approximately understood. We tend not to be very good at projecting the numbers far into the future. Those who fear that the population bow is being pulled globally tight and that disaster is being drawn from the quiver (and starvation is not the only arrow) should not be mollified by confident-sounding predictions that population stabilization is in our near future – perhaps it is, perhaps it is not; we simply cannot be sure. The only thing that seems sure is that if population stability is deemed desirable we must, to paraphrase population theorist Joel Cohen, be “ready, willing, and able” to determine our own fertility. An expectation that the existential equation ∆P=B-D will crank out uncomplicated results is historically poorly grounded.
 J. Creighton Miller, Jr., H. David Thurston, "Potato, Irish," in AccessScience, ©McGraw-Hill Companies, 2008,
 Joel Mokyr and Cormac Ó Gráda (1984) New Developments in Irish Population History, 1700-1850 The Economic History Review, New Series, Vol. 37, No. 4, pp. 473-488
 Thomas McKeown, R. G. Brown and R. G. Record (1972) An Interpretation of the Modern Rise of Population in Europe. Population Studies Vol. 26, No. 3, pp. 345-382
 K. H. Connell (1951) Some Unsettled Problems in English and Irish Population History, 1750-1845 Irish Historical Studies Vol. 7(28): 225-234
 C. J. Woods (2009) “Connell, Kenneth Hugh”. Dictionary of Irish Biography. (Eds.) James McGuire, James Quinn. Cambridge, United Kingdom: Cambridge University Press.
 Michael Drake (1963) Marriage and Population Growth in Ireland, 1750-1845 The Economic History Review Vol. 16, No. 2 (1963), pp. 301-313
 See Joel Mokyr and Cormac Ó Gráda for details.
 Jona Schellekens (1993) The Role of Marital Fertility in Irish Population History, 1750-1840. The Economic History Review, New Series, Vol. 46, No. 2, pp. 369-378, at p. 377
Monday, August 15, 2011
Globalization / Human Reason
by Wayne Ferrier
Psychiatrists and psychologists have come to the rational conclusion that man is incapable of coming to a rational conclusion. To a certain extent there may be some truth to this. While we are still in the beginning stages of understanding our own minds, we do have three or four good theories on how our mind operates—though we are far from a comprehensive holistic understanding.
All in all, many, if not most, instances of reasoning in man are what we call bounded rationality. Bounded rationality holds that when making decisions, the rational thought of individuals is limited by the information available to them at the time, the cognitive limitations of their minds, and the finite amount of time before a decision has to be made. Another way to look at bounded rationality is that, because decision-makers lack the ability and resources to arrive at an optimal solution, they instead simplify the choices available to them. Thus the decision-maker seeks a satisfactory solution rather than an optimal one.
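The difference between seeking a satisfactory solution and an optimal one can be sketched in a few lines of Python (the job-offer numbers and the aspiration threshold are invented for illustration): the optimizer examines every option before choosing, while the bounded-rational "satisficer" stops at the first option that clears an aspiration threshold.

```python
# Satisficing vs. optimizing, in the spirit of bounded rationality.
# The offers and the aspiration threshold are invented for illustration.
offers = [52, 48, 61, 70, 95, 58]  # e.g. salary offers, seen in order

# An optimizer inspects every option before choosing.
best = max(offers)

# A satisficer takes the first option that is "good enough".
def satisfice(options, aspiration):
    for option in options:
        if option >= aspiration:
            return option       # stop searching: good enough
    return options[-1]          # nothing cleared the bar; take the last seen

chosen = satisfice(offers, aspiration=60)
print(best, chosen)  # the satisficer settles for 61, not the optimal 95
```

The satisficer trades away optimality for speed and cheapness of search, which is precisely the bargain the paragraph above describes.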
In nature an animal that hesitates and remains indecisive is at a disadvantage to quicker-thinking individuals—a deer stunned by car headlights too many times is not likely to survive very long. It makes sense that there are selective pressures from the environment to mold species capable of making decisions based on just a few facts and then choosing a decisive plan of action. Man is such a creature.
Besides bounded rationality, it is also held that man possesses a theory of mind. This is the idea that an individual understands that others may have a view of the world that differs from their own, or even that others' concepts of the world might be fallacious. Among social animals there may be an advantage to individuals who understand that others may not have all the facts and that they can be misled and deceived. And while this is a simplification of theory of mind, and perhaps not everything about this ability need be perceived as negative, a theory of mind gives an individual the capability of deception, and hence of manipulating others for the benefit of the self.
Recently some researchers have suggested that reason evolved not to understand truth or even reality, but for the sole purpose of winning arguments. Human rationality may be just the impulse to win debates. According to this view, bias and illogic are social adaptations that enable one to persuade and defeat others in arguments—certitude being more important than what the truth may actually be.
This theory of argumentation is strongly tied to well-known and long-held concepts of human thought and behavior, in particular cognitive dissonance. Cognitive dissonance leaves people biased to think their choices are correct, in spite of overwhelming evidence to the contrary.
So when you add it up: bounded reason (quick decisions based on limited information); theory of mind, and the view that others don't have all the facts and are thus fallible and fool-able; cognitive dissonance (thinking we are right in spite of evidence that we might be wrong); holding onto incorrect views of the world in spite of the facts—regardless of all this, we argue on, purposefully filtering out contrary evidence and valuable information just so we can hold onto our cherished positions and manipulate others.
It is unfortunate but exceedingly interesting that decision-makers often adhere to immobile positions irrespective of the facts. But the adversarial-argumentative approach is a lose-lose proposition most of the time.
Science, which is supposed to be based on empiricism rather than on a priori reasoning, intuition, or revelation, is an ideal solution to the adversarial-only approach, but many so-called scientific voices are not trustworthy. For example, consider the argument concerning climate change. Engaging the scientific community in a discussion about climate change more often than not degenerates into ad hominem attacks such as: “You're not scientific if you don't believe in global warming,” or “If you don't believe in global warming you probably don't believe in the law of gravity either.” Often instead of producing facts, we are told the time for discussion is over, that global warming is real and we need to act now, not question it anymore. Global warming is the only hypothesis in science, it seems, that skipped right over becoming a theory and somehow became a physical law in just a few short decades of research.
Major problems worldwide
Now here is a list of real problems which, I feel, bounded reason, cognitive dissonance, and irrational arguing are unlikely to solve; yet we need to address these issues if we are to survive and thrive as a species. Each problem is factual and is integrally entwined with the others, so that each problem affects all the others in a web of complexity. Our current system of thinking is inadequate to solve any of them satisfactorily alone, let alone all of them woven together.
Regardless of climate change—global warming or no global warming—climate affects weather and weather affects agriculture. We no longer have a worldwide food surplus, in part because of bad weather and in part because of overpopulation. The rise of China and India and other countries has eaten into our surplus, and just a season or two of bad weather has sparked ugly situations such as the Arab Spring. In addition, crises such as the Arab Spring reflect intolerable social inequality, as well as the possibility of impending starvation due to crop failure. Like dominoes, the Arab Spring caused a rise in oil prices affecting Western countries dependent on foreign oil. So here in the West we're feeling high fuel prices, high food prices, chronic unemployment, social inequality, and ineffectual government—and the weather isn’t fun either! So what causes all this? Simple: overpopulation. Overpopulation can be tied to just about any major modern problem: disease, famine, habitat destruction, pollution, war, you name it; and if you believe in climate change via man-made carbon emissions, also the weather.
Globalization and economy
It is getting hard to ignore that the time of nations is ending. It has been clear for a while that no nation exists on its own anymore, and what happens to one country affects everyone else. The United States never got out of WWII mode: it went right into the Cold War, and when that ended it entered a lengthy and expensive war against terrorism. In that sixty-five-year period it neglected the creation of meaningful employment and living-wage jobs. Instead it presided over rising inequality and failed to fix its malfunctioning educational system, which has left America's workforce poorly educated, unemployed or marginally employed; many are living in substandard housing, on the streets, in prison, or are just a paycheck away from the streets or prison. How long do we wait before we have an American Spring? When the American economy finally collapses, so goes the rest of the world.
Inequality and low- or no-wage employment are leading to a massive brain drain in America. Then there is the crumbling infrastructure—left largely untouched since the brief economic boom after WWII. Here are meaningful jobs waiting to be created, but nothing is being done.
In the meantime all we do is argue, but we don't listen, nor do we analyze.
A snippet from the NPR Radio Program WAIT WAIT. . .DON'T TELL ME says it best:
PETER SAGAL: It turns out, the reason human beings developed intelligence was not to be better hunters or better survive against other species, but to win arguments. See, the thing that has always puzzled people about human intelligence, how humans got so smart, is why humans are still so stupid.
(Soundbite of laughter)
PETER SAGAL: Because we continually believe things that are incorrect and behave irrationally. And so people evolved, it turns out, the ability to convince themselves they were right even when they were full of it. You see, that's the explanation.
MO ROCCA: That's interesting.
FAITH SALIE: Does this mean that politicians are the most evolved among us?
PETER SAGAL: Exactly.
(Soundbite of laughter)
Monday, August 01, 2011
Kipple and Things: How to Hoard and Why Not To Mean
This paper (more of an essay, really) was originally delivered at the Birkbeck Uni/London Consortium ‘Rubbish Symposium’, 30th July 2011
Living at the very limit of his means, Philip K. Dick, a two-bit, pulp sci-fi author, was having a hard time maintaining his livelihood. It was the 1950s and Dick was living with his second wife, Kleo, in a run-down apartment in Berkeley, California, surrounded by library books Dick later claimed they “could not afford to pay the fines on.”
In 1956, Dick had a short story published in a brand new pulp magazine, Satellite Science Fiction. Entitled Pay for the Printer, the story contained a whole host of themes that would come to dominate his work.
On an Earth gripped by nuclear winter, humankind has all but forgotten the skills of invention and craft. An alien, blob-like species known as the Biltong co-habit Earth with the humans. They have an innate ability to ‘print’ things, popping out from their formless bellies copies of any object they are shown. The humans are enslaved not simply because everything is replicated for them but because, in a twist Dick was to use again and again in his later works, as the Biltong grow old and tired, each copied object resembles the original less and less. Eventually everything emerges as an indistinct, black mush. The short story ends with the Biltong themselves decaying, leaving humankind on a planet full of collapsed houses, cars with no doors, and bottles of whiskey that taste like anti-freeze.
In his 1968 novel Do Androids Dream of Electric Sheep? Dick gave a name to this crumbling, ceaseless, disorder of objects: Kipple. A vision of a pudding-like universe, in which obsolescent objects merge, featureless and identical, flooding every apartment complex from here to the pock-marked surface of Mars.
“No one can win against kipple,”
“It’s a universal principle operating throughout the universe; the entire universe is moving toward a final state of total, absolute kippleization.”
In kipple, Dick captured the process of entropy, and put it to work to describe the contradictions of mass-production and utility. Saved from the wreckage of the nuclear apocalypse, a host of original items – lawn mowers, woollen sweaters, cups of coffee – are in short supply. Nothing ‘new’ has been made for centuries. The Biltong must produce copies from copies made of copies – each replica seeded with errors will eventually resemble kipple.
Objects, things, are mortal, transient. The wrist-watch functions to mark the passing of time, until it finally runs down and becomes a memory of a wrist-watch: a skeleton, an icon, a piece of kipple. The butterfly emerges from its pupa in order to pass on its genes to another generation. Its demise, its kipple-isation, is programmed into its genetic code: a consequence of the lottery of biological inheritance. Both the wrist-watch and the butterfly have fulfilled their functions: I utilised the wrist-watch to mark time; the ‘genetic lottery’ utilised the butterfly to extend its lineage. Entropy is absolutely certain, and pure utility will always produce it.
In his book Genesis, Michel Serres argues that objects are specific to the human lineage. Specific, not because of their utility, but because they indicate our drive to classify, categorise and order:
“The object, for us, makes history slow.”
Before things become kipple, they stand distinct from one another. Nature seems to us defined in a similar way: between a tiger and a zebra there appears a broad gap, indicated in the creatures’ inability to mate with one another; indicated by the claws of the tiger and the hooves of the zebra. But this gap is an illusion, as Michel Foucault neatly points out in The Order of Things:
“…all nature forms one great fabric in which beings resemble one another from one to the next…”
The dividing lines indicating categories of difference are always unreal, abstracted from the ‘great fabric’ of nature, and understood through human categories isolated in language.
Humans themselves are constituted by this great fabric: our culture and language lie on the same fabric. Our apparent mastery over creation comes from one simple quirk of our being: the tendency we exhibit to categorise, to cleave through the fabric of creation. For Philip K. Dick, this act is what separates us from the alien Biltong. They can merely copy, a repeated play of resemblance that with each iteration moves away from the ideal form. Humans, on the other hand, can do more than copy. They can take kipple and distinguish it from itself, endlessly, through categorisation and classification. Far from using things until they run down, humans build new relations, new meanings, carefully and slowly from the mush. New categories produce new things, produce newness. At least, that’s what Dick – a Platonic idealist – believed.
At the end of Pay for the Printer, a disparate group camp in the kipple-ised, sagging pudding of a formless city. One of the settlers has with him a crude wooden cup he has apparently cleaved himself with an even cruder, hand-made knife:
“You made this knife?” Fergesson asked, dazed.
“I can’t believe it. Where do you start? You have to have tools to make this. It’s a paradox!”
In his essay, The System of Collecting, Jean Baudrillard makes a case for the profound subjectivity produced in this apparent newness.
Once things are divested of their function and placed into a collection, they:
“…constitute themselves as a system, on the basis of which the subject seeks to piece together [their] world, [their] personal microcosm.”
The use-value of objects gives way to the passion of systematization, of order, sequence and the projected perfection of the complete set.
In the collection, function is replaced by exemplification. The limits of the collection dictate a paradigm of finality; of perfection. Each object – whether wrist-watch or butterfly – exists to define new orders. Once the blue butterfly is added to the collection it stands, alone, as an example of the class of blue butterflies to which the collection dictates it belongs. Placed alongside the yellow and green butterflies, the blue butterfly exists to constitute all three as a series. The entire series itself then becomes the example of all butterflies. A complete collection: a perfect catalogue. Perhaps, like Borges’ Library of Babel, or Plato’s ideal realm of forms, there exists a room somewhere with a catalogue of everything. An ocean of examples. Cosmic disorder re-constituted and classified as a finite catalogue, arranged for the grand cosmic collector’s singular pleasure.
The problem with catalogues is that absolutely anything can be collected and arranged. The zebra and the tiger may sit side-by-side if the collector is particularly interested in collecting mammals, striped quadrupeds or – a particularly broad collection – things that smell funny. Too much classification, too many cleaves in the fabric of creation, and order once again dissolves into kipple. Disorder arises when too many conditions of order have been imposed.
“[W]e must think of chaos not as a helter-skelter of worn-out and broken or halfheartedly realised things, like a junkyard or potter’s midden, but as a fluid mishmash of thinglessness in every lack of direction as if a blender had run amok. ‘AND’ is that sunderer. It stands between. It divides light from darkness.”
Collectors gather things about them in order to exert a mastery over the apparent disorder of creation. The collector attains true mastery over their microcosm. The narcissism of the individual extends to the precise limits of the catalogue he or she has arranged. Without AND, language would function as nothing but pudding, each clause, condition or acting verb leaking into its partner in an endless series. But the problem with AND, with classes, categories and order, is that they can be cleaved anywhere.
Jorge Luis Borges exemplified this perfectly in a series of fictional lists he produced throughout his career. The most infamous, which Michel Foucault claimed influenced him to write The Order of Things, refers to a “certain Chinese encyclopaedia” in which:
Animals are divided into
- belonging to the Emperor,
- sucking pigs,
- stray dogs,
- included in the present classification,
- drawn with a very fine camelhair brush,
- et cetera,
- having just broken the water pitcher,
- that from a long way off look like flies…
In writing about his short story The Aleph, Borges also remarked:
“My chief problem in writing the story lay in… setting down of a limited catalog of endless things. The task, as is evident, is impossible, for such a chaotic enumeration can only be simulated, and every apparently haphazard element has to be linked to its neighbour either by secret association or by contrast.”
No class of things, no collection, no cleaving of kipple into nonkipple can escape the functions of either “association OR contrast…” The lists Borges compiled are worthy of note because they remind us of the binary contradiction classification always comes back to:
- firstly, that all collections are arbitrary,
- and secondly, that a perfect collection of things is impossible, because, in the final instance, there is only pudding “…in every lack of direction…”
Human narcissism – our apparent mastery over kipple – is an illusion. Collect too many things together, and you re-produce the conditions of chaos you tried so hard to avoid. When the act of collecting comes to take precedence over the microcosm of the collection, when the differentiation of things begins to break down: collectors cease being collectors and become hoarders. The hoard exemplifies chaos: the very thing the collector builds their catalogues in opposition to.
To tease apart what distinguishes the hoarder from the collector, I’d like to introduce two new characters into this arbitrary list I have arranged about myself. Some of you may have heard of them; indeed, they are the brothers after whom the syndrome of compulsive hoarding is named.
The brothers Homer and Langley Collyer lived in a mansion at 2078 Fifth Avenue, Manhattan. Sons of wealthy parents – their father was a respected gynaecologist, their mother a renowned opera singer – the brothers both attended Columbia University, where Homer studied law and Langley engineering. In 1933 Homer suffered a stroke which left him blind and unable to work at his law firm. As Langley began to devote his time entirely to looking after his helpless brother, both men became locked inside the mansion their family’s wealth and prestige had delivered. Over the following decade or so Langley would leave the house only at night. Wandering the streets of Manhattan, collecting water and provisions to sustain his needy brother, Langley developed obsessive routines, giving his life a meaning above and beyond the streets of Harlem that were fast becoming run-down and decrepit.
But the clutter only went one way: into the house.
On March 21st 1947 the New York Police Department received an anonymous tip-off that there was a dead body in the Collyer mansion. Attempting to gain entry, police smashed down the front door, only to be confronted with a solid wall of newspapers (which, Langley had claimed to reporters years earlier, his brother “would read once his eyesight was restored”). Finally, after climbing in through an upstairs window, a patrolman found the body of Homer – now 65 years old – slumped dead in his kippleised armchair. In the weeks that followed, police removed one hundred and thirty tons of rubbish from the house. Langley’s body was eventually discovered crushed and decomposing under an enormous mound of junk, lying only a few feet from where Homer had starved to death. Crawling through the detritus to reach his ailing brother, Langley had triggered one of his own booby traps, set in place to catch any robbers who attempted to steal the brothers’ clutter.
The list of objects pulled from the brothers’ house reads like a Borges original. From Wikipedia:
Items removed from the house included baby carriages, a doll carriage, rusted bicycles, old food, potato peelers, a collection of guns, glass chandeliers, bowling balls, camera equipment, the folding top of a horse-drawn carriage, a sawhorse, three dressmaking dummies, painted portraits, pinup girl photos, plaster busts, Mrs. Collyer’s hope chests, rusty bed springs, a kerosene stove, a child’s chair, more than 25,000 books (including thousands about medicine and engineering and more than 2,500 on law), human organs pickled in jars, eight live cats, the chassis of an old Model T Ford, tapestries, hundreds of yards of unused silks and fabric, clocks, 14 pianos (both grand and upright), a clavichord, two organs, banjos, violins, bugles, accordions, a gramophone and records, and countless bundles of newspapers and magazines.
Finally: There was also a great deal of rubbish.
A Time Magazine obituary from April 1947 said of the Collyer brothers:
“They were shy men, and showed little inclination to brave the noisy world.”
In a final ironic twist of kippleisation, the brothers themselves became mere examples within the system of clutter they had amassed. Langley especially had hoarded himself to death. His body, gnawed by rats, was hardly distinguishable from the kipple that fell on top of it. The noisy world had been replaced by the noise of the hoard: a collection so impossible to conceive, to cleave, to order, that it had dissolved once more to pure, featureless kipple.
Many hoarders achieve a similar fate to the Collyer brothers: their clutter eventually wiping them out in one final collapse of systemic disorder.
But what of Philip K. Dick…?
In the 1960s, fuelled by amphetamines and a debilitating paranoia, Dick wrote 24 novels and hundreds of short stories, the duds and the classics mashed together into an indistinguishable hoard. Ubik, published in 1969, tells of a world which is itself degrading. Objects regress to previous forms: 3D televisions turn into black and white tube-sets, then stuttering reel projectors; credit cards slowly change into handfuls of rusted coins, impressed with the faces of Presidents long since deceased. A character turns his back for a few minutes and his hover vehicle has degraded into a bi-propeller airplane.
The Three Stigmata of Palmer Eldritch, another stand-out novel from the mid-1960s, begins with this memo, “dictated by Leo Bulero immediately on his return from Mars”:
“I mean, after all; you have to consider we’re only made out of dust. That’s admittedly not much to go on and we shouldn’t forget that. But even considering, I mean it’s a sort of bad beginning, we’re not doing too bad. So I personally have faith that even in this lousy situation we’re faced with we can make it. You get me?”
Monday, July 25, 2011
Brain, liquefaction of
The following is an excerpt from my unpublished manuscript “A Shorter History of Bodily Fluids”
Brain, liquefaction of: also known as encephalomalacia (from the Greek, μαλακία softening), necrencephalus (from Greek, νεκρο + κεϕαλή deadhead), ramollissement cérébral (from the French ramollissement cérébral), cerebromalacia (from the Greek, μαλακία a colloquial onanist, esp a vehicular onanist; cf blood, semen), cerebral softening (from the Old English soft meaning soft), or more commonly, softening of the brain (pronounced US /breɪn/). When the tissue affected is white matter it is called leukoencephalomalacia; polioencephalomalacia refers to necrosis of the gray matter. This condition may manifest as multiple necrotic fluid-filled cavities replacing healthy brain tissue. It is preferable to inspect this necrosis post-mortem, especially if attempting to administer home remedies. If you are a sheep, the following suite of symptoms will be diagnostically useful in identifying brain liquefaction: somnolence, short sightedness, ataxia (poor coordination), head pressing, tumblesaulting, walking in circles, walking bipedally, excessive bleating or bleating in prime numbers, and terminal coma. I treated a mouse once that, after a fall, complained to me that she could only walk in circles. It greatly affected her travel plans and she died penniless, vastly undereducated, and living very close to where she was born.
If after munching on yellow star thistle (Centaurea solstitialis) you become excessively sleepy or find yourself given to aimless wandering and go off your feed, you might be a horse. Unfortunately, you also have a condition called nigropallidal encephalomalacia. Avoid prehending Russian knapweed. If you are a chicken and have ataxia, paralysis, severe softening of the brain, and are brooding excessively on death, you have “crazy chick disease”. Take vitamin E capsules with your feed and avoid gassy foodstuffs. Rhinoceroses should also remember to have their vitamin E levels assessed regularly; consider doing so even between regular checkups. If you are a rhinoceros, be vigilant for signs of depression; if you are feeling down, just pop in to your vet. If your condition has progressed to coma, it’s best to have him visit you.
Clinical notes of liquefaction of the brain
Fragment from the journal of Dr K, of Naumburg
“I had a patient today (to protect his anonymity I will refer to him as Master F Nietzsche) who presented with headaches. Friedrich is 18. He is a squat young man, moody and diffident; short sighted in one eye, long-sighted in the other. The locations of his headaches are worth remarking: one of them was on his glabella, one on the supraorbital processes, another very thin headache runs along the coronal suture, one on the patellar groove, and there is a persistent one above his pronounced ischial callosities. N complains of cephalalgia throughout his body. He is also suffering from a great despondency which expressed itself in a fixed stare and excessive sighing. Apparently his father went blind and wasted away, dying young from liquefaction of the brain. He fears this same fate. I recommended a companion animal to him but he muttered that his dog was already dead, or was it that the log is painted red? I prescribed fresh air, a moustache, and morose meditation.” (translation mine)
The ramollissement of Mr P
I had occasion to work quite recently with William Madden, MD, Physician of the Torbay Infirmary and Dispensary on the following fascinating case of ramollissement of the grey matter of the medulla. Our patient, Mr P came under our care in the late summer of 1838. Mr P had engaged in heavy drinking with some rowdy boys, greedily joining in on their excessive imbibitions. After this he developed a burning pain on the instep of his left foot. He lost much of the feeling in the ailing foot and the lower part of the leg. When he walked it felt as though he were walking upon “heaps of warm bran.” After a chilly journey to Roslin a few miles from his home his face stiffened on the side closest to the carriage window. Dr Madden and I prescribed the following usually very efficacious cures: bleeding, blistering of the head and spine, and severe purgation – these continuing for several days, ceasing only when Mr P partially lost his vision. Naturally enough we tried galvanism though I am not inclined to inform you how much we shocked the ailing man as Dr Madden and I disagreed on precisely this point. Alas after six tries Mr P abandoned the cure. He also refused more bleeding. His family reported that he was becoming increasingly irritable and burdensome at home. His bowels remained open and his stools loose but not excessively so (cf. Stool, runny). As the days wore on the pain increased and the patient’s arms were in constant motion. We bled him, draining him to the point that his pulse dropped and then administered a purgative to his unwilling bowels. He slept poorly but his bowels were productive. We bled him, and bled him again. Finally the sensations came back to his feet after which Mr P died. The sectio cadaveris performed forty-two hours after death revealed that the ventricles were distended with fluid, with much of it spilling over into the spinal canal. Other parts of the brain were pulpy. The center of the spinal cord had become completely fluid.
A case of brain shrinkage and liquefaction
During the post-mortem examination of a Mr S I found that when I sawed open his head there was a very significant quantity of clear serum on the surface of the brain. I had treated this man alongside Dr Thomas Nunneley. You probably know Nunneley as the surgeon to the Leeds General Eye and Ear Infirmary. Mr S suffered from wakeful nights and complained of heat in his head. After he was seized by a fit in September 1841 Dr Nunneley and I suspected acute liquefaction of the brain. The patient was cupped, leeched, blistered, and administered mercurous chloride, henbane with camphor, and strychnine. Naturally, he improved. Little changed in his condition with the exception of the growing offensiveness of his language, something he was not inclined towards when in good health. Additionally he took to yelling out “Oh dear! Oh dear!” or would occasionally mutter to the servants “Is there Mary”, or “What do you say Charles”. I am reminded here of the case reported to me by my colleague Dr G of Genoa who related that as the Irish leader Daniel O’Connell lay dying of softening of the brain he repeatedly murmured “Jesus…Jesus…Jesus…”. The “Liberator” and Member of Parliament for Dublin died in 1847, a year after Mr S. To continue, Mr S’s bowels were constipated. After his fit he lingered for two years and died in his chair. As I said, when I examined him postmortem the surface of the brain was excessively wet. When I dissected the hemisphere I found the ventricles distended with serum and the lining of the ventricles was pultaceous. I have never seen such a small cerebellum. I did not have an opportunity to weigh this organ.
A note on sources
I am especially indebted to my former student, the late Professor E Z, whose magisterial General and Special Pathology, originally published in 1881, usefully synthesized our current clinical knowledge of the liquefaction of necrotic tissue. Z was Professor of Pathology in the University of Freiburg; before this he was Chair of Pathology and Morbid Anatomy in the University of Zurich and later at Tübingen. Beloved by his students, his specialty was in “tubercle” and in the cellular nature of inflammation. Another discovery of Z’s: “All life”, he said, “comes soon or later to an end – to death.” [Emphasis Z’s]. This fact I suppose was well enough known before this time; science, however, often calls for the bold statement of the obvious. Yet another insight of Professor Z’s: “When death occurs prematurely…it must be regarded as a pathological phenomenon.” At the time of Z’s death we were working up our autopsy notes on the case of a retired philologist from the University of Basel. This man had gained some notoriety as a philosopher-poet. Our philologist had lapsed into a demented silence after his 1889 collapse in Turin, and had eventually died on August 25th 1900 after a series of apoplectic fits. Though tertiary cerebral syphilis was suspected, Drs Binswanger and Ziehen, the philologist’s physicians, contrary to the desire of his sister, requested a post-mortem confirmation of the diagnosis. Alas, our dear Professor Z died in Freiburg aged 56 before we completed the manuscript. The location of the autopsy notes is unknown at this time. I shall reconstruct them at a later stage as it has not escaped my notice that there has been some speculation among the greater public on this case. The philologist is buried next to his beloved father in Röcken.
I extend gratitude to my colleagues Drs Madden and Nunneley for sharing with me their notes and manuscripts (listed below) on these edifying cases of liquefaction of the brain; these amply jogged my memory, which has become diminished of late.
Krell, David F. and Bates, Donald L. (1999) The Good European: Nietzsche’s Work Sites in Word and Image. University of Chicago Press.
Madden, William H. (1850) Illustrations of Diseases of the Nervous System. London Journal of Medicine, Vol. 2, No. 13, pp. 10-16.
Miller, R. Eric, Richard C. Cambre, Alexander de Lahunta, Roger E. Brannian, Terry R. Spraker, Carol Johnson and William J. Boever (1990) Encephalomalacia in Three Black Rhinoceroses (Diceros bicornis). Journal of Zoo and Wildlife Medicine, Vol. 21, No. 2, pp. 192-199.
Nunneley, Thomas (1846) Case of Diminished Brain. Provincial Medical and Surgical Journal, Vol. 10, No. 26, pp. 297-299.
O’Faoláin, Seán (1938) King of the Beggars: A Life of Daniel O’Connell, the Irish Liberator, in a Study of the Rise of the Modern Irish Democracy (1775-1847). The Viking Press.
Thom, Alexander (1906) Ernst Ziegler, M.D., Professor of Pathology, University of Freiburg. The British Medical Journal, Vol. 1, No. 2352, pp. 236-237.
Ziegler, E. (1898) General Pathology. Translated by Aldred Scott Warthin. William Wood and Company.
Monday, July 18, 2011
Sunday Morning in a Northeastern Old Growth Forest
God is the experience of looking at a tree and saying, "Ah!"
Most people who reside in the Northeastern United States don’t know that there are remains of old growth forests scattered here and there among them. And most don’t care. The human species is not hard-wired to appreciate these things. The people who do appreciate them have a difficult time digesting this, but it’s true. Most people’s world view is a social reality imprinted and reinforced by the way other human beings look at the world. Human beings are social animals and few could survive alone in the wilderness; they’d starve or succumb to the elements. However, most would lose their sanity long before the unforgiving laws of nature got them. We see this phenomenon in our prisons, where inmates prefer to be out in the yard even if “out in the yard” there are other inmates waiting to kill them. Being killed by one’s fellows is far preferable to the worst of fates—solitary confinement. In ancient times the worst thing that could happen to you was banishment.
Natural selection has certainly predisposed human beings to be with other human beings, to gravitate towards other human beings even if they don’t like them, and to see things the way other human beings do because it enhances their survival. Human beings trade reality for social reality. Yes there are differences between people but the differences are minor when compared to the way things are outside of our towns and cities. Anyone who has studied science, for example, knows that the universe doesn’t work—not even remotely—the way that most of human society thinks it does. And this may be a reason why many people have a hard time with science—it violates one’s sense of reality in much the same way that psychoactive drugs like LSD do, by dismantling and reassembling one’s perception of the universe.
Nature can be just as trying. If you are “out there” too long it can alter your state of mind by changing your perception of it. Few people can handle this. But a certain few do, and these folks might have a predisposition or a domain specificity towards nature—the circuitry of their nervous system is geared to specialize in that specific kind of reality. Scientists might be wired differently; naturalists might be wired differently; police might be, and also emergency responders, teachers, morticians, mechanics; each having the generalized social intelligence we all share, while specializing in an area that others know nothing about. But for most of us the idea of back to nature might be a myth. In our past we might have been closer to nature, but we probably were never truly happy living in it as a group.
And for the planet this may be a good thing. Towns and cities, artificial as they are, might have saved the rest of the planet from our kind. If human beings didn’t concentrate in highly populated areas, they would be more uniformly spread out across the continents and human beings are harder on the environment than a herd of elephants is. So cities it is!
Eastern old growth forests are few and far between, but they do exist. It’s not fair to compare the trees in them, in size or age, to the impressive stands in the western United States. Redwoods of the Sierra Nevada can be as old as 1,500 to 3,000 years and reach 280 feet. Foxtail pines can get even older, though they are not so impressive in size; some bristlecones in this group are reputed to be nearly 5,000 years old! East Coast species are junior members in this venerable club.
However, when Europeans first came to Northeastern North America they were faced with a sea of old growth forests, which was something they were not quite used to. The first attempts to establish colonies here ended in disaster. With sheer persistence the Puritans succeeded by intensifying the rigidity of their social structure and hugging the coasts. Meanwhile, north and south of them, more adventurous individuals penetrated into the New Hampshire and Pennsylvania wilderness and tried to tame it. Early on the Dutch made forays into Upstate New York but didn’t last. The French were more adaptable, befriending certain native tribes of Indians and penetrating deep into the interior—they were a special breed.
The Woodland Indians themselves were closer in spirit to these forests than the Europeans were. But even they kept to their village life most of the time. They slashed and burned the forests and planted fields of corn, beans, and squash; they had orchards that were the envy of their white neighbors. The strongest of these were the Haudenosaunee, commonly called the Iroquois, and theirs was quite an advanced civilization. They had a sophisticated government, and their extensive roads and trails stretched from the Atlantic to the Great Lakes and from Canada to Pennsylvania. They managed to balance the power between the English and the French, and the numerous other tribes to the north, to the south, to the east and to the west.
The British had been cutting the trees in New England. And the Eastern White Pine was especially coveted. It was said that there was so much White Pine in the Northeast woodlands that a squirrel could spend a squirrel’s lifetime hopping from one branch to another and never reach the end of it. Straight and tall, light and sturdy, relatively weather resistant, the tallest White Pines made the best masts for sailing ships, and England was engaged, at the time, in major conflicts with France—good ships were necessary. And when these were gone the timber was used for just about everything else. Early America was built on white pine, and much of it was exported to the rest of the world as well.
The American settlers didn’t appreciate the French who, along with their Indian allies, would make life exceedingly difficult for anyone who had the guts to penetrate and try to tame the interior. But when the French were defeated in the French & Indian War, the Americans became more irritated with the British Government, which wanted the timber for itself, demanded first dibs on the cod fisheries, and wanted to control the rum and slave trades—all very lucrative. The British Government also wanted the Americans to pay taxes to help reimburse the British for that costly war with the French; and the Americans were not willing to do that.
All in all, once the British were defeated, the Americans again moved into the interior, cutting trees, clearing fields for farmland, and establishing forts and villages. Only the powerful Iroquois stood in their way; but after one skirmish too many, Washington lost patience and sent troops in to wipe the Haudenosaunee from the face of the earth. The Sullivan Campaign moved into Iroquoia, burned their villages, chopped down their orchards, and destroyed their fields, and anything else they could find. Without their orchards, without their fields, without their grain stores, the Indians were as helpless as any white man facing the elements of the northeastern forests and the coming winter. The Iroquois either retreated to Canada or faced starvation.
With the Iroquois out of the way pioneers quickly moved into the interior, at first hunting, fishing, and trapping, then logging and farming. A lot of timber was burned simply to make charcoal and potash or roof shingles. Virgin soils were farmed, depleted, and then the farms abandoned. Much of New England is forests that have taken over and reclaimed abandoned farmland.
By the turn of the twentieth century, most of the virgin timber, as far west as Minnesota, had been cut. Clear-cutting continued into the 1950s, and today we are left with juvenile forests—unhealthy ecosystems infested with disease.
Eastern forests are reviving, but it will be centuries before they become as they once were. The Appalachians are aggressive mountains. Time and again people move in from the city, cut everything down, bulldoze out a driveway and plant a big lawn; they put up a pool, and try to grow a lot of exotics. They display their plastic pink flamingos and ride their expensive lawn mowers in the pursuit of the American Dream. Ah, social reality. Give or take a decade our cozy family is divorced or deceased, or worse—surrendered to the forest. The yard is unkempt and the forest is back. The natural state of the Northeast is forest.
But for now I live in the city and the fight goes on. I share my city with deer, possums, skunks, groundhogs, squirrels, birds—critters who have gotten used to this—and people. Every summer without fail some guy with a little too much testosterone goes out and rents a chainsaw and the cutting ritual begins again. This year my neighbors cut down a beautiful 75-year-old maple, so they could set fireworks off on the Fourth of July—sigh. So now I need to get out of here for a while and spend my Sundays lying on a carpet of pine needles, listening to the sweet sounds of a thrush welcoming the morning, staring up at a 150-foot tree in an old growth stand, and contemplating. Was this a seedling when Shakespeare wrote Hamlet? Was it 100 years old before the first white settlers ever made it to these parts?
Monday, July 11, 2011
Babies, Breast Milk, and Bifidobacteria
by Meghan Rosen
Earlier this year, a London ice cream parlor debuted an attention-grabbing new flavor that made headlines around the world and sold out within days. The flavor, Baby Gaga, was infused with Madagascan vanilla and lemon zest and served in a martini glass chilled with liquid nitrogen. But at over $22 a serving, customers weren’t coming for its gourmet spices or upscale presentation; they were coming for its star ingredient, its claim to fame: human breast milk.
Just a week after giving birth, women who exclusively breastfeed produce, on average, more than 500 milliliters of milk per day. In parlor measurements, that's about a pint of liquid. At 6 weeks, this amount has typically increased by about 50%; in some highly productive women, it can even double. For women with an abundant supply, excess milk can be drawn out with an electric pump and stored for future consumption (by baby, or in London, by high-paying ice cream connoisseurs).
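For readers who want to check the parlor math, here is a minimal sketch of the unit conversion, using only the figures quoted above (500 mL/day at one week, roughly 50% more at six weeks) and assuming US liquid pints; UK pints are larger, so the numbers would come out a bit smaller:

```python
# Sanity-checking the milk-volume figures from the text.
# Assumption: "pint" means a US liquid pint (473.176 mL).

ML_PER_US_PINT = 473.176

def ml_to_us_pints(ml):
    """Convert milliliters to US liquid pints."""
    return ml / ML_PER_US_PINT

daily_output_ml = 500                 # average, one week postpartum
print(f"{ml_to_us_pints(daily_output_ml):.2f} pints")  # -> 1.06 pints

# By six weeks, output has typically risen by about 50%:
six_week_ml = daily_output_ml * 1.5   # 750 mL
print(f"{ml_to_us_pints(six_week_ml):.2f} pints")      # -> 1.59 pints
```

So "about a pint" is accurate for the one-week figure, and the six-week supply lands around a pint and a half.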
In an interview with the Daily Mail, the London parlor’s proprietor played up the novelty of his new flavor, but his description of its taste (‘creamy and rich’) was comfortably familiar. Flavor-wise, how does milk from humans compare to milk from cows? Can you even taste a difference? I don’t live in London, but I do have an ice cream maker. It’s in my freezer, right next to 2 liters of frozen breast milk.
Three weeks after the birth of my daughter, nightly pumping sessions left me with an unexpected, but not altogether unwelcome problem: I ran out of bottles to store milk in. (It’s not uncommon for women to produce too much or too little milk; it often takes weeks to establish a supply that matches the baby’s appetite.) After moving on to glass jars, ice cube trays, and finally, proper storage baggies, I had amassed enough milk to make more than 100 servings of ice cream (following Baby Gaga recipe proportions).
As bodily fluids go, breast milk is not an unlikely candidate for dessert innovation. After all, the most abundant component is sugar; the next is fat. Those two ingredients are about all you need to make a tasty frozen treat, and since a mother’s milk is steeped in the flavors, smells, and colors of what she eats, additives may even be unnecessary. A garlicky dinner, for example, predictably changes the taste of human breast milk, and babies tend to like it. One study even found that babies preferred their mother’s garlic-imbued milk to milk that was garlic-free.
After sugar and fat, the third most common component of human breast milk is not what you might think: it's not protein, it's not vitamins; in fact, it's not even digestible by babies. Human milk includes a hefty proportion of molecules called human milk oligosaccharides, or HMOs (essentially long chains of simple sugars linked together in different conformations), that travel from the mother's breast to the infant's mouth and pass right on through its digestive tract.
Until recently, scientists considered these compounds just a bulky byproduct of lactation; after all, if it didn’t directly provide nutrition for the baby, what could it be good for?
But making milk isn’t free; for the mother, it’s actually quite expensive. It takes about 500 calories to fill and continually restock the breasts with a baby’s daily nourishment. Typically, a woman will burn fat stored during pregnancy or simply increase her food intake to meet the demand, but if she’s not getting enough nutrients, her body will tap into its own emergency reserves (like her bones or her teeth) to provide the baby with what it needs.
Rich milk makes for chubby, healthy babies, and healthy babies have a greater chance at survival, but it’s a finely balanced system: take too much from the mother and her health may be at risk. If every part of the milk comes at a cost, it’s unlikely that any part would be extraneous (especially those that are most abundant). Why waste the calories?
The initial understanding of HMOs wasn’t exactly wrong – babies can’t use the long chains of sugar as a source of nutrition – but it was missing one key point: other organisms can. HMOs may be indigestible by humans, but they’re the perfect food source for bacteria: in particular, Bifidobacterium longum infantis, a species that’s specialized to live in a baby’s gut.
Researchers at UC Davis have shown that bifidobacteria have a unique set of genes that is particularly suited for allowing growth in an infant’s intestine, where HMOs are abundant. Their work, profiled in the NY Times last year, helps explain why humans may have evolved to invest so heavily in a milk ingredient that is, for us, inedible.
Because bifidobacteria thrive on HMOs, they have a leg up on other, less benevolent bacteria that are also clamoring for a home in the intestine. The well-fed bifidobacteria crowd out potential pathogens, effectively protecting the baby from infections. Breastfed babies tend to have fewer intestinal diseases and less constipation than their formula-fed counterparts: much of this is attributed to a gut full of beneficial bacteria living in harmony with their newborn human host.
Besides cultivating a community of ‘good’ intestinal bacteria, HMOs are also thought to trick ‘bad’ bacteria by mimicking the cells lining a baby’s gut. Instead of attaching to the baby’s cells and sneaking past its defenses to start an infection, pathogens bind to HMOs (which are replenished every time the baby nurses) and are flushed out with the waste.
Breast milk is tailor-made for guarding a baby's newly developing immune system (according to the World Health Organization, it's the best thing parents can feed their infants), and many people are willing to pay a premium for it. For mothers with milk supply problems, there's an unregulated, craigslist-style market where human breast milk can fetch more than $2.50 an ounce, and women advertise their milk as 'organic', 'vegetarian', and 'free-range'. (The FDA does not approve.)
Human milk is a hot commodity, and not just for new parents. At OnlyTheBreast.com, among buyer listings for ‘Local Milk’ and ‘Special Diet Milk’, there’s also a category for ‘Men Buying Milk’. (As of today, there were 17 buyers.)
Although breast milk is the gold standard for baby food, its cost can be prohibitive (unless you are making your own, human milk is much more expensive than formula), and its quality is not guaranteed (infectious diseases can be passed through milk, and there’s no screening in place to protect potential buyers). Current formula alternatives attempt to imitate human milk, but lack the immune-protective benefits and bacterial-promoting pre-biotics (like HMOs).
It might be possible, however, to create a more milk-like formula by studying human breast milk; this could give premature infants (whose mothers' milk often takes longer to come in) a healthier start to life. But donated milk is in short supply for milk banks, and in even shorter supply for research. A lactation consultant at UC Davis told me milk researchers on campus are always thrilled to receive human milk donations because they're not easy to come by. Unless, like me, you happen to have a freezer full of them. And live in Davis. For now, I think homemade ice cream may have to wait.