February 04, 2013
The Science Mystique
by Jalees Rehman
Many of my German high school teachers were intellectual remnants of the “68er” movement. They had either been part of the 1968 anti-authoritarian and left-wing student protests in Germany or they had been deeply influenced by them. The movement gradually fizzled out and the students took on seemingly bourgeois jobs in the 1970s as civil servants, bank accountants or high school teachers, but their muted revolutionary spirit remained on the whole intact. Some high school teachers used the flexibility of the German high school curriculum to infuse us with the revolutionary ideals of the 68ers. For example, instead of delving into Charles Dickens in our English classes, we read excerpts of the book “The Feminine Mystique” written by the American feminist Betty Friedan.
Our high school level discussion of the book barely scratched the surface of the complex issues related to women’s rights and their portrayal by the media, but it introduced me to the concept of a “mystique”. The book pointed out that seemingly positive labels such as “nurturing” were being used to propagate an image of the ideal woman, who could fulfill her life’s goals by being a subservient and loving housewife and mother. She might have superior managerial skills, but they were best suited to run a household and not a company, and she would need to be protected from the aggressive male-dominated business world. Many women bought into this mystique, precisely because it had elements of praise built into it, without realizing how limiting it was to be placed on a pedestal. Even though the feminine mystique has largely been eroded in Europe and North America, I continue to encounter women who cling to it, particularly Muslim women in North America who emphasize that gender segregation and restrictive dress codes for women are a form of “elevation” and honor. They claim these social and personal barriers make them feel unique and precious.
Friedan’s book also made me realize that we were surrounded by so many other similarly captivating mystiques. The oriental mystique was dismantled by Edward Said in his book “Orientalism”, and I have to admit that I myself was transiently trapped in this mystique. Being one of the few visibly “oriental” individuals among my peers in Germany, I liked the idea of being viewed as exotic, intuitive and emotional. After I started medical school, I learned about the “doctor mystique”, which was already on its deathbed. Doctors had previously been seen as infallible saviors who devoted all their time to heroically saving lives and whose actions did not need to be questioned. There is a German expression for doctors which is nowadays predominantly used in an ironic sense: “Halbgötter in Weiß” – Demigods in White.
Through persistent education, books, magazine and newspaper articles, TV shows and movies, many of these mystiques have been gradually demolished. It has become common knowledge that women can be successful as ambitious CEOs or as brilliant engineers. We now know that “Orientals” do not just indulge their intuitive mysticism but can become analytical mathematicians. People readily accept the fact that doctors are human, that they make mistakes and that their medical decisions can be influenced by pharmaceutical marketing or by spurious squabbles with colleagues. One of my favorite TV shows was the American medical comedy Scrubs, which gave a surprisingly accurate portrayal of what it meant to work in a hospital. It was obviously fictional and contained many exaggerations to increase its comedic impact, but I could relate to many of the core themes presented in the show. The daily frustrations of being a physician-in-training or a senior attending physician, the fact that physicians make mistakes, the petty fights among physicians that can negatively impact their patients, the immense stress of having to deal with patients who cannot be helped, financial incentives, physicians and nurses with substance abuse problems – these were all challenges that either I or my friends and colleagues had experienced.
One lone TV show such as Scrubs cannot be credited for taking down the “doctor mystique”, but it did provide a vehicle for us physicians to talk about the “dark side of medicine”. Speaking about flawed clinical decision-making and how personal emotions can affect our interactions with patients is not easy for physicians, because this form of introspection can lead to paralyzing guilt. All physicians know they make mistakes, and even though we ourselves do not buy into the “doctor mystique”, we may still feel the burden of having to live up to it. I remember how I used to discuss some of the Scrubs episodes with other physicians and these light-hearted conversations about funny scenes in the TV show sometimes led to deeper discussions about our own personal experiences and the challenges we faced in our profession.
Being placed on a pedestal is a form of confinement. Dismantling mystiques not only liberates the individuals who are being mystified, but it can also benefit society as a whole. In the case of the doctor mystique, patients are now more likely to question the decisions of physicians, thus forcing doctors to explain why they are prescribing certain medications or expensive procedures. The internet enables patients to obtain information about their illnesses and treatment options. Instead of blindly following doctors’ orders, they want to engage their doctor in a discussion and become an integral part of the decision-making process. The recognition that gifts, free dinners and honoraria paid by pharmaceutical companies strongly influence what medications doctors prescribe has led to the establishment of important new rules at universities and academic journals to curb this influence. Many medical schools now strongly restrict interactions between pharmaceutical company representatives and physicians-in-training. Academic journals and presentations at universities or medical conferences require a complete disclosure of all potential financial relationships that could impact the objectivity of the presented data. Some physicians may find these regulations cumbersome and long for the “mystique” days when their intentions were not under such scrutiny, but many of us think that these changes are making us better physicians and improving medical care.
As I watch many of these mystiques crumble, one mystique continues to persist: The Science Mystique. As with other mystiques, it consists of a collage of falsely idealized and idolized notions of what science constitutes. This mystique has many different manifestations, such as the firm belief that reported scientific findings are absolutely true beyond any doubt, that scientific results obtained today are likely to remain true for all eternity and that scientific research will be able to definitively solve all the major problems facing humankind. This science mystique is often paired with an over-simplified and reductionist view of science. Some popular science books, press releases or newspaper articles refer to scientists having discovered the single gene or the molecule that is responsible for highly complex phenomena, such as a disease like cancer or philosophical constructs such as morality. I was recently discussing a paper on wound healing and I came across an intriguing comment in a public comment thread: “When I read an article related to science it puts me in the mindset of perfection and credibility”. This is just one anecdotal comment, but I think that it captures the Science Mystique held by many non-scientists who place science on a pedestal of perfection.
As flattering as it may be, few scientists see science as encapsulating perfection. Even though I am a physician, most of my time is devoted to working as a cell biologist. My laboratory currently studies the biology of stem cells and the role of mitochondrial metabolism in stem cells. In the rather antiquated division of science into “hard” and “soft” sciences, where physics is considered a “hard” science and psychology or sociology are considered “soft” sciences, my field of work would be considered a middle-of-the-road, “firm” science. As cell biologists, we are able to conduct well-defined experiments, falsify hypotheses and directly test cause-effect relationships. Nevertheless, my experience with scientific results is that they are far from perfect and most good scientific work usually raises more questions than it provides answers. We scientists are motivated by our passion for exploration, and we know that even when we are able to successfully obtain definitive results, these findings usually point out even greater deficiencies and uncertainties in our knowledge. Stuart Firestein’s wonderful book “Ignorance: How It Drives Science” is a sincere and eloquent testimony to the key role of ignorance in scientific work. A thoughtful “I do not know the answer to this” uttered by a scientist is typically seen as a sign of scientific maturity, because it shows the humility of the scientist and indicates a potential new direction for scientific research. On the other hand, when a scientist proudly proclaims to have found the most important gene or to have defined the most important pathway for a certain biological process, it frequently indicates a lack of understanding of the complexity of the matter at hand.
One key problem of science is the issue of reproducibility. Psychology is currently undergoing a soul-searching process because many questions have been raised about why published scientific findings have such poor reproducibility when other psychologists perform the same experiments. One might attribute this to the “soft” nature of psychology, because it deals with variables such as emotions that are difficult to quantify and with heterogeneous humans as its test subjects. Nevertheless, in my work as a cell biologist, I have encountered very similar problems regarding the reproducibility of published scientific findings. My experience in recent years is that roughly only half the published findings in stem cell biology can be reproduced when we conduct experiments according to the scientific methods and protocols of the published paper.
This estimate of 50% reproducibility is not based on a comprehensive analysis. We only attempt to replicate findings which are highly relevant to our work and which are published in a select group of scientific journals. If we tried to replicate every single paper in the field of stem cell biology, the success rate might be even lower. On the other hand, we devote a limited amount of time and resources to replicating results, because there is no funding available for replication experiments. It is possible that if we devoted enough time and resources to replicating a published study, tinkering with the different methods, trying out different batches of stem cells and reagents, we might have a higher likelihood of being able to replicate the results. Since negative studies are difficult to publish, these failed attempts at replication are buried and the published papers that cannot be replicated are rarely retracted. When scientists meet at conferences, they often informally share their respective experiences with attempts to replicate research findings. These casual exchanges can be very helpful, because they help us ensure that we do not waste resources building new scientific work on the shaky foundations of scientific papers that cannot be replicated.
In addition to knowing that a significant proportion of published scientific findings cannot be replicated, scientists are also aware of the fact that scientific knowledge is dynamic. Technologies used to acquire scientific data are continuously changing and the new scientific data amassed during any single year by far outpaces the capacity of scientists to fully understand and analyze it. Most scientists are currently struggling to keep up with the new scientific knowledge in their own field, let alone put it in context with the existing literature. As I have previously pointed out, more than 30-40 scientific papers are published on average on any given day in the field of stem cell biology. This overwhelming wealth of scientific information inevitably leads to a short half-life of scientific knowledge, as Samuel Arbesman has expressed in his excellent book “The Half-Life of Facts”. What is considered a scientific fact today may be obsolete within five years. The books by Firestein and Arbesman are shining examples among the plethora of recent popular science books, because they explain why scientific knowledge is so ephemeral and yet so important. Hopefully, these books will help deconstruct the Science Mystique.
One aspect of science that receives comparatively little attention in popular science discussions is the human factor. Scientific experiments are conducted by scientists who have human failings, and thus scientific fallibility is entwined with human frailty. Some of the limits on scientific replicability are intrinsic to the subject matter itself. A paper on cancer cells published by one group of researchers may use a different set of cancer cells obtained from their patients than those available to other researchers. At other times, researchers may make unintentional mistakes in interpreting their data or may unknowingly use contaminated samples. One can hardly blame scientists for the heterogeneity of their tested samples or for making honest errors. However, there are far more egregious errors made by scientists that have a major impact on how science is conducted. There are cases of outright fraud, where researchers just manufacture non-existent data, but these tend to be rare, and when colleagues, scientific journals or organizations become aware of these cases of fraud, published papers are retracted and scientists face punitive measures. Such overt fraud tends to be unusual, and of the hundred or more scientific colleagues with whom I have personally worked, I do not know of any who have committed such fraud. However, what occurs far more frequently than gross fraud is the gentle fudging of scientific data, consciously or subconsciously, so that desired scientific results are obtained. Statistical outliers are excluded, especially if excluding them helps direct the data in the desired direction. Like most humans, scientists also have biases and would like to interpret their data in a manner that fits with their existing concepts and ideas.
Human fallibility not only affects how scientists interpret and present their data, but can also have a far-reaching impact on which scientific projects receive research funding or whether scientific results are published. When manuscripts are submitted to scientific journals or when grant proposals are submitted to funding agencies, they usually undergo a review by a panel of scientists who work in the same field and who ultimately decide whether or not a paper should be published or a grant funded. One would hope that these decisions are primarily based on the scientific merit of the manuscripts or the grant proposals, but anyone who has been involved in these forms of peer review knows that, unfortunately, personal connections or personal grudges can often be decisive factors.
Limited scientific replicability, the uncertainties that come with new scientific knowledge, fraud and fudging, biases during peer review – these are all just some of the reasons why scientists rarely believe in the mystique of science. When I discuss this with acquaintances who are non-scientists, they sometimes ask me how I can love science if I have encountered these “ugly” aspects of science. My response is that I love science despite this “ugliness”, and perhaps even because of its “ugliness”. The fact that scientific knowledge is dynamic and ephemeral, the fact that we do not need to feel embarrassed about our ignorance and uncertainties, the fact that science is conducted by humans and is infused with human failings – these are all reasons to love science. When I think of science, I am reminded of the painting “Basket of Fruit” by Caravaggio, which is a still-life of a fruit bowl; unlike other painters of still-life fruit, Caravaggio showed discolored and decaying leaves and fruit. The beauty and ingenuity of Caravaggio’s painting lies in its ability to show fruit as it really is, not the idealized fruit baskets that other painters would so often depict.
The challenge that we scientists face is to share our love for science, despite its imperfections, with those around us who do not actively work in the field of science. I remember speaking to a colleague of mine in the context of a wonderful spoof of a Lady Gaga song called “Bad Project”. We both agreed that the spoof was spot on, showing the frustrations of a PhD student who cannot get experiments to work, who has to base experiments on poorly documented lab notebooks and who faces the tedious nature of scientific work. My colleague was concerned that if such spoofs ridiculing laboratory work became too common, they would embolden the American anti-science movement, which is already very strong. Anyone who closely follows American science politics knows that creationists and global-warming deniers are constantly looking for opportunities to find any flaws in scientific studies and that they use occasional errors as opportunities to suggest that well-established and replicated scientific results or theories should be discarded. In addition to the agenda of these specific anti-science interest groups, there are also many groups lobbying for severe budget cuts, many of which would negatively impact US research funding, which is already at an alarmingly low level.
My response to these concerns is that it is our job as scientists to convince fellow citizens how important science is, despite its limitations and flaws. The fact that scientists recognize the uncertainties and limitations of scientific knowledge is not a weakness, but a strength of the scientific approach and makes it ideally suited to help us understand our world. Enabling a false mystique of science as being definitive and perfect is not going to benefit science or society in the long run. Instead, recognizing our failings and limitations in science and openly discussing them with our fellow citizens is going to help us improve how we conduct science. I think that anyone who carefully looks at Caravaggio’s “imperfect” painting eventually sees its beauty and falls in love with it. I hope that we scientists will be able to share the Caravaggio view of science with the general public.
Image Credits: Painting Basket of Fruit by Caravaggio via Wikimedia Commons
Ecology’s Image Problem
“There are Tories in science who regard imagination as a faculty to be avoided rather than employed. They observe its actions in weak vessels and are unduly impressed by its disasters” —John Tyndall, 1870
In his 1881 essay on Mental Imagery, Francis Galton noted that few Fellows of the Royal Society or members of the French Institute, when asked to do so, could imagine themselves sitting at the breakfast-table from which presumably they had only recently arisen. Members of the general public, women especially, fared much better, being able to conjure up vivid images of themselves enjoying their morning meal. From this Galton, an anthropologist, noted polymath, and eugenicist, concluded that learned men, bookish men, relying as they do on abstract thought, depend on mental images little, if at all.
In this rejection of a scientific role for the imagination, Galton was in disagreement with the Irish physicist John Tyndall, who in an 1870 address to the British Association in Liverpool entitled The Scientific Use of the Imagination claimed that in explaining sensible phenomena, scientists habitually form mental images of that which is beyond the immediately sensible. “Newton’s passage from a falling apple to a falling moon”, Tyndall wrote, “was, at the outset, a leap of the prepared imagination.” The imagination, Tyndall claimed, is both the source of poetic genius and an instrument of discovery in science.
The role of the imagination in chemistry is well enough known. In 1890 the German Chemical Society celebrated the discovery by Friedrich August Kekulé von Stradonitz of the structure of benzene, a ring-shaped aromatic hydrocarbon. At this meeting Kekulé related that the structure of benzene came to him as a reverie of a snake seizing its own tail (the ancient symbol called the Ouroboros).
Since this is quite a celebrated case of the scientific use of the imagination I quote Kekule’s account of the events in full:
“During my stay in Ghent, Belgium, I occupied pleasant bachelor quarters in the main street. My study, however, was in a narrow alleyway and had during the day time no light. For a chemist who spends the hours of daylight in the laboratory this was no disadvantage. I was sitting there engaged in writing my text-book; but it wasn't going very well; my mind was on other things. I turned my chair toward the fireplace and sank into a doze. Again the atoms were flitting before my eyes. Smaller groups now kept modestly in the background. My mind's eye, sharpened by repeated visions of a similar sort, now distinguished larger structures of varying forms. Long rows frequently close together, all, in movement, winding and turning like serpents! And see! What was that? One of the serpents seized its own tail and the form whirled mockingly before my eyes. I came awake like a flash of lightning. This time also [he had had fruitful dreams before] I spent the remainder of the night working out the consequences of the hypothesis. If we learn to dream, gentlemen, then we shall perhaps find truth…” Berichte der deutschen chemischen Gesellschaft, 1890, 1305-1307 (in Libby 1922).
In supporting his argument about the positive role of the imagination John Tyndall quoted Sir Benjamin Brodie, the chemist, who wrote that the imagination (“that wondrous faculty”), when it is “properly controlled by experience and reflection, becomes the noblest attribute of man”. Brodie cautioned, however, that the imagination when “left to ramble uncontrolled, leads us astray into a wilderness of perplexities and errors…”
The philosopher Virgil Aldrich provided an interesting example of how imagination could be a hindrance to science. Sir Arthur Stanley Eddington, the English astrophysicist, referred frequently, according to Aldrich, to “the world outside us”. Consciousness, in contrast, can be described as being “inside of us.” Using such images Eddington was, said Aldrich, “under the spell of the telephone-exchange analogy.” Where the nerve endings leave off, the world beyond us takes over. If the telephone exchange image seems ill-chosen, the image, after all, could be worse. One might imagine inner consciousness as a submarine and from our berth within it we come to know the outside world by means of a periscope! Now, Eddington did not use this image (others did) but when we try to make sense of it we can do so only by saying that inner consciousness is like a submarine only when one supposes that it is nothing at all like a submarine. One must “tone down the analogy” to make it useful. If you do otherwise “the lively imagination begins to protest”. Aldrich speculated that theorists persist with inept picture-making because, when toned down, the image often appears to be illuminating even when it is not. Moreover, a flashy image is entertaining. Thus one can easily make the “pleasant mistake” of identifying the image with the “real meaning” of an assertion.
A strength of environmental disciplines is that they bring into proximity bodies of knowledge that are often set apart. Though some quibble with him on this, the historian of ecology Donald Worster places both Charles Darwin, the philosophical scientist, and Henry David Thoreau, the scientific philosopher, at the ground of ecology as a natural scientific discipline. And though it is fair to say that ecology has maintained an identity largely separate from the environmentalisms it has inspired, nevertheless ecology and environmentalisms have been good conversation partners. Both have listened to an admirable degree to their poets, artists and philosophers. A good thing this may be in many ways, but my contention here is that the environmental sciences and the practices associated with them — environmentalisms like sustainability — are prone to taking their most arresting images too literally. I wonder if there is not in environmental thought a pathology of the imagination? Too readily, it seems, we transform a provocative image into a proven hypothesis; we smuggle ancient and baffling worldviews into contemporary conceptions of nature.
I sketch a few examples here to illustrate the case. Perhaps you will have ones that you can add.
Nature as an Organism
You are justified in calling Nature your Mother if you have a mother who wants you dead. A Mother who inculcated both your limitations and your accomplishments. Nature: A Mother who birthed a world equipped with tooth and nail and hungry eye; whose family tie is the ripping of flesh. Why, I wonder, are we quick to demand of God an explanation of evil but incline less to asking that question of Mother Nature?
To call Nature our mother is just one manifestation of the image of the Earth as organism. It is enduring, compelling and surely wrong-footing.
University of Wisconsin historian Frank N. Egerton traces the myth of cosmos as organism back to Plato. Timaeus asked “In the likeness of what animal did the Creator make the world?” He then speculated as follows: “For the Deity, intending to make this world like the fairest and most perfect of intelligible beings, framed one visible animal comprehending within itself all other animals of a kindred nature.” Because of Plato’s fateful influence on the history of western thought, Egerton noted that the implications of this myth have been enduring. According to Egerton the myth is the source of two related concepts: “the supraorganismic balance-of-nature concept and the microcosm-macrocosm concept.” The supraorganismic concept views the cosmos as having the attributes of a living thing whereas the microcosm-macrocosm concept takes different parts of the universe to correspond with an organismal body.
Both flavors of the organismal concept get expressed in ecosystem ecology. Natural ecosystems, the influential University of Georgia ecologist Eugene Odum asserted, are integrated wholes that develop in a manner that parallels the development of individual organisms or human societies. The development of natural systems, ecological succession in other words, is orderly, predictable, and directional. It leads, in Odum’s view of things, to a stabilized ecosystem with predictable ratios of biomass, productivity, respiration and so forth. The “strategy” of ecosystem development, as Odum called it, corresponds to the “strategy” for long-term evolutionary development of the biosphere – “namely, increased control of, or homeostasis with, the physical environment in the sense of achieving maximum protection from its perturbations.” Homeostasis derives etymologically from the Greek for “standing still” and, in the sense that Odum meant to imply, indicates a dynamic and regulated stability. In other words, the stability of the organism.
Odum does not stand here accused of covertly importing the organismal image into his work; he was quite explicit about it. There is much to admire in Odum’s work and the ecology that he inspired, but the sense of design and purpose that it implied in nature (what philosophers call teleology) put Odum's ecosystem ecology at loggerheads with contemporary evolutionary theory which insists on the purposelessness of nature. It has taken quite some time to reconcile ecosystem thought with evolutionary theory.
Another example of the superorganism’s baleful influence can be found in the Gaia hypothesis. In his preface to Gaia: A New Look at Life on Earth (1979) Lovelock wrote:
“The concept of Mother Earth or, as the Greeks called her long ago, Gaia, has been widely held throughout history and has been the basis of a belief which still coexists with the great religions."
If the development of James Lovelock and Lynn Margulis’s Gaia hypothesis is anything to go by, hypotheses about the workings of nature derived from the organismal image of nature have a shelf life of a decade or so. Lovelock’s Gaia: A New Look at Life on Earth was published in 1979, and he rescinded the teleological claims of the Gaia hypothesis by 1988 in his book Ages of Gaia — or at least he became attentive to the problems that the superorganism concept created. He still maintains that the Earth’s atmosphere is homeostatically regulated, but he denied having been led astray by the sirens of the superorganism.
It is a banality of the ecological sciences to state that everything is connected. That ebullient Scot, and eventual stalwart of the American wilderness movement, John Muir, provided the image. He wrote, "When we try to pick out anything by itself, we find it hitched to everything else in the universe."
And if such statements are employed to sponsor a notion that individual organisms cannot be regarded in isolation from those that they consume, and those that can consume them, or furthermore, that as a consequence of the deep intersections of the living and the never-alive there can be unforeseen consequences flowing from species additions or removals from ecosystems, then few may argue with this. However, just as the ripples of a stone dropped in a still pond propagate only as far as its edges (though they may entrain delightful patterns in the finest of its marginal sands), not every ecological event has intolerably large costs to exact. True, if the dominoes line up and the circumstances are just so, a butterfly’s wing beat over the Pacific may hurl a typhoon against its shores, but more often than not such lepidopterous catastrophes do not come to pass.
Ecosystems, energized so that matter cycles and conjoins the living with the dead, have their lines of demarcation, borders defined by their internal interactions being more powerful than their external ones. They are therefore buffered against many potentially contagious disasters. This, of course, is the essence of resilience - the capacity of a system to absorb disturbance without disruption to habitual structure and function. Ecology is as much the science investigating the limits of connections as it is the thought that everything is connected.
The Community Concept
Is there a greater 20th Century American environmental thinker than Aldo Leopold? Certainly there are few who provided as many genuinely poetic images: in the eyes of a dying wolf he saw “a fierce green fire”, he exhorted us to “think like a mountain”, he depicted the crane as “wilderness incarnate”. For all of that, has Leopold not led us astray with the images associated with the “ethical sequence”? Leopold’s influential land ethic “enlarges the boundaries of the community concept.” The ethical sequence that he proposed progresses stutteringly from free men, to women, to slaves, to animals, plants, rocks and land. It has a compelling lucidity. Leopold admitted, however, that it seems a little too simple. The ethic invites us into community with the land. A person’s self-image will change under a land ethic: “In short,” Leopold writes, “a land ethic changes the role of Homo sapiens from conqueror of the land-community to plain member and citizen of it.”
Now, Leopold is a subtle thinker and knows not to confuse the image with the thing. Certainly he expected this transformation to take quite some time. The land ethic would not emerge without “an internal change in our intellectual emphases, loyalties, affections, and convictions.” Now I have little problem with the image of extending the ethical circle other than noting that it makes the task seem easier than it has proven to be. My more serious objection concerns the rather thin notion of community that seems to be implied in Leopold’s image of the plain citizen. As the environmental philosopher William Jordan III has illustrated in his book The Sunflower Forest (2003), missing from Leopold’s account is any acknowledgment of the negative elements of the human experience of community: envy, selfishness, fear, hatred, and shame. As Jordan pointed out, this leads Leopold and others to “a sentimental, moralizing philosophy that…insists on the naturalness of humans…but that neglects or downplays the radical difficulty of achieving such a sense of self, and also downplays the role of culture and cultural institutions in carrying out this work.” If Leopold’s image of the community and our place within it is an impoverished one, the work of extending the circle becomes impossible.
There are other images that we might have discussed here, ones that have had, at times at least, unfortunate implications for environmental thinking. For instance, in 1864 George Perkins Marsh wrote that mankind is disruptive, not just occasionally, mind you, but “is everywhere a disturbing agent.” One hundred years later the Wilderness Act renewed the image in its definition of wilderness as an area “untrammeled by man.” We might have considered contemporary accounts of social-ecological systems, in which these systems are posited as a compound substance, but in depicting them we tease the components apart again.
So, if environmental thought and ecological science have been susceptible to what my colleague and friend Professor David Wise of the University of Illinois at Chicago has called “malicious metaphors”, is there a more productive way to think about the role of the image in developing environmental thought?
The work of the French philosopher Gaston Bachelard (1884-1962) — one of the more lovable of the French phenomenologists, certainly the hairiest — is helpful in sorting out a productive role for the imagination in science. He was renowned for his work on epistemological issues in science as well as for his phenomenological account of the poetic image, and his philosophical meditation on reverie. As much as he was a materialist in his approach to science, he was subjective and personal (as a matter of theoretical orientation) in his philosophical work on the imagination.
Bachelard’s work at first glance is so inviting. Chapters in his book The Poetics of Space (1958) have enticing titles like The House from Cellar to Garret, Nests, and Shells. Perhaps this is why the book is a philosophic bestseller. My copy claims “more than 80,000 copies sold”. And though indeed opening a Bachelard book is like relaxing into a warm bath, nevertheless there is an astringent in those waters. The thought is somewhat obscure as Bachelard ransacks the lexicon of the various disciplines he brings together in his work: Kantian philosophy, Husserlian phenomenology, Jungian psychoanalysis etc. Oftentimes his use of technical terms was novel; reinterpreting them, Bachelard pushed them into new service. Because of this density, I wonder how many of those 80,000 copies have languished on bookshelves? Mine certainly did until the past few weeks.
To enjoy the fruits of Bachelard’s insights we should do at least some of the work of appreciating how he produced them. In the hope that this will embolden you to return to your copy of The Poetics of Space, or other works by Bachelard on the imagination, or pick them up for the first time, I will give a summary, as best I understand it, of what his phenomenology of the image is all about. I am, I should tell you, strictly an amateur Bachelardian.
The poetic image is eruptive for both poet and reader. Bachelard says that for its creation “the flicker of the soul is all that is needed.” So, every great image is its own origin. Famously, Bachelard maintained that the imagination, contrary to the view of many philosophical accounts, is “the faculty of deforming images offered by perception.” The poetic image emerges into consciousness as a direct product of “the heart, soul and being of man.” Elsewhere Bachelard claims “the imagination [is] a major power of the human nature.”
The poetic image is therefore not caught up in a network of causalities. Our first recourse should not be to ask what archetypes an image represents, or what aspects of the poet’s psycho-biography explains it away. In this assertion Bachelard remains true to phenomenology’s maxim of going “back to the things themselves.” In as much as such things are possible, one approaches the poetic image freed from all presuppositions.
So it is of secondary importance to ask where an artistic image comes from; what matters more is to explore what opportunities for freedom an image creates. Instead of cause and effect, at the center point of which we traditionally ask the image to stand, we might rather speak of the “resonances and reverberations” of the image. This is not, I think, just some fanciful softening of language; it is a necessary acknowledgment of the way in which an image does not simply reflect a memory but rather revives an absent one, and of the way in which an image explodes into images. When we read the poetic image it resonates; when we communicate it, it reverberates. The repercussions of the image, said Bachelard, “invite us to give greater depth to our own existence.” What bearing does an image have on our freedom? A great piece of art, Bachelard says, “awakens images that have been effaced, at the same time that it confirms the unforeseeable nature of speech. And if we render speech unforeseeable, is this not an apprenticeship to freedom?”
I propose that Gaston Bachelard’s phenomenological account of the poetic image, despite its somewhat unpromising obscurity, is helpful in addressing environmental thought’s special porousness to striking images. In this short sketch I cannot fully substantiate the claim. I will end, however, with an example where an approach such as Bachelard’s seems to have been fruitful.
Tim Morton is one of the most widely read and exciting environmental writers of recent years. As far as I know he has not cited Bachelard as a methodological inspiration, although his work is phenomenological and existential. [Added: One of Morton's earlier books on the representation of the spice trade in Romantic Literature was entitled Poetics of Spice (2006) - making him, it would seem, an explicit Bachelardian after all!]. Morton is so concerned about the potential of sedimented ideas leading us into Sir Benjamin Brodie’s “wilderness of perplexities and errors” that he elected to drop the term “Nature” altogether. In his book Ecology Without Nature (2007) he explained the problem: “…the idea of nature is getting in the way of properly ecological forms of culture, philosophy, politics, and art.”
The results of Morton’s analysis lead us to strange, perplexing, though ultimately interesting places. Out of this natureless ecology comes a suite of insights on “dark ecology”, an ecology reminding us that we are always already implicated in the ecological. There is no outside from which we get a guilt-free view of the fantastic mess. Deriving also from an ecology developed without a sentimental view of nature comes a fresh analysis of connectedness. Morton revives Muir’s hitching image, but this time its resonances are weirder than the oceanic feeling that we are all blissfully in this together. His analysis gives us the queer bestiary of “strange strangers” with which we are stickily intimate and yet which we can never fully get to know. Morton develops this account in The Ecological Thought (2010), which I recommend to you. I am not supposing that this is an adequate summary of Morton’s recent books, but I think that Tim is converging on the idea of resonances and reverberations that Bachelard has written about.
The image, and the imagination, can play a positive role in environmental thinking. Darwin’s image of the “tangled bank” is both a pretty and a useful way of thinking about how organismal profusion developed from a common ancestor. But a misapplied image can be a disaster. Understanding our responsibilities with respect to the image is the work of the future; it is the work that will birth the future.
Walter Libby, “The Scientific Imagination,” The Scientific Monthly, Vol. 15, No. 3 (Sep. 1922), pp. 263-270.
January 07, 2013
A Parched Future: Global Land and Water Grabbing
by Jalees Rehman
“This is the bond of water. We know the rites. A man’s flesh is his own; the water belongs to the tribe.” Frank Herbert - Dune
Land grabbing refers to the large-scale acquisition of comparatively inexpensive agricultural land in foreign countries by foreign governments or corporations. In most cases, the acquired land is located in under-developed countries in Africa, Asia or South America, while the grabbers are investment funds based in Europe, North America and the Middle East. The acquisition can take the form of an outright purchase or a long-term lease, ranging from 25 to 99 years, that gives the grabbing entity extensive control over the acquired land. Proponents of such large-scale acquisitions have criticized the term “land grabbing” because it carries the stigma of illegitimacy and conjures up images of colonialism or other forms of unethical land acquisitions that were so common in the not so distant past. They point out that land acquisitions by foreign investors are made in accordance with the local laws and that the investments could create jobs and development opportunities in impoverished countries. However, recent reports suggest that these land acquisitions are indeed “land grabs”. NGOs and not-for-profit organizations such as GRAIN, TNI and Oxfam have documented the disastrous consequences of large-scale land acquisitions for the local communities. More often than not, the promised jobs are not created, and families that have farmed the land for generations are evicted from their ancestral land and lose their livelihood. The money provided to the government by the investors frequently disappears into the coffers of corrupt officials while the evicted farmers receive little or no compensation.
One aspect of land grabbing that has received comparatively little attention is the fact that land grabbing is invariably linked to water grabbing. When the newly acquired land is used for growing crops, it requires some combination of rainwater (referred to as “green water”) and irrigation from freshwater resources (referred to as “blue water”). The amount of required blue water depends on the rainfall in the grabbed land. For example, land that is grabbed in a country with heavy rainfall, such as Indonesia, may require very little irrigation and tapping of its blue water resources. The link between land grabbing and water grabbing is very obvious in the case of Saudi Arabia, which used to be a major exporter of wheat in the 1990s, when there were few concerns about the country’s water resources. The kingdom provided water at minimal cost to its heavily subsidized farmers, resulting in a very inefficient usage of the water. Instead of the global average of 1,000 tons of water per ton of wheat, Saudi farmers used between 3,000 and 6,000 tons of water. Fred Pearce describes the depletion of the Saudi water resources in his book The Land Grabbers:
Saudis thought they had water to waste because, beneath the Arabian sands, lay one of the world’s largest underground reservoirs of water. In the late 1970s, when pumping started, the pores of the sandstone rocks contained around 400 million acre-feet of water, enough to fill Lake Erie. The water had percolated underground during the last ice age, when Arabia was wet. So it was not being replaced. It was fossil water— and like Saudi oil, once it is gone it will be gone for good. And that time is now coming. In recent years, the Saudis have been pumping up the underground reserves of water at a rate of 16 million acre-feet a year. Hydrologists estimate that only a fifth of the reserve remains, and it could be gone before the decade is out.
Saudi Arabia responded to this depletion of its water resources by deciding to gradually phase out all wheat production. Instead of growing wheat in Saudi Arabia, it would import wheat from African farmlands that were leased and operated by Saudi investors. This way, the kingdom could conserve its own water resources while using African water resources for the production of the wheat that would be consumed by Saudis.
The recent study “Global land and water grabbing” published in the Proceedings of the National Academy of Sciences (2013) by Maria Rulli and colleagues examined how land grabbing leads to water grabbing and can deplete the water resources of a country. The basic idea is that when the grabbed land is irrigated, the use of freshwater resources reduces the availability of irrigation water for neighboring farmland areas, i.e. the areas that have not been grabbed. This in turn can cause widespread water stress and affect the ability of other farmers to grow crops, ultimately leading to poverty and social unrest. Land grabbing is often shrouded in secrecy since local governments do not want to be perceived as selling off valuable land to foreigners, but some details regarding the size of the land grab are eventually made public. The associated water needs of the investors that grab the land are even less clear and very little is publicly divulged about how the land grabbing will affect the water availability for other farmers. In the case of Sudan, for example, grabbed land is often located on the fertile banks of the Blue Nile and while large-scale commercial farmland is expanding as part of the foreign investments, local farmers are losing access to land and water and gradually becoming dependent on food aid, even though Sudan is a major exporter of food produced by the large-scale farms.
Using the global land grabbing database of GRAIN and the Land Matrix Database, Rulli and colleagues analyzed the extent of land grabbing and identified the Democratic Republic of Congo (8.05 million hectares), Indonesia (7.14 million hectares), the Philippines (5.17 million hectares), Sudan (4.69 million hectares) and Australia (4.65 million hectares) as the five countries in which the largest areas of land have been grabbed by foreign investors. The total amount of grabbed land in these five countries is 29.7 million hectares and accounts for nearly 63% of global land grabbing. To put this in perspective, the size of the United Kingdom is 24.4 million hectares.
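For readers who want to check these figures, here is a minimal back-of-the-envelope sketch in Python. The numbers are the ones quoted in the paragraph above; the implied global total is a rough back-calculation from the stated 63% share and is not a figure reported in the paper itself.

```python
# Grabbed land in the five most-affected countries, in millions of hectares,
# as quoted above from the analysis by Rulli and colleagues.
grabbed_mha = {
    "DR Congo": 8.05,
    "Indonesia": 7.14,
    "Philippines": 5.17,
    "Sudan": 4.69,
    "Australia": 4.65,
}

top_five_total = sum(grabbed_mha.values())               # ~29.7 million hectares
share_of_global = 0.63                                   # "nearly 63%" of global land grabbing
implied_global_total = top_five_total / share_of_global  # rough back-calculation, ~47 Mha
uk_area_mha = 24.4                                       # area of the United Kingdom

print(f"Top five countries combined: {top_five_total:.1f} million hectares")
print(f"Implied global total:        ~{implied_global_total:.0f} million hectares")
print(f"Roughly {top_five_total / uk_area_mha:.1f} times the area of the UK")
```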
The researchers calculated the amount of rainfall (green water) on the grabbed land, which is the minimum amount of water that would be grabbed with the acquisition of the land. However, since the grabbed land is also used for agriculture and many crops require additional freshwater irrigation (blue water), the researchers also determined a range of predicted blue water grabbing for land irrigation. For the low end of the blue water grabbing range, the researchers assumed that the land would be irrigated in the same fashion as other agricultural land in the country. On the higher end of the range, the researchers also calculated how much blue water would be grabbed, if the investors irrigated the land in a manner to maximize the agricultural production of the land. This is not an unreasonable assumption, since foreign investors probably do have the financial resources to maximally irrigate the acquired land in a manner that maximizes the return on their investment.
Rulli and colleagues estimated that global land grabbing is associated with the grabbing of 308 billion m3 of green water (i.e. rain water) and an additional grabbing of blue water that can range from 11 billion m3 (current irrigation practices) to 146 billion m3 (maximal irrigation) per year. Again, to put these numbers in perspective, the average daily household consumption of water in the United Kingdom is 150 liters (0.15 m3) per person. This results in a total annual household consumption of 3.5 billion m3 (0.15 m3 X 365 days X 63,181,775 UK population) of water in the UK. Therefore, the total household water consumption in the UK is a fraction of what would be the predicted blue water usage of the grabbed land, even if one were to use very conservative estimates of required irrigation.
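The comparison with UK household consumption can be verified with the same kind of back-of-the-envelope arithmetic. The sketch below uses only the figures quoted above; the population number is the one given in the text, and the blue water range reflects the paper's two irrigation scenarios rather than measured consumption.

```python
# Back-of-the-envelope comparison of grabbed water volumes with UK household
# water use, using only the figures quoted in the text above.
per_person_daily_m3 = 0.15           # 150 liters per person per day
uk_population = 63_181_775           # UK population figure used in the text

uk_household_annual_m3 = per_person_daily_m3 * 365 * uk_population   # ~3.5 billion m3

green_water_m3_per_year = 308e9      # rainwater associated with grabbed land
blue_water_low_m3_per_year = 11e9    # irrigation, assuming current national practices
blue_water_high_m3_per_year = 146e9  # irrigation, assuming maximal irrigation

print(f"UK household use:    {uk_household_annual_m3 / 1e9:.1f} billion m3 per year")
print(f"Grabbed green water: {green_water_m3_per_year / 1e9:.0f} billion m3 per year")
print(f"Grabbed blue water:  {blue_water_low_m3_per_year / 1e9:.0f}-"
      f"{blue_water_high_m3_per_year / 1e9:.0f} billion m3 per year")
print(f"Even the low blue-water estimate is about "
      f"{blue_water_low_m3_per_year / uk_household_annual_m3:.1f} times "
      f"total UK household consumption")
```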
The researchers also list the top 25 countries in which the investors engaged in land and water grabbing are based. They find that about “60% of the total grabbed water is appropriated, through land grabbing, by the United States, United Arab Emirates, India, United Kingdom, Egypt, China, and Israel”. The researchers gloss over the fact that in many cases, land and associated water resources are grabbed by foreign investment groups and not by foreign governments. Just because certain investment funds are based in Singapore, the UK or the United Arab Emirates does not mean that these countries are “appropriating” the land or water. In fact, many investment groups that are involved in land grabbing may have multinational investors or investors whose nationality is not disclosed. Nevertheless, there are probably cases in which land and water grabbing are not merely conducted as a form of private investment, but might involve foreign governments. One such example is the above-mentioned case of Saudi Arabia, in which the Saudi government actively encouraged and helped Saudi investors to acquire agricultural land in Africa. While perusing the list of the top 25 countries in which land and water grabbing investors are based, one cannot help but notice that the list contains a number of Middle Eastern countries that are themselves experiencing severe water stress and scarcity, such as Saudi Arabia, Qatar, the United Arab Emirates or Israel. Transferring their water burden to Africa by acquiring agricultural land would allow them to preserve their own water resources and may indeed be of strategic value to these countries. However, the precise degree of government involvement in these investment decisions often remains unclear.
The paper by Rulli and colleagues is an important reminder of how land grabbing and water grabbing are entwined and that land grabbing could potentially deplete valuable water resources from under-developed countries, especially in Africa, which accounts for more than half of the globally grabbed land. Even villagers who continue to own and farm their own land adjacent to the large-scale farms on grabbed lands could be affected by new forms of water stress, especially if the foreign investors decide to maximally irrigate the acquired land. There are some key limitations to the study, such as the lack of distinction between the private foreign investors and the foreign governments that are engaged in land grabbing, and the fact that all the calculations of blue water grabbing are based on very broad estimates without solid data on how much blue water is actually consumed by the grabbed lands. These numbers may be very difficult to obtain, but should be the focus of future studies in this area.
After reading this study, I have become far more aware of ongoing land and water grabbing. Excessive commodification of our lives was already criticized by Karl Polanyi in 1944, and now that water is also becoming a “fictitious commodity”, we have to be extremely watchful of the consequences. The land grabbing that has already taken place is quite extensive. An interactive map based on the GRAIN database allows us to visualize the areas in the world that have been most affected by land grabbing since 2006 as well as where the foreign investors are located. The map shows that in recent years, Pakistan has emerged as one of the prime targets of land grabbing in Asia, while Sudan, South Sudan, Tanzania and Ethiopia are major targets of recent land grabbing in Africa. The world economic crisis and the recent food price crisis will likely increase the degree of land grabbing and associated water grabbing. The targets of land grabbing are often countries with fragile economies, widespread poverty and significant malnourishment.
As a global society, we have to ensure that people living in these countries do not suffer as a consequence of land grabbing deals. The recent “Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security” released by the FAO are an important step in the right direction, because they attempt to provide food security for all, even when large-scale land acquisitions occur. However, they do not specify water access and they are, as the title reveals, “voluntary”. It is not clear who will abide by them. Therefore, we also need a complementary approach in which the clients of land grabbing investment funds ask the fund managers to abide by the FAO guidelines and to do their utmost to ensure food security and water access for the general population in grabbed lands. One specific example is that of the American retirement fund TIAA-CREF (Teachers Insurance and Annuity Association – College Retirement Equities Fund), which is one of the leading retirement providers for people who work in education, research and medicine. Investment in agriculture and land grabbing appears to be a priority for TIAA-CREF, but American educators or academics who use TIAA-CREF as their retirement fund could use their leverage to ensure socially conscientious investments. Even though land and water grabbing are becoming a major concern, the growing awareness of the problem may also result in solutions that limit the negative impact of land and water grabbing.
Image Credits: Wikimedia - Drought by Tomas Castelazo / Wikimedia - The Union of Earth and Water by Rubens
December 10, 2012
There Was No Couch: On Mental Illness and Creativity
by Jalees Rehman
The psychiatrist held the door open for me and my first thought as I entered the room was “Where is the couch?”. Instead of the expected leather couch, I saw a patient lying down on a flat operation table surrounded by monitors, devices, electrodes, and a team of physicians and nurses. The psychiatrist had asked me if I wanted to join him during an “ECT” for a patient with severe depression. It was the first day of my psychiatry rotation at the VA (Veterans Affairs Medical Center) in San Diego, and as a German medical student I was not yet used to the acronymophilia of American physicians. I nodded without admitting that I had no clue what “ECT” stood for, hoping that it would become apparent once I sat down with the psychiatrist and the depressed patient.
I had big expectations for this clinical rotation. German medical schools allow students to perform their clinical rotations during their final year at academic medical centers overseas, and I had been fortunate enough to arrange for a psychiatry rotation in San Diego. The University of California, San Diego (UCSD) and the VA were known for their excellent psychiatry program and there was the added bonus of living in San Diego. Prior to this rotation in 1995, most of my exposure to psychiatry had taken the form of medical school lectures, theoretical textbook knowledge and rather limited exposure to actual psychiatric patients. This may have been part of the reason why I had a rather naïve and romanticized view of psychiatry. I thought that the mental anguish of psychiatric patients would foster their creativity and that they were somehow plunging from one existentialist crisis into another. I was hoping to engage in some witty repartee with the creative patients and to learn from their philosophical insights about the actual meaning of life. I imagined that interactions with psychiatric patients would be similar to those that I had seen in Woody Allen’s movies: a neurotic, but intelligent artist or author would be sitting on a leather couch and sharing his dreams and anxieties with his psychiatrist.
I quietly stood in a corner of the ECT room, eavesdropping on the conversations between the psychiatrist, the patient and the other physicians in the room. I gradually began to understand that “ECT” stood for “Electroconvulsive Therapy”. The patient had severe depression and had failed to respond to multiple antidepressant medications. He would now receive ECT, commonly known as electroshock therapy, a measure that was reserved for only very severe cases of refractory mental illness. After the patient was sedated, the psychiatrist initiated the electrical charge that induced a small seizure in the patient. I watched the arms and legs of the patient jerk and shake. Instead of participating in a Woody-Allen-style discussion with a patient, I had ended up in a scene reminiscent of “One Flew Over the Cuckoo's Nest”, a silent witness to a method that I thought was both antiquated and barbaric. The ECT procedure did not take very long, and we left the room to let the sedation wear off and give the patient some time to rest and recover. As I walked away from the room, I realized that my ridiculously glamorized image of mental illness was already beginning to fall apart on the first day of my rotation.
During the subsequent weeks, I received an eye-opening crash course in psychiatry. I became acquainted with DSM-IV, the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders, which was the sacred scripture of American psychiatry according to which mental illnesses were diagnosed and classified. I learned that ECT was reserved for the most severe cases, and that a typical patient was usually prescribed medications such as anti-psychotics, mood stabilizers or anti-depressants. I was surprised to see that psychoanalysis had gone out of fashion. Depictions of the USA in German popular culture and Hollywood movies had led me to believe that many, if not most, Americans had their own personal psychoanalysts. My psychiatry rotation at the VA took place in the mid-1990s, the boom time for psychoactive medications such as Prozac and the concomitant demise of psychoanalysis.
I found it exceedingly difficult to work with the DSM-IV and to appropriately diagnose patients. The two biggest obstacles I encountered were a) determining cause-effect relationships in mental illness and b) distinguishing between regular human emotions and true mental illness. The DSM-IV criteria for diagnosing a “Major Depressive Episode” included depressive symptoms such as sadness or guilt which were severe enough to “cause clinically significant distress or impairment in social, occupational, or other important areas of functioning”. I had seen a number of patients who were very sad and had lost their job, but I could not determine whether the sadness had impaired their “occupational functioning” or whether they had first lost their job and this had in turn caused profound sadness. Any determination of causality was based on the self-report of patients, and their memories of event sequences were highly subjective.
The distinction between “regular” human emotions and mental illness was another challenge for me, and the criteria in the DSM-IV manual seemed so broad that what I would have considered “sadness” was now being labeled as a Major Depression. A number of patients that I saw had severe mental illnesses such as depression, a condition so disabling that they could hardly eat, sleep or work. The patient who had undergone ECT on my first day belonged to that category. However, the majority of patients exhibited only some impairment in their sleep or eating patterns and experienced a degree of sadness or anxiety that I had seen in myself or my friends. I had considered transient episodes of anxiety or unhappiness as part of the spectrum of human emotional experience. The problem I saw with the patients in my psychiatry rotation was that these patients were not only being labeled with a diagnosis such as “Major Depression”, but were then prescribed antidepressant medications without any clear plan to ever take them off the medications. By coincidence, that year I met the forensic psychiatrist Ansar Haroun, who was also on faculty at UCSD and was able to help me with my concerns. Due to his extensive work in the court system and his rigorous analysis of mental states for legal proceedings, Haroun was an expert on causality in psychiatry as well as on the definition of what constitutes a truly pathological mental state.
Regarding the issue of causality, Haroun explained to me that the complexity of the mind and of mental states makes it extremely difficult to clearly define cause and effect relationships in psychiatry. In infectious diseases, for example, specific bacteria can be identified by laboratory tests as causes of a fever. The fever normally does not precede the bacterial infection nor does it cause the bacterial infection. The diagnosis of mental illnesses, on the other hand, rests on subjective assessments of patients and is further complicated by the fact that there are no clearly defined biological causes or even objective markers of most mental illnesses. Psychiatric diagnoses are therefore often based on patterns of symptoms and a presumed causality. If a patient exhibits symptoms of a depressed mood and has also lost his or her job during that same time period, psychiatrists then have to determine whether the depression was the cause of losing the job or whether the job loss caused depressive symptoms. In my limited experience with psychiatry and the many discussions I have had with practicing psychiatrists, it appears that the leeway given to psychiatrists to assess cause-effect relationships may result in an over-diagnosis of mental illnesses or an over-estimation of their impact.
I also learned from Haroun that the question of how to address the distinction between the spectrum of “regular” human emotions and actual mental illness had resulted in a very active debate in the field of psychiatry. Haroun directed me towards the writings of Thomas Szasz, who was a brilliant psychiatrist but also a critic of psychiatry, repeatedly pointing out the limited scientific evidence for diagnoses of mental illness. Szasz’s book “The Myth of Mental Illness” was first published in 1960 and challenged the foundations of modern psychiatry. One of his core criticisms of psychiatry was that his colleagues had begun to over-diagnose mental illnesses by blurring the boundaries between everyday emotions and true diseases. Every dis-ease (discomfort) was being turned into a disease that required a therapy. The reasons for this overreach by psychiatry were manifold, ranging from society and the state trying to regulate what was acceptable or normal behavior to psychiatrists and pharmaceutical companies that would benefit financially from the over-diagnosis of mental illness. An excellent overview of his essays can be found in his book “The Medicalization of Everyday Life”. Even though Szasz passed away earlier this year, psychiatrists and researchers are now increasingly voicing their concerns about the direction that modern psychiatry has taken. Allan Horwitz and Jerome Wakefield, for example, have recently published “The Loss of Sadness: How Psychiatry Transformed Normal Sorrow into Depressive Disorder” and “All We Have to Fear: Psychiatry's Transformation of Natural Anxieties into Mental Disorders”. Unlike Szasz, who went so far as to deny the existence of mental illness, Horwitz and Wakefield have taken a more nuanced approach. They accept the existence of true mental illnesses, admit that these illnesses can be disabling and acknowledge that patients who are afflicted by mental illnesses do require psychiatric treatment. However, Horwitz and Wakefield criticize the massive over-diagnosis of mental illness and point out the need to distinguish true mental illnesses from normal sadness and anxiety.
Before I started my psychiatry rotation in San Diego, I had been convinced that mental illness fostered creativity. I had never really studied the question in much detail, but there were constant references in popular culture, movies, books and TV shows to the creative minds of patients with mental illness. The supposed link between mental illness and creativity was so engrained in my mind that the word “psychotic” automatically evoked images of van Gogh’s paintings and other geniuses whose creative minds were fueled by the bizarreness of their thoughts. Once I began seeing psychiatric patients who truly suffered from severe disabling mental illnesses, it became very difficult for me to maintain this romanticized view of mental illness. People who truly suffered from severe depression had difficulties even getting out of bed, getting dressed and meeting their basic needs. It was difficult to envision someone suffering from such a disabling condition being able to write large volumes of poetry or to analyze the data from ground-breaking experiments. The brilliant book “Creativity and Madness: New Findings and Old Stereotypes” by Albert Rothenberg helped me understand that the supposed link between creativity and mental illness was primarily based on myths, anecdotes and a selection bias in which the creative accomplishments of patients with mental illness were glorified and attributed to the illness itself. Geniuses who suffered from schizophrenia or depression were not creative because of their mental illness but in spite of their mental illness.
I began to realize that the over-diagnosis of mental illness and the departure from causality that had become characteristic of contemporary psychiatry also helped foster the myth that mental illness enhances creativity. Many beautiful pieces of literature or art can be inspired by emotional states such as the sadness of unrequited love or the death of a loved one. Creativity is often a response to a state of discomfort or dis-ease, an attempt to seek out comfort. However, if definitions of mental illness are broadened to the extent that nearly every such dis-ease is considered a disease, one can easily fall into the trap of believing that mental illness indeed begets creativity. With respect to establishing causality, Rothenberg found that, contrary to the prevailing myth, mental illness was actually a disabling condition that prevented creative minds from completing their artistic or scientific tasks. A few years ago, I came across “Poets on Prozac: Mental Illness, Treatment, and the Creative Process”, a collection of essays written by poets who suffer from mental illness. The personal accounts of most poets suggest that their mental illnesses did not help them write their poetry, but actually acted as major hindrances. It was only when their illness was adequately treated and they were in a state of remission that they were able to write poems. A recent comprehensive analysis of studies that attempt to link creativity and mental illness can be found in the excellent textbook “Explaining Creativity: The Science of Human Innovation” by Keith Sawyer, who concludes that there is no scientific evidence for the claim that mental illness promotes creativity. He also points to a possible origin of this myth:
The mental illness myth is based in cultural conceptions of creativity that date from the Romantic era, as a pure expression of inner inspiration, an isolated genius, unconstrained by reason and convention.
I assumed that the myth had finally been laid to rest, but, to my surprise, I came across the headline Creativity 'closely entwined with mental illness' on the BBC website in October 2012. The BBC story was referring to the large-scale Swedish study “Mental illness, suicide and creativity: 40-Year prospective total population study” by Simon Kyaga and his colleagues at the Karolinska Institute, published online in the Journal of Psychiatric Research. The BBC news report stated “Creativity is often part of a mental illness, with writers particularly susceptible, according to a study of more than a million people” and continued:
Lead researcher Dr Simon Kyaga said the findings suggested disorders should be viewed in a new light and that certain traits might be beneficial or desirable.
For example, the restrictive and intense interests of someone with autism and the manic drive of a person with bipolar disorder might provide the necessary focus and determination for genius and creativity.
Similarly, the disordered thoughts associated with schizophrenia might spark the all-important originality element of a masterpiece.
These statements went against nearly all the recent scientific literature on the supposed link between creativity and mental illness and once again rehashed the tired, romanticized myth of the mentally ill genius. I was puzzled by these claims and decided to read the original paper. There was the additional benefit of learning more about the mental health of Swedes, because my wife is a Swedish-American. It never hurts to know more about the mental health or the creative potential of one’s spouse.
Kyaga’s study did not measure creativity itself, but merely assessed correlations between self-reported “creative professions” and the diagnoses of mental illness in the Swedish population. Creative professions included scientific professions (primarily scientists and university faculty members) as well as artistic professions such as visual artists, authors, dancers and musicians. The deeply flawed assumption of the study was that if an individual has a “creative profession”, he or she has a higher likelihood of being a creative person. Accountants were used as a “control”, implying that being an accountant does not involve much creativity. This may hold true for Sweden, but the creativity of accountants in the USA has been demonstrated by the recent plethora of financial scandals. The size of the Kyaga study was quite impressive, involving over one million patients and collecting data on the relatives of patients. The fact that Sweden has a total population of about 9.5 million and that more than one million of its adult citizens are registered in a national database as having at least one mental illness is both remarkable and worrisome.
The main outcome was the likelihood that patients with certain mental illnesses such as depression, schizophrenia or anxiety disorders were engaged in a “creative profession”. The results of the study directly contradicted the BBC hyperbole:
We found no positive association between psychopathology and overall creative professions except for bipolar disorder. Rather, individuals holding creative professions had a significantly reduced likelihood of being diagnosed with schizophrenia, schizoaffective disorder, unipolar depression, anxiety disorders, alcohol abuse, drug abuse, autism, ADHD, or of committing suicide.
Not only did the authors fail to find a positive correlation between creative professions and mental illnesses (with the exception of bipolar disorder), they actually found the opposite of what they had suspected: Patients with mental illnesses were less likely to engage in a creative profession.
Their findings do not come as a surprise to anyone who has been following the scientific literature on this topic. After all, the disabling features of mental illness make it very difficult to maintain a creative profession. Kyaga and colleagues also presented a contrived subgroup analysis to test whether there was any group within the “creative professions” that showed a positive correlation with mental illness. It appears contrived because they only broke down the artistic professions but did not perform a similar analysis for the scientific professions. Among all these subgroup analyses, the researchers found a positive correlation between the self-reported profession ‘author’ and a number of mental illnesses. However, they also found that other artistic professions did not show such a positive correlation.
How the results of this study gave rise to the blatant misinterpretation reported by the BBC that “the disordered thoughts associated with schizophrenia might spark the all-important originality element of a masterpiece” is a mystery in itself. It shows the power of the myth of the mad genius and how myths and convictions can tempt us to misinterpret data in a way that maintains the mythic narrative. The myth may also be an important component in the attempt to medicalize everyday emotions. The notion that mental illness fosters creativity could make the diagnosis more palatable. You may be mentally ill, but don’t worry, because it might inspire you to paint like van Gogh or write poems like Sylvia Plath.
A study of the prevalence of mental illness published in the Archives of General Psychiatry in 2005 estimated that roughly half of all Americans will have been diagnosed with a mental illness by the time they reach the age of 75. This estimate was based on the DSM-IV criteria for mental illness, but the newer DSM-V manual will be released in 2013 and is likely to further expand the diagnosis of mental illness. The DSM-IV criteria had made an allowance for bereavement, to avoid diagnosing people who were profoundly sad after the loss of a loved one with the mental illness depression. This bereavement exemption will likely be removed from the new DSM-V criteria, so that the diagnosis of major depression can be used even during the grieving period. The small group of patients who are afflicted with disabling mental illness do not find their suffering to be glamorous. There is a large number of patients who experience normal sadness or anxiety and end up being inappropriately diagnosed with mental illness using broad and lax criteria of what constitutes an illness. Are these patients comforted by romanticized myths about mental illness? The continuing over-reach of psychiatry in its attempt to medicalize emotions, supported by the pharmaceutical industry that reaps large profits from this over-reach, should be of great concern to all of society. We need to wade through the fog of pseudoscience and myths to consider the difference between dis-ease and disease and the cost of medicalizing human emotions.
Image Credit: Wikimedia Commons Public Domain ECT machine (1960s) by Nasko and Self-Portrait of van Gogh.
August 20, 2012
The Rats of War: Konrad Lorenz and the Anthropic Shift
What we might remember most about the London 2012 Olympics are the medal ceremonies. The proud, the tearful, the exhausted, the awestruck, the lip-syncing, and occasionally the unimpressed. We might also call to mind the relative equanimity with which silver and bronze medalists tolerated the national anthems of the winning nation. Nobel laureate Konrad Lorenz (1903-1989), an Austrian zoologist and co-founder with Niko Tinbergen of the field of ethology – the biology of behavior – remarked in his popular book On Aggression (1966) that the Olympic Games are the only occasion when the playing of the anthem of another nation does not arouse hostility. Athletic ideals of fair play and chivalry, he said, balance out national enthusiasm. Olympic sports, you see, have all the virtues of war without all that unpleasant killing and plundering and, importantly, without aggravating international hatred. To serve as a surrogate for war, Olympic sports should be as dangerous as possible and should call for a measure of self-sacrifice. This being the case, one wonders why jousting is not an Olympic sport. Perhaps NBC simply chose not to screen it.
The destructive intensity of the aggressive drive that propels us to war is mankind’s hereditary evil, as Lorenz termed it, and its evolutionary origins can be sought in tribal conflict. In the early Stone Age, intra-tribal skirmishes would have paid out some evolutionary dividends: dispersion of the population, selection of the strong and, especially, defense of the brood. But in more contemporary times, now that we have overcome our most immediate environmental limitations – that is, we are for the most part neither starving nor prey items – and are equipped with weapons, a more dangerous, indeed an “evil”, intra-specific selection prevails. What was once healthy for the species in the form of an instinctive behavior called “militant enthusiasm” has now turned pathological.
Lorenz’s analysis was based upon a lifetime studying a variety of animals, though he is especially known for his bird work. Together with Tinbergen and other classical ethologists he proposed several important hypotheses: behaviors come in constellations of instinctive activities called fixed action patterns; these get released by specific stimuli; the behaviors should be regarded as adaptive responses shaped by evolutionary forces; the adoption of certain behaviors can be phase specific, occurring at certain life stages – for instance, imprinting, where young Graylag goslings instinctively follow their parents, even if the parent is substituted by Lorenz himself! When in 1973 Konrad Lorenz, Niko Tinbergen and Karl von Frisch were awarded the Nobel Prize in Physiology or Medicine for the development of ethology, it was recognized that they had created a new science. In addition to shedding light on the behavior of lower animals, the new discipline had implications for “social medicine, psychiatry, and psychosomatic medicine”. If it had no conceivable bearing on an understanding of the human condition, it is unlikely that the ethologists would have won a Nobel Prize.
Ethology’s shift from a basic zoological discipline to an applied one was not without controversy among its practitioners, some of whom wanted to restrict it to fundamentals for a more extended period. However, there is, it seems, a special, apparently inevitable, moment in works on animal behavior where the author switches from their account of chimps, bees, fishes, geese, rats or another favored organism and tells us what it means to be human. I call this the anthropic shift. The behavior of the human animal need not be an area of particular expertise for the author; the switch is presumed to be validated by the evolutionary continuity of humans with other animals.
An inclination toward an anthropic shift is anticipated in the work of Charles Darwin. Although the implications of natural selection for humans occupied Darwin for some time before the publication of On the Origin of Species (1859), humans are scarcely mentioned in that volume. It took Darwin more than a decade to publish his version of the anthropic shift, which he eventually did in The Descent of Man, and Selection in Relation to Sex (1871) and in The Expression of the Emotions in Man and Animals (1872). One could call this the classic anthropic shift – the author waits a respectful period of time before pronouncing on human affairs.
There are some early attempts in Lorenz’s work to make the implications of his studies of the specific behavior of specific organisms apparent for humans, including, infamously, his attempts to reconcile his science with the aims of National Socialism (which I discuss here). It is in Lorenz’s On Aggression, the work of his maturity, that there is a full flowering of his thoughts on human behavior and misbehavior. Although this book is dominated by observations of other animals, Lorenz reserves the final chapters of On Aggression for his assessment of human affairs. This version of writing the anthropic shift – the succinct but confident summary of the implications of the study of other animals for human affairs – is characteristic of our age, in which the scientist has lost all bashfulness in opining on human nature.
In what follows I summarize Lorenz’s diagnosis of the human condition, our current predicament and the remedies he suggested, grounded in ethological principles. In the Lorenzian anthropic shift he is attentive to our aggressive tendencies, especially the instinctive behavior that he calls militant enthusiasm. If the lessons learned from an ethological inspection of lower animals are correctly applied we might just be able to avert a global catastrophe. Some time soon, no doubt.
An unbiased observer from another planet, reflecting on human behavior from a perch close enough to capture the broad strokes of human conduct but far enough away not to sweat the details of our separate behaviors, would surmise that we are rats. Or so Lorenz concluded in On Aggression. The extraterrestrial would infer this based upon the observations that both rats and humans are “social and peaceful beings within their clans, but veritable devils towards all fellow-members of their species not belonging to their own communities.” Our Martian would have more optimism about the future of rats than humans, says Lorenz, since rats stop reproducing when a state of overcrowding is reached. We do not.
Lorenz provided an edifying, if somewhat chilling, account of rat group-on-group violence, much of which seemingly was worked out in experimental arenas. The work is mainly from one F. Steiniger and is summarized by Lorenz. Steiniger found that when rats were introduced into an enclosure, aggression grew incrementally after a period of wariness. Once pair formation between male and female rats occurred, violence escalated and within a couple of weeks a mated couple typically killed all other residents. Death often came to a rat in the form of peritoneal sepsis – a rat dies of a multitude of suppurating cuts. That being said, a skilled rat can deftly inflict a nip on the carotid artery. Exhaustion and nervous overstimulation leading to adrenal gland disruption were other leading causes of death among beleaguered rats.
The basis of most groups of rats is the genetically related family – rat mothers, rat fathers, rat grandparents, rat siblings and rat cousins all getting along with mutual accord. Tender and considerate are rats to members of their family group. Larger animals will, for example, “good humouredly allow smaller ones to take pieces of food away from them.” In matters of reproduction they’ll generously step aside and let “half- and three-quarter grown animals…take precedence of the adults.” An intruder, however, is not treated so solicitously; it is rapidly routed and killed by bites. Since rats identify family members by smell, the experimenter can manipulate the odor of an animal and turn a beloved family member into a threatening intruder. Grandpa had never been so bewildered. In one such experiment Lorenz assured the reader, though with a note of apology to the biologist who one supposes will want to view the spectacle to its ghastly end, that the experimental animal was spared its fate and removed into protective custody.
On viewing humans and rats, Lorenz’s extraterrestrial may find these species indistinguishable because aspects of their social behavior are so head-scratchingly difficult to fathom. Group hatred between rat-clans and the human appetite for war seem inexplicable viewed functionally. Because of the difficulty in deriving an evolutionary explanation for rat-on-rat attacks from the perspective of natural selection, Lorenz obliquely speculated that rat-clan gang fights are the outcome of sexual selection (selection based on differential mating success) where there is “grave danger that members of a species may in demented competition drive each other into the most stupid blind alley of evolution.” But Lorenz is equivocal here, conceding that unknown external factors may still be at work. “It is quite possible”, he concluded, that “group hate between rat-clans is really a diabolical invention which serves no good purpose.” That being said, he seems more confident that human group loyalty and generosity arose from tribal conflict. That rat and human tribes evolved cooperative tactics in the face of inter-group conflict, a group selection argument, has fallen out of favor with evolutionary biologists and is the basis for some of the criticism leveled at Lorenz. “The trouble with these books [the books of Lorenz and some other ethologists]”, Richard Dawkins fulminated in The Selfish Gene (1976), “is that their authors got it totally and utterly wrong because they misunderstood how evolution works”.
Humanity’s greatest paradox is that those gifts which we treasure above all others, our braininess and our capacity for speech, are the ones which may bring about our extinction. We have, says Lorenz, been driven “out of the paradise in which [we] could follow [our] instincts with impunity.” Our evolutionarily derived capacity for culture confers on humans a facility for rapid change. What we gained with this capacity outstripped the limited injunctions we have against employing this capacity in those circumstances when we should not. Our aggravated competence in mayhem – aggression against others and destruction of the environment – is not sufficiently kept in check. A centerpiece of Lorenz’s claim, one that he repeats in several books, is that species which in the ordinary course of matters have a limited capacity to inflict damage on conspecifics have a correspondingly feeble inhibition against killing. When a dove is trapped with another dove it has no phylogenetically derived compunction against gouging its peaceful neighbor to death. So it is with humans and their rapidly evolving capacity for mischief. We are like a dove that “suddenly acquired the beak of a raven”. We don’t know how to turn the killer off, because we’ve never really had to before.
Lorenz may not have been the first to formulate the thesis that although we are certainly of nature, subject to the same evolutionary laws as other species, we are yet spat out of nature as a consequence of the forces of cultural flexibility. Paul Sears, the American ecologist, wrote in a similar vein in the late 1950s: “With the cultural devices of fire, clothing, shelter, and tools [Man] was able to do what no other organism could do without changing its original character. Cultural change was, for the first time, substituted for biological evolution as a means of adapting an organism to new habitats in a widening range that eventually came to include the whole earth.”
Now the human aptitude for carnage may have swollen beyond the easy reaches of our inhibitions, but that does not mean that such moral inhibitions do not exist. Nor does it mean that we cannot amplify them. Balancing our aggression against others is our capacity for love and forbearance within the clan. What Lorenz has in mind is not the coolly rational morality of a Kantian categorical imperative. (Lorenz was, by the by, one of the inheritors of Kant’s professorial chair at the University of Königsberg.) The love of which Lorenz speaks is a phylogenetically inherited moral regard for one another. The fate of humanity, Lorenz said, rests on whether this instinct can cope with “its growing burden.”
Manning the defensive walls alongside moral responsibility is our “phylogenetically programmed” love for custom. Institutionalized ritual and custom act like a skeleton around which a culture develops. Specific rituals are passed from generation to generation. Of course, custom can be irrational and may misfire, as it does in the case of “jeering at a fat boy” (Lorenz’s example). Grosser errors still can arise from customs associated with warrior culture, adaptive at one time but obsolete in present ecological and sociological times.
Lorenz cautioned against the unconsidered elimination of cultural components, even in the case of “mild reciprocal head hunting” (apparently Margaret Mead’s term). This is because culture develops as an integrated whole. What assembles together sunders – so goes the theory. A possible source of cultural unraveling comes from the mixing of cultures. This was an argument that Lorenz had insisted upon since the 1930s, when he first pronounced it in a publication calculated to show a resonance between his work and National Socialism. At the time of receiving the Nobel Prize he apologized for his naivety, an apology that satisfied some colleagues but certainly not all. The argument remained intact in On Aggression. But in addition to the temptation to deliberately remove unfortunate cultural attributes, elements of culture were unraveling, as Lorenz saw it, under the influence of a break in the traditional intergenerational transmission of information. He dates an especially major shift to about 1900. After this, kids stopped listening to parents and teachers.
A detailed examination of the case of militant enthusiasm is the centerpiece of Lorenz’s anthropic shift. Enthusiasm, for short, is “a specialized form of communal aggression”, but this behavior interacts with culturally ritualized activities and thus may be controlled by rational insight. In other words, there is nothing we can do to ablate enthusiasm from our behavioral repertoire – the eye may still mist during the national anthem but Olympians are disinclined to jump one another. In fact, this is the nub of the matter: aggression is rooted so deep that it attaches to those things most dear to us. The conclusion from this is that man (Lorenz wrote at a time when “man” stood in unblushingly for all of humankind) was Janus-headed, with an evolutionarily endowed potential to commit to all sorts of noble things, but will meanwhile readily dispatch his brother for the sake of these same values.
Lorenz’s solutions to the problems of aggression, set out so elaborately in On Aggression, are disarmingly simple; banal, in fact, is his word for them. So simple that one senses he worried that one might not, after all, have needed all that ethological labor to propose them. There are four solutions: Know thyself, ethologically; cathartically sublimate the aggressive (and libidinous) drives; promote international friendship; and, most importantly, channel militant enthusiasm into just causes. En passant, he advises against mere suppression of instincts since aggression builds up hydraulically (an analogy in Lorenz that links him to Sigmund Freud); it cannot long be controlled. You may be glad to learn that eugenic planning is excluded as highly inadvisable. He is also enthusiastic about the role of humor in puncturing the pretensions of those who might lead us along false paths (“we do not as yet take humour seriously enough”).
In his roster of solutions, international sport figures prominently as an opportunity to discharge aggressive instincts. The discharge of that particular form of aggression, militant enthusiasm, can be achieved by redeploying it to causes as diverse as civil rights, the prevention of war (though not, admittedly, as appealing as war itself), and the “three great enterprises” of art, science, and medicine.
Lorenz ended On Aggression on a note of optimism. “I believe”, he wrote, “that reason can and will exert a selection pressure in the right direction. I believe that this, in the not too distant future, will endow our descendants with the faculty of fulfilling the greatest and most beautiful of all commandments.”
In 1975, when E. O. Wilson published his groundbreaking and controversial book Sociobiology: The New Synthesis, he predicted that ethology would simply be subsumed by sociobiology, behavioral ecology, neurophysiology, and psychology. In fact, by the time the ethologists won their Nobel Prize in 1973 the phase of classical ethology was over. So many of the foundational concepts of Lorenz and Tinbergen had fallen into disuse that later in his life a note of exasperation crept into Lorenz’s writing. Thus the apparatus with which Lorenz reached his conclusions was considered largely unnecessary by contemporary students of human behavior.
This does not mean that Lorenz was wrong. Few biologists might contradict a conclusion that aggression has an instinctive component and that an evolutionary understanding of aggression can contribute to solutions. Nor might many be averse to learning about the nature of war from rats. Nevertheless, extending ethology to humans with a confidence seen in Lorenz’s work might strike many as hubristic. Indeed, it is clear that Niko Tinbergen thought so, and he remained more modest in his claims. But at the end of the day all anthropic shifts may be hubristic, even if such claims are accompanied by that most charming cousin of hubris: unbounded optimism.
It may be apparent to some readers of this piece that there exists an extravagant parallel between Lorenz’s On Aggression and E O Wilson’s new book The Social Conquest of Earth (2012). Like many writers of the anthropic shift, both have an expertise in “lower organisms” (Wilson famously is an ant guy); both invoke a group selection hypothesis to explain altruism and loyalty within human tribes; both think that the aggression that leads to war is our hereditary curse (Wilson) or evil (Lorenz); both think that the better and lesser aspects of our natures are at war with one another; both have invoked the wrath of Richard Dawkins in almost identical fashion; both have unbridled optimism about the future, if only we listen to them. This is not the place to explore these similarities, though I encourage you to read both books and, if you care to, join us in conversation about them (see here).
The anthropic shift, the compulsion to draw upon evolutionary insights from other organisms to bring to bear on the human condition, is solid, it seems to me, and both Lorenz and Wilson have important things to say. Nevertheless, the zoological approach taken alone, without insights from humanistic disciplines, or from the social sciences that are committed directly to the study of humans, or from the arts, offers us quite little. After all, global events since Lorenz wrote On Aggression suggest that his formula was either unheeded or unworkable on a scale that matches the immensity of our problems. Wilson seems to acknowledge this, and makes enthusiastic noises about interdisciplinarity while also noting that pure philosophy has “abandoned the foundational questions about human existence.” The responses from both within and beyond his academic discipline nevertheless seem aggressively hostile to his latest attempt to save humankind. Jousting never looked more lethal.
[Note: I was given a copy of On Aggression by my mother as a requested Christmas gift when I was 19. It has, therefore, taken me 30 years to write about it. At this rate I'll have a piece of writing on Infinite Jest in 2042].
 It’s been pointed out that doves do not in fact behave as Lorenz repeatedly asserted they do, that is, torture a neighbor to death when that unfortunate neighbor cannot escape.
 Sears PB. 1957. The Ecology of Man. [Oregon State System of Higher Education, Condon Lectures.] Eugene, OR: University of Oregon Press.
May 28, 2012
When the Fruit Ripens Seed Scatters: Notes towards a History of Motility
Quum fructus maturus semina dispergat. Linnæus, Philosophia Botanica, 1751
1. In The Beginning Was the Verb
In the beginning was the Verb, and the Verb was with God, and the Verb set all things in motion. More than just any Word (Latin verbum, word) the God who is, was, and shall be a Verb commuted motion of an Absolute form to Relative Motion. In the universe created of the Verb everything moves; absolutes have no meaning.
And some things rose and other things fell. Those which rose remained in constant motion until impeded, and of those which fell some acquired spontaneous motion. These self-moved movers, called motile, include some cells, spores, the quadrupeds, and the bipeds. The Philosopher studied the motile keenly, since the prime mover and all that had risen remained less accessible to knowledge. Since the self-moved require the unmoving for motion, they must themselves be, he concluded, composed of a series of both fixed and moving parts, at the seat of which is an unmoved mover – the animal soul. In this way the motile mimic the first mover.
Living things move, and they share this characteristic with every other thing; there is no absolute stasis, that is, there can only ever be relative stasis. Movement differs from motility in as much as the latter, in its most fully expressed form, is movement where a purpose that goads, a desire that compels, and a body that advances converge.
2. Arise and Be Bipedal
Humans possess an unusual form of bipedality technically called walking. Walking emerged earlier than did a brain large enough to befuddle us regarding our destination or pensive enough to cogitate walking’s origins. It is the oldest of our peculiarities, and the process and its origins remain fruitfully perplexing. As engineer Tad McGeer, designer of passive walking machines, wrote more than a couple of decades ago: “Today we can build machines to travel beyond the other planets, yet we do not really understand how we move about on our own two legs.” But there is no shortage of bright ideas about the phenomenon. Like other bipedalisms (that, for instance, of dinosaurs, birds, lizards, kangaroos, ostriches, and even cockroaches when one provokes them appropriately) walking merits examination from an energetics perspective. Energy spent on slower movement (compared to running, that is) is reimbursed by the energetics of pendular action: a leg swings out from the hips, followed by the succeeding leg as the first leg performs an inverted pendular motion from heel to toe. All accompanied by arm swinging. Sporting a jaunty hat remains a human innovation. Thus a series of fixed and moving parts propels the animal along with relatively little energy wasted. All bipeds are Aristotelian, though for the most part unwittingly so.
Of certain squabbles it can be said that they are productive without being settled; of others that they are unsettling without being productive. Questions concerning human origins remain both unsettled and unsettling. While considerations of energetic efficiency, especially over longer distances, point to a selective advantage for walking, nevertheless there is little agreement on what the most parsimonious explanation might be. Walking frees up the hands for foraging and for carrying children; it presents the tropical sun with a diminished target and thus may be thermodynamically recommended; and so forth.
Hominins have walked the earth for four million years or so. Four million years of ambulating with purpose. Since things did not come to us, we marched off to them. That is, human mobility, however it was achieved, and to whatever selective pressure it was a response, was always a walking to. Food goaded, human appetites compelled, and an erect body complied.
3. Let Them (foodstuffs) Come Onto Me
Though a person might well walk and chew gum at the same time, it’s unlikely that she will walk and write at the same time. Nietzsche’s aphorisms may be the closest we have to mobilography – writing born on the hoof. Writing may overcome space and time but it also, with consequences, impedes movement. History, therefore, is a report by the sedentary (Latin sedēre, to sit) written for the stationary. Not surprisingly, academic disquisitions prioritize fixity over mobility. Even the lives of nomads have typically been characterized as fanning out from an immobile sacred center.
Sedentarism is a plant’s revenge. The late Peter Wilson, the New Zealand anthropologist, in his now classic account of the origins of architecture, The Domestication of the Human Species, pointed out that while we were busy domesticating plants and animals, they were reciprocating by domesticating us. We fumbled around with their edible reproductive parts; they conferred upon us their rootedness. So, permanent architectural structures and the Neolithic revolution coincide in their origins. Both the domestication of creatures and the setting up of a domicile called for a settling down – a cessation of movement that, though not absolute, was decisive. Agnostic though one might be about the progressive nature of the agricultural revolution, nonetheless the implications are such that civilization can be seen as a pimple on that revolution’s ample rump. On the basis of an agricultural productivity beyond the threshold of mere subsistence, the accoutrements of civilization emerged: a high degree of occupational specialization, writing, the growth of cities and so on. We traded mobility in the larger landscape for access to a larder. And even though our scholarly sensibilities may rail against so simple a dichotomy as nomadic versus sedentary lifestyles (and the correlates attendant to each), nonetheless one must resist being so refined as to reject a real discontinuity when we stumble across it.
Humans and their domesticated plants and animals have their place. In fact they make their place. Place, as the human geographers have told us, is space made personal. Proust’s madeleine – ten thousand years of post-agricultural history clarified and made delicious – conjured up an instant and a place, and not merely space-time co-ordinates (though it did that too). If the primordial ecology of our species was fashioned by traversing to things, the reversal involved in agriculture was that we are now bound to things in a place.
4. Though I Scattered Them Among the Nations
The sound of dehiscence is a barely audible pop. It is the process by which anthers, follicles, some fruits, spherules, pods and other biological capsules explode and release their mature contents. Less gloriously, the term is also reserved for the rupturing of a surgical wound, either superficially or completely, releasing the infected flesh from the strain of the suture. Whether the Great Dehiscence of the human population during the Age of Discovery can be considered a triumph or a calamity – the scattering of the matured human seed or a gangrenous discharge from an exploded wound – will, I suppose, depend on one’s perspective.
In the view of prehistorian Grahame Clark a distinctive attribute of humans is that they perceive the spatial and temporal dimensions of their environment more consciously and decisively than other animals. In freeing ourselves of some of our more immediate telluric constraints we extend a conception of space over progressively larger territory. Thus, Henry the Navigator (1394–1460), a Portuguese prince, exemplifies the esprit of early modern exploration. His achievements were more cerebral than swashbuckling. He recruited Arab scholars, Jewish merchants and mariners from around Europe to create maps that collated the most precise geographical information of the age. He encouraged changes in on-board instrumentation for calculating latitude. His fame, therefore, in some circles is more for his cerebrations concerning space than for his acumen in personally navigating it. Although he accumulated great wealth from West Africa for the Portuguese, he himself never joined in on an expedition there.
Less perfervidly, however, one might rename the Age of Discovery as the Age of Invasion, Conquest, and Occupation. Evaluated from this perspective Prince Henry appears more savage than savant. For example, he commissioned the design of the caravel, a vessel better equipped than the more traditional barca for traversing the treacherous waters of the West African coast. It was, of course, a craft perfectly suited to the task of plunder. The Portuguese made it as far as Cabo Branco (now, Ras Nouadhibou, Mauritania) in 1441. Within two years of this they were shipping back slaves to Portugal, a task for which the caravel was coincidentally well equipped. This was a defining early moment in the modern Atlantic slave trade.
The dehiscence of early modern Europe is thus a threshold event in the history of human motility. On the basis of the stored energy from domesticated plants and animals, and the subsequent accumulation of cultural ingenuity, social stratification, and the attrition of resources and landscapes, the merchant countries of Europe were ready by the 15th Century to teem across the globe.
Humans overcome the fear of being touched when they form a crowd, said Elias Canetti in Crowds and Power. An important moment in the genesis of a crowd comes when differences are discharged and all members are placed on an equal footing. But that happy moment is just an illusion – they are not equal. The thousands of years of human sedentary life were a lengthy gestation of the multitude, or a swarm. Now, in a bee swarm, the insects apparently take off to a new nest site with only a few individuals knowing the location of the new site, yet these few individuals guide the swarm to their new home. So it is with humans. The human swarm in the days of European exploration represented the migration of the many at the behest of the few. In this manner, contemporary migrations differ strikingly from the peregrination of early bipedal hominins.
5. Take up your Gadgets Daily…
Three themes of contemporary life are the compression of space and time and the miniaturization of the object. The agricultural revolution compressed space by bringing the necessities of life to our door; while also, it must be said, creating the door. The age of exploration and exploitation (which I term the European dehiscence) compressed time (and space) by making of our globe a more easily traversable marketplace. Finally, Steve Jobs compressed the object, making gadgets that can flit around the now tinier globe in our hip pockets. And when I say Steve Jobs here, I naturally mean to perch him on the shoulders of the giants of miniaturization.
The miniaturization of technology and the portability of objects are part of an evolutionary progression, according to Italian-born architect Paolo Soleri, whereby complexity increases over time and, he thinks, is in turn linked to miniaturization. Arcology, Soleri’s name for his combination of architecture, urban planning, and ecology, is based upon the notion that large systems dissipate energy, but small ones conserve it. Arcosanti, the town being built (slowly, very slowly) according to Soleri’s designs, will occupy only two percent of the footprint of conventional towns of comparable size.
Miniaturization thus has two dominant flavors. One is consistent with environmental concerns, where we scale back some dimensions of the human enterprise. Since the global footprint of the 7 billion of us is now greater than the biocapacity of the globe (that is, we are living by drawing down natural capital), miniaturization is an ultimate objective of Soleri‘s designs. The other trend provisions us with portable devices. If the physical plant is the symbol of industrial times, the iPod is the fruit of these…let’s call them post-industrial times – both terms have pleasing references to vegetation, the plant rooted, the pod prepared to dehisce and disperse.
Though one might think that the nanofication of devices gets us back to some sort of ur-technology – the tune-packed iPod as equivalent to the chipped flint in the hands of a hunter – the portable device is typically hiding its significant mass elsewhere (the entailments of production and waste). The conflicting trends in miniaturization can take us in two directions – the first is an environmentally motivated reduction that pulls us back within the limits of the planet, the second is a miniaturization that gets us off this planet. Interestingly though, Elon Musk, a co-founder of SpaceX whose craft, the Dragon, just docked with the International Space Station, stresses environmental concerns in touting multiplanetary life as a plan for guaranteeing human survival.
In his book The Invisible Pyramid (1970), written right after the first biped stepped onto the moon, Loren Eiseley contemplated the inner and outer space of humanity. In a chapter called The Spore Bearers he compares us to the fungus Pilobolus, whose countless spores get hurtled away from the capsule in which they matured. Though the story of humans in space may not have progressed as rapidly as some in 1970 may have predicted, it may yet be the case that our most unbridled motility is just ahead of us.
All things move, some things are motile; motile humans rose up and peregrinated across Pliocene savannas; a complicity with plants ended our peripatetic ways, plant and man settled down; the relatively vast populations of the Old World dehisced and pullulated across the globe; contemporary humans conferred mobility on things that they formerly left behind; the human enterprise marched to the limits of the globe; some urge curtailment, while others watched optimistically as the SpaceX Dragon connected to the International Space Station
….and there shall be no night there; and they need no candle…
Photo Credit: The photograph of running legs is by Randall Honold. The editor generously donated the sperm. The idea for this piece came up during a conversation with my DePaul University Human Impacts on the Environment Class - those kids are the best!
February 06, 2012
The Human Peacock’s Ghastly Tail
“He was violent?”
She exhaled. “I don’t know. What’s ‘violent’ anymore? He was a teenage guy. Then, a guy in his twenties."
—Richard Powers, The Echo Maker
Once upon a time, there was an editor of a short-lived academic journal called Evolutiona Pathologica who was fired in disgrace. In an interview published after his dismissal, the editor, a notoriously fastidious man, reported that papers in his journal often had a pronounced impact on the field primarily because they were unsound; unsound in their conception, imperfect in their analysis, defective in their conclusions drawn from meager data, and inflated in the claims they made about their practical implications. The papers were often wide of the mark, he conceded, and even occasionally bonkers. Yet, many papers were masterpieces precisely because refuting the claims strengthened the subdiscipline of evolutionary pathology. Or so he said.
Recently, while archiving the material from the defunct journal, I reread the manuscript the publication of which resulted in the editor’s dismissal. I also discovered an internal report on the dismissal that shed light on the case.
Before reproducing the offending paper – some of you, of course, will remember it well – I’ll remind you of some of the other mildly controversial pieces that appeared in the journal. For instance, in a rather famous special issue on the pathological origins and implications of bipedality, Professor J. P. X deRossa-Ellman made the celebrated claim that upright walking evolved to reduce the overstimulation of reflexology points on the hands and to intensify the quality of the massage on the feet. “As hominins shifted from an arboreal habitat,” deRossa-Ellman opined, “pressure on the hands, especially on the zones associated with the small intestines, inclined Australopithecines to a frightful gassiness. In contrast, the laudatory effects of passively massaging the feet by walking on the dewy grasses of the East African savannah produced a sense of well-being that disposed our primitive forebears to recreational coitus. Those more upright proto-humans joyously copulated, thus leading to increased fitness.” To the embarrassment of the journal it was later discovered that deRossa-Ellman ran a specialized massage parlor on the near North side called “Strange Beginnings/Happy Endings”. He also did a brisk business selling “genuine savannah grass”. Apparently you could also smoke the stuff.
In another issue, on evolutionary patterns in the peoples of Ireland, a rather tartly written article appeared in which Dr. Quentin Yeatly-Bawn claimed that the evolution of the mesmerizingly large cranium of Irish men was an adaptation designed to distract the colonizing usurpers of that island nation from what an Irish man was doing to them with his hands. A response, which ran under the title Q Yeatly-Bawn is out of His Tiny Mind, pointed out that though Irish foxes, stoats, and otters have large heads, Irish men were moderate in this respect at least.
These small skirmishes provoked a mildly negative response in comparison to the more controversial piece, the one that triggered the editor’s removal, which I reproduce in full, although it reads in a fragmentary way. In the archived box of material associated with the journal I found several of the reviewers’ comments on the piece; I also provide excerpts of these. The paper was published anonymously, which may have been part of the problem.
Evolutiona Pathologica 5: 12-17 NOTES AND OPINIONS
Scary Bastards and Sexy Wreckers: A Short Note on the Sexual Selection of Environmental Destructiveness
Evolution is a reality-based game where the score is tabulated exclusively by the number of extra lives a player accumulates. Evolution occurs not because organisms desire to play but because successful players sire those who incline to continue the game using the same rules as their progenitor. Technically this is captured by the term “fitness” in evolutionary biology – a measure of a genotype’s reproductive success (rated by surviving progeny) as compared to that of competing genotypes.
In addition to those characteristics of organisms that increase their ability to survive and reproduce, many organisms sport features and behaviors that may appear detrimental to their survival. The peacock’s tail is emblematic here – tail feathers so extravagantly developed that they largely confine the bird to the ground, increasing predation risk. To explain such seemingly paradoxical characteristics, as well as to explain the advantages that some individuals have in relation to reproduction, Charles Darwin proposed his theory of sexual selection. The theory can be helpfully applied in explaining a range of phenomena including pronounced showy plumage, sexual dimorphism, insect and bird song and so forth.
The mechanisms driving sexual selection include competition within sexes (intrasexual selection) and mate choice between the sexes (intersexual selection). Competition for access to mates is generally more prevalent in males, whereas choice of mates is more prevalent in females. This is because of the differential costs involved in reproductive success in the sexes. Sperm is cheap and copious; ova and investment in child-rearing are expensive. Therefore solving the evolutionary arithmetic problems of enhancing fitness produces strategies that are pronouncedly different in males and females. As Darwin concluded: “the greater size, strength, courage, pugnacity, and even energy of man, in comparison with the same qualities in woman, were acquired during primeval times, and augmented, chiefly through the contests of rival males…” Complicating these quite simple distinctions between male and female reproductive strategies is the observation that males may increase their fitness by investing in childrearing, and females can increase their fitness by extra-pair matings with males of high quality. Mating strategy will vary with sex, with age, and even with stage in the menstrual cycle. The way in which individuals “play” the game of enhancing their fitness continues to surprise those who study mating behavior, by which I mean it should surprise all of us.
In this note I propose that environmental destructiveness, the patterns of which have evaded the attention of evolutionary thinkers, is largely, but not exclusively, driven by intrasexual male contest competition for access to mating opportunities. When beard volume, voice timbre, penis size, and physical blows have insufficiently cowed the competition, then throwing a maladaptive spanner in the works of nature serves as an evolutionary escalation in the struggle for mating opportunities. Male contest competition leading to environmental despoliation may be strengthened by intersexual selection whereby women read environmental power as a signal for genetic quality. Thus environmental destruction escalated under the combined influence of competition between men, and mate selection by women. When male destructive behavior is fostered by the former process I label these men scary bastards. When fostered by the latter I label them sexy wreckers.
This hypothesis builds upon the following observations:
1. Men are more inclined to aggression and violence than women. This inclination is evolutionarily derived from contest competition for access to and monopolization of potential mates.
2. The ability to impose environmental destruction is correlated with other dominant male attributes and can similarly be interpreted as depriving rivals of mating opportunities. Unlike other expressions of male dominance, the ability to inflict environmental violence may peak much later in life than other indicators. Environmental vandalism, like the accumulation of wealth, may be an old man’s game. As such it is as likely to be strongly influenced by female choice as by contest competition.
3. Women may not necessarily find environmental destructiveness attractive. As Darwin noted females may accept “not the male that is most attractive to her, but the one which is least distasteful.” Unlike some male attributes, like muscularity and “bad boy” indicators, which are valued in short-term partners, environmental destructiveness may however be valued in long-term relationships if it signifies power and status. Environmental despoilers tend to be married but are, presumably, frequently cuckolded.
4. Since destructiveness is more common than creativity in men, one can conjecture that in prehistoric times vandalism was a more successful strategy than artistic production.
5. Environmental destruction has increased in contemporary times when most other forms of male contest competition have been minimized. This suggests a remedy which I will discuss below.
The Sexual Selection of Greater Male Direct Aggression
Men are more aggressive than women in the categories of physical aggression, verbal aggression, and hostility. Women, apparently, are just as angry. When the assessment of aggression is extended to include so-called manipulative forms of aggression the differences between men and women become less apparent. That is, women are proficient at gossiping, spreading rumors and so forth, and this may be a more successful strategy for social exclusion when the cost of direct aggression is high. Since many of the more pronounced physical differences between men and women, including greater male mandibular strength, greater muscle mass, etc., relate to the ability to both inflict and absorb aggressive blows, it seems reasonable to conclude that for men the cost of escalating violence paid some evolutionary dividends, but for women it did not.
The differences between male and female levels of direct aggression, as well as the relatively greater female fear of aggression, are read as evidence for the sexual selection of male aggression. When these data on direct aggression are put alongside data on the greater male than female variance in reproductive success, the existence of several male display characteristics, both vocal and visual, and the relatively greater mass and strength of men over women, the case for sexual selection as the explanatory process appears convincing.
Summary: Environmental destructiveness is a special category of aggression directed extra-somatically and depends not upon the ability to trade physical blows but rather upon the ability of males to extend the contest to the broader environment. Men who can inflict the most reckless damage on their environment (a part of their inclusive phenotype) are scarier and thus intimidate less destructive men who then concede mating opportunities to these dominant males (= scary bastards).
The Problem of Older Men
Males of polygynous species (where males have multiple mates simultaneously) will typically avoid encounters with older males till they are sufficiently mature to physically compete. If humans can be regarded as having polygynous tendencies then young adulthood is a risky time for males – enough testosterone to dull the fear of violence, but insufficient physical strength to compete reliably with mature males. Older males are also at risk in physical encounters as they enter a physical decline, when they are in danger of being deposed by younger competitors. Since peak physical condition is predictive of success in contest competition, one might expect pair bonding between men and women in the full bloom of young adulthood to be the norm. A discrepancy in the ages of mates in monogamous pair bonds is typical, though. The reason for the discrepancy is that wealth and status in males denote a capacity to provision mates and offspring with resources and should be a selection criterion applied by females to potential mates. A fifteen-year age difference is optimal.
Summary: Environmentally destructive tendencies provide a conspicuous metric of male wealth and power and therefore destructive men, (sexy wreckers), have clearly been driven by the sexual appetites of women.
Creation and Destruction
Suggestions that male creativity, quick-wittedness, brain size, and intelligence result from female mating selectivity have been challenged on a number of grounds. Evidence for the mild heritability of intelligence and a correlation between intelligence and sperm quality is presented in support of the hypothesis. The hypothesis that male braininess is sexually selected by female choice seems to be contradicted by the number of feeble-minded men who appear to be successfully mated, and perhaps more glaringly by a lack of pronounced difference in male and female intelligence. From the perspective of defending the mating-mind hypothesis, women are frustratingly brainy.
Summary: In contrast to inconsistent evidence for the emergence of male creativity and humor as a result of female mate selection, the evidence, at first glance, is better that environmental destruction is sexually selected. Males are more directly environmentally destructive. Some of this may build upon traditional roles. For instance, hunting and managing lands to improve hunting opportunities imposed significant damage. I speculate that ethnographic evidence will support the view that men are more recreationally aggressive with the environment.
Conclusion and Remedy
Darwin noted that the difficulty in regard to sexual selection “lies in understanding how it is that the males which conquer other males, or those which prove the most attractive to the females, leave a greater number of offspring to inherit their superiority than the beaten and less attractive males.” Incontrovertibly, environmentally destructive tendencies, like other male displays (outsized penises, for example), seem largely unnecessary, objectively unlovely, undeniably destructive, but, for all of that, fearsome to other men and preferred by the ladies. That is, both are subject to both intra- and intersexual selection.
In prehistoric times opportunities for environmental destructiveness beyond that necessary to meet basic needs were limited. In contemporary times environmental destruction can be conducted on planetary scales. This is clearly the result of runaway selection and is exacerbated by legal curbs on male-male aggressive competition, other than in the athletic arena. Since environmental destruction is both expensive and risky, it both increases the quality of the fitness signal and exacerbates the risk that none of us will be around to enjoy the other pleasures that being a sexually reproducing species brings. The remedy is simple: we need to invite men to resolve their contest competition in lower-risk situations (e.g. fight clubs) rather than at a global scale in war and destruction, and furthermore, request that women forgo the dubious pleasure of mating with men who are not committed to environmental sustainability.
1. Buss, A.H.; Perry, M., The aggression questionnaire. J. Pers. Soc. Psychol. 1992, 63, 452-459.
2. Archer, J.; Coyne, S.M., An integrated review of indirect, relational, and social aggression. Personality and Social Psychology Review 2005, 9, 212-230.
3. Archer, J., Does sexual selection explain human sex differences in aggression? Behav. Brain Sci. 2009, 32, 249-+.
4. Loeber, R.; Hay, D., Key issues in the development of aggression and violence from childhood to early adulthood. Annual Review of Psychology 1997, 48, 371-410.
5. Nettle, D.; Pollet, T.V., Natural selection on male wealth in humans. Am. Nat. 2008, 172, 658-666.
6. Helle, S.; Lummaa, V.; Jokela, J., Marrying women 15 years younger maximized men's evolutionary fitness in historical sami. Biol. Lett. 2008, 4, 75-77.
7. Miller, G.F., The mating mind: How sexual choice shaped the evolution of human nature. Anchor: 2001; p 528.
8. Arden, R.; Gottfredson, L.S.; Miller, G.; Pierce, A., Intelligence and semen quality are positively correlated. Intelligence 2009, 37, 277-282.
The reviewers’ comments on the paper, with the exception of one laudatory set of remarks, were negative. “This author knows next to nothing about the field of sexual selection or environmental psychology. In addition to displaying a poor command of the literature, the writing is second rate, the development of the argument third rate, and the conclusions trivial.” “This contribution is made moot by the widely acknowledged demolition of the field by Professor Joan Roughgarden.” “Good luck with the review board getting approval to test any of these trite conjectures.” The one positive reviewer wrote: “A breakthrough…testable hypothesis…real solutions….” and so on.
Presumably it was this reviewer’s comment that the editor relied upon in making his final decision to publish the paper.
Within a week the journal received negative comments from a couple of dozen scientists who complained that the published note had no merit. The journal recorded that the editor had stepped down after an internal enquiry concluded that he had ignored the advice of most reviewers of the manuscript.
In addition to the material I have already reproduced, a report by the journal’s board on the dismissal case came to light in my investigations.
The inquiry revealed that Scary Bastards and Sexy Wreckers had, in fact, been written by the editor himself. The laudatory review may also have been penned by his hand. When asked for comment, the editor stated that though the “all-male board” may question the ethics of his conduct, nonetheless his wife had simply loved the article. And that, he concluded, “is the name of the game.” In turn the board chose not to reveal the identity of the writer.
The editor, the board and all their progeny lived happily ever after.
The following review was very useful in preparing this tale: “Beauty and the beast: mechanisms of sexual selection in humans” by David A. Puts, Evolution and Human Behavior (2010), Volume 31, Issue 3, Pages 157-175.
Photo of Kaveri River by Randall Honold.
January 09, 2012
A Tiny Dying Such as This – Is There an Ongoing Mini Mass Extinction of Soil Invertebrates in the Midwest?
A short note in which I conjecture on a potentially vast local extinction event of Midwestern soil organisms especially of those inhabiting the leaf litter of woodlands.
In our evolutionary progression humans scrambled from the leafy treetops about half way down the length of the trunk. We now live perched between treetop and root ball on that convenient platform we call the soil. If physicists can give themselves vertiginous shivers by imagining those empty atomic spaces that constitute the seeming sturdiness of ordinary things then it is surprising that soil ecologists ever leave their homes knowing as they do how vastly crenulated, fissured, fractured and porous is the soil.
Ours is the exceptional ecological enterprise since more organisms live in the soil in those porous and interstitial lodgings than on the soil. We are not directly equipped for flight, we rarely burrow, we are condemned to walk upon the dirt until at last we may complete our descent into the ground, toppling into that large furrow excavated for our remains. A soil pore will have us after all.
If we had been just a little smaller and had migrated just a little further down the length of that primordial tree we’d be living in one of the most biologically diverse and ecologically active compartments of the biosphere. The upper ten centimeters or so of soil teems with living things. The organisms living in Earth’s thin and hyperactive rind are phylogenetically diverse, trophically heterogeneous, functionally assorted, highly variable in size, dissimilar in longevity, variegated in morphology, behaviorally divergent, adapted to different soil horizons, disparately pigmented, but are united in their reliance on death. Specifically, soil organisms are all similar in that they feed on detritus (i.e., dead organic matter). As I discussed in a recent column, collectively the action of these organisms within detrital-based food webs results in the breakdown of dead organic matter and the mineralization of organic compounds that makes key nutrients available to the living.
Examine your foot a moment. If it is like mine when shod it measures roughly 30 cm in length (yes, a foot) by about 9 cm wide (your foot, of course, may not be quite so rectangular!). A pair of feet such as these out for a stroll treads, minimally, upon the bodies of 270,000 protozoa, 135 mites, 3 springtails, and one or more large earthworms with each footfall. In places of high animal density the injury toll would be higher by several orders of magnitude. If you were sallying along a woodland path in the temperate zone these crushed critters would be representative of about 30 distinct species, of which up to half may be previously undescribed by taxonomists. Scaled up, there can be as many as 200 species of soil insects and 1000 species of soil animals in total in every 1 m2 of soil.
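Those footfall figures are simply per-square-metre densities scaled down to the area under a pair of shoes. A minimal Python sketch of the arithmetic, using only the foot dimensions and per-footfall counts quoted above (the implied per-square-metre densities are my back-calculation, not measured values):

# Back-of-envelope: what soil-animal densities do the per-footfall counts imply?
# Foot dimensions and per-footfall counts come from the text; everything else is derived.
foot_length_m = 0.30
foot_width_m = 0.09
pair_area_m2 = 2 * foot_length_m * foot_width_m   # about 0.054 m^2 under a pair of feet

counts_per_footfall = {"protozoa": 270_000, "mites": 135, "springtails": 3}

for critter, count in counts_per_footfall.items():
    implied_density = count / pair_area_m2
    print(f"{critter}: ~{implied_density:,.0f} individuals per square metre")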
These soil animals are drawn from many taxonomic groups: protozoa, nematodes, rotifers, tardigrades, springtails, mites, the preposterously adorable pseudoscorpions, insects from many orders, centipedes, millipedes, and on and on.
Conservationists need to pay more attention to soil organisms because they are a very large component of the biological diversity at many sites set aside for the conservation of species. They also play a role in the regulation of nutrient availability and this in turn exerts a large influence on a site’s biological diversity. So even if one was not as charmed by a soil mite as by, let’s say, a Northern Hairy-nosed Wombat (one of the rarest of our larger mammals), nevertheless, the functional significance of the soil mite should persuade you that it deserves a little of your attention. Soil critters are examples of what biodiversity guru E. O. Wilson once described as the “little things that run the world”.
In the last couple of years my lab has initiated investigations on the diversity of soil organisms and their significance in regional conservation efforts. We are addressing these questions in ongoing restoration projects designed to conserve biodiversity in and around Chicago (see the map below of the 100 one-hectare sites we are examining in collaboration with managers in 4 counties surrounding Chicago). These sites, in woodland, savanna and prairie habitats, are heir to the typical problems associated with open space in a major metropolitan setting – they are highly disturbed, heavily invaded, eutrophied, and fragmented. We are especially interested in learning how our current best conservation practices influence the composition of these below-ground communities and, assuming such practices are altering these biotic communities, we want to know the influence these soil critters have on ecosystem processes.
Our studies are still in their early stages. One thing is clear to us though: there is a high probability that soil organisms are going locally extinct in woodlands around Chicago at rates faster than we can study them comprehensively. This may be true especially of those living in the litter layer of partially decomposed plant material.
If soil animals hummed as they ambled, like Winnie the Pooh, the sound of their productive murmurs would have noticeably dimmed in recent years. More silent than a bird-less spring is the silence of a habitat from which inconspicuous creatures have imperceptibly slipped away.
This vast dying of tiny things in Midwestern woodlands is a conjecture at this point. We simply do not have enough information on the issue to state it definitively. But the conjecture is nonetheless backed up with some evidence. I review a few relevant points to clarify what is at stake and what the major threats to Midwestern soil biota might be.
Temperate zone soil biodiversity is “The Poor Man’s Tropical Rainforest”
We know very little about soil organismal diversity in the Midwestern United States. Taxonomic experts admit, for instance, that only a fraction of soil arthropods have been described. For mites it may be as few as 5% of all species globally, less than 50% in the temperate zone. Most groups of organisms increase in diversity from poles to tropics where life flourishes best. To put it as did Jim White from National University of Ireland to us biology students in the 1980s: life is a tropical affair. We do not know much about these so-called latitudinal gradients of soil animals though the evidence is coming in that for many groups, species diversity peaks in the temperate zone (for example, the diversity of soil mites and free-living nematodes appears to peak at mid-latitudes). The density of many soil critters also peaks in temperate regions. For this reason the community of soil organisms in the temperate zone has been referred to by Michael Usher as the “poor man’s tropical rainforest”. The significance of this is that conservationists working in the mid-latitudes have a special responsibility for the conservation of these species. In Chicago, where extensive tracts of open space are set aside for conservation and restoration purposes, we need to be confident that our conservation management is protecting cryptic biota below-ground.
Dominant invasive plant species and the creation of Interspersed Denuded Zones (IDZs) in the Forest Preserves
There are many stressors in Midwestern environments that may have a negative impact on the diversity of soil biota. These include fragmentation of habitat, anthropogenic nitrogen deposition from the atmosphere, elevated heavy metal concentrations in soils, and altered soil hydrology, to name a few. In particular I have been interested in one aspect of change in the woodlands of the Chicago region: many of the dominant invasive species in lands of conservation concern close to the city can have very high decomposition rates and this, for readily understandable reasons, can have a disproportionate influence on species loss. For example European buckthorn (Rhamnus cathartica), a rarity in its native range, has become the dominant woody plant in Chicago’s Forest Preserves. The leaf of this handsome shrub is easily decomposed and, unlike that of many of the native species it replaces, this litter is fully decomposed before it is replenished in autumn. As a consequence a series of interspersed denuded zones (IDZs) open up intermittently in woodlands. From the perspective of litter-dwelling arthropods this is like the mass clearing of a housing project. Leaf litter provides habitat for a vast diversity of species. In addition, the litter modulates the physical conditions of the upper layers of the soil, which also harbor a large diversity of organisms. Several years ago undergraduate researcher Brad Bernau examined the abundance and diversity of soil microarthropods (mites and springtails) in standardized samples of litter (255 cm2 grabs) in several woodlands and found that diversity and abundance were lower in IDZs and, moreover, that diversity stayed low even after litter was replenished. Bernau’s study needs to be conducted on a much grander scale to assess this phenomenon. In recent years PhD candidate Basil Iannone (University of Illinois, Chicago) has been developing the most comprehensive observational database yet on buckthorn and although he is not looking at soil arthropods, his work will give us unprecedented insight into the implications of this species for the environment of woodlands in our region.
Invasive earthworms accelerate breakdown of woodland floor
In addition to changes in the dynamics of the woodland floor as a consequence of shrubby invasion, these woodlands are also invaded by non-native earthworms. Worms are titans in the kingdom of decay and they contribute to the breakdown of the woodland floor and to the creation of denuded zones. The significance of worm-work is accented when one recalls that the ecological systems of the Midwest developed in the absence of these animals.
Loss of litter dwelling species
Putting this together we can say that conservationists in the US Midwest have a global responsibility for protecting the diversity of soil animals whose numbers peak in the temperate zone. Areas set aside for protecting nature need to be designed and managed in ways that achieve this aim alongside other priority species and processes. Although the evidence that the vast diversity of Midwestern soil critters is undergoing a mini local extinction event is indirect, it is enough to warrant serious investigation.
A thought that haunts me: In the 1990s I worked on the diversity of soil arthropods in Costa Rica, Puerto Rico, Hawaii and in the Southern Appalachians. The leaf litter at Coweeta Hydrologic Laboratory in North Carolina was thick and was home to an almost unimaginably vast diversity – larger than at the tropical sites. Small samples of the litter in a single 100 m2 patch of forest floor at Coweeta yielded well over a hundred species of soil mites alone. In contrast, graduate student Claire Gilmore from DePaul, who recently surveyed mites at 11 sites throughout the Chicago region as part of our 100 Sites project, found about half that number. Though the studies are not directly comparable, they should give us pause.
Humans migrated from ancient canopies, a habitat of unparalleled species diversity, to the soil surface. Now below our feet is what Belgian taxonomist Henri André called the “other last biotic frontier”. Assemblages of soil arthropods are exceptionally diverse, functionally significant and vastly understudied. For those of us who see the challenge of biodiversity conservation as saving all the pieces, our challenge has become a little muddier than before. Our new motto: Ad terram – to the soil!
Thanks to Vassia Pavlogianis who collaborated in coining the term Interspersed Denuded Zones. Funding for some of our work on soil biodiversity comes from The Gaylord and Dorothy Donnelley Foundation, Chicago.
Photo Credit: Soil microarthropods (from http://www.fao.org/ag/agl/agll/soilbiod/soilbtxt.stm), Interspersed Denuded Zone under buckthorn, Locations from our 100 Sites for 100 years project (manager Lauren Umek, Alex Ulp GIS assistant).
December 12, 2011
In The Kingdom of Decay: How a Motley Team of Subterranean Dwellers Ransacks the Dead and Liberates Nutrients for the Living
The recently dead rot much like money accumulates in banks (until recently, at least), only, of course, in reverse. A sage great-great-ancestor who had, for instance, set aside a few shillings for a distant descendant would, through the plausible alchemy of compound interest, have made that great-great-offspring a wealthy person indeed. In contrast, after death a body-heft of matter accumulated over the course of a lifetime is hustled away, rapidly at first, but leaving increasingly minute scraps of the carcass to linger on nature’s banquet table. It is as if Zeno had not shot an arrow but instead had ghoulishly slobbered down upon the departed, progressively diminishing the cadavers but never quite finishing his noisome meal. The soils of the world contain, in tiny form, scraps of formerly living things going back many thousands of years. Perhaps these are the ghosts we sense when we are alone in the woods.
Before you rake away the final leaves of the autumn season, hold one up to the early winter light. Those patches where you see sky rather than leaf are the parts that had been consumed live, nibbled away by insects or occasionally browsed by mammals. But you may have to pick up several leaves to see any consumption at all! The eating of live plant material is rarer than one might suspect. It is almost as if most creatures, unlike us of course, have the decency to wait for other beings to die before they consume them. Ecologists have wondered why this is the case, asking in one formulation of the problem “why is the world green?” At the peak of the summer season the world is mysteriously like a large bowl of uneaten salad. The world it turns out is green for many reasons but a compelling one is that plants generally defend themselves quite resourcefully. The thorn upon the rose provides more than a pretty metaphor – this shrub knows exactly what to do with its aggressive pricks. And if one can neither run nor hide nor protrude a thorn, you might manufacture chemical weapons. Crush a cherry laurel leaf in your hand, wait a moment or so, and then inhale that aroma like toasted almond. It’s hydrogen cyanide, of course. “Don’t fuck with me” is one of the shrubbery’s less lovely messages.
Gravity tugs upon the dead. Those things not already in the soil when death arrests them tend soilwards upon their demise. If this were a world where the dead remained unconsumed an unwholesome detrital pile would have accumulated upon the bottom of ancient seas until the world’s usable matter had been exhausted and life on earth would have faltered. The dead must be moved along for the living to keep moving at all. Why this must be so is pretty obvious but precisely how post-mortem remains get disarticulated and converted into forms usable for the living is still being investigated. Professionally, I am a student of death and decay, which is an accurate way of saying that I am a student of life. The world is as brown as it is green.
From this point on I will primarily consider the decay of plant material since this comprises the bulk of terrestrial biomass. Concentrating on the breakdown of leaves rather than bodies makes the story less gruesome but the processes are much the same. The consumption of the formerly living and the transmutation of organic into inorganic constituents is the ecological business of a diverse community of saprophytic organisms (etymologically derived from sapro = putrid, and phyte = plant) and of an accompanying host of small animals that feed directly upon the decay or that nibble on the saprophytic microbes involved in decomposition. The outcome of all this caliginous toil is the liberation of carbon, nitrogen, phosphorus, and other elements otherwise trapped in death’s charmless chambers. The carbon burbles through the soil and back into the atmosphere, the nutrients spill into the soil and are scrambled over by microbes and plants all obeying life’s blind will to amplify.
Earthworms, millipedes, woodlice and so forth fragment dead leaves, breaking them into smaller pieces and exposing fresh surfaces to colonization by microorganisms. Earthworms, like mobile and mucousy tubes of toothpaste open on both ends, squirt their way through the world’s putrefaction. What they squeeze out may not be minty fresh but it has its own charisma. An earthworm’s body surface, its internal workings, and its copious soil-full egesta glisten with a snotty discharge that microbes simply die for. Or rather live for since these easily degraded substances prime the decomposer microbes whose micro-feeding frenzy continues the assault on dead organic matter. Earthworms inside and out are maestros of putrescence. In their poetic moments earthwormologists (a freshly coined term) have referred to their beast of interest as “Prince Charming”, its mucus as a “Kiss”, and those microbes that get whipped up into a digestive frenzy as “sleeping beauties”.
Fungi and bacteria are royalty in the kingdom of decay. They satisfy their nutritional needs by regally exuding extracellular enzymes upon their putrescent foodstuff and absorbing the rot. The soil is a trickle-down economy of the most literal form. The bulk of global decomposition is performed in this macerating way. A bacterium, from the perspective of putrescence, is a single-celled sack of carnage constrained within a robust peptidoglycan wall. Not only can bacteria break down some extraordinarily robust materials (including cement), but some also produce powerful fungicides and thus dispatch and then consume the competition. If it were not for one small design limitation this world of ours would host little other than bacteria consuming bacteria. A scientific madman indeed would be he who genetically engineered tiny legs for bacteria. For this is their structural drawback – bacteria are relatively immobile, and like sea anemones or corals they wait for their food to come to them or for some biddable creature to transport them to their comestibles. For this reason a majority of bacterial cells in the soil are physiologically inactive, waiting, waiting, waiting for some moist dead thing to enliven them and unleash a digestive maelstrom.
One should not be deceived by the daintiness of an intermittently protruding mushroom or toadstool. These are merely wardrobe malfunctions in the great show of mouldering – unseemly exposed tips of a grand underground organism whose digestively capable filaments (called hyphae) can extend as a network over many miles… yes, miles. Fungi, in fact, are celebrated among the world’s largest organisms. The strategy is that the organism can glean a portion of the nutritional requirement in one place and other portions elsewhere and in theory can distribute the ambrosial broth across the entire cytoplasmic web. Their sheer size has led to debate about what precisely constitutes an individual organism (genetic identity is clearly not enough) but for our purposes the significant point is that more or less everywhere below us a fungus toils, relieving the dead of the elements they have little use for anymore.
The community of soil animals supported by decay is profligately diverse – enigmatically diverse in fact since many occupy themselves with the consumption of similar morsels. The application of one of ecology’s few implacable laws, competitive exclusion, should dictate that this richness be diminished. There are predators down there of course – monstrous feeders, some of which are sheathed in chitin and furnished with pincers beyond the extravagance of ordinary phantasms. On predators’ menus: nematodes, protozoa, rotifers, mites, springtails, diplurans, termites, woodlice, and amphipods. All with their distinct gustatory charms one supposes; no-one is sharing recipes. The cupboards of non-predatory soil animals are rarely bare and you’d not go hungry down there as long as your appetite is whetted for fungus or bacteria for all your days. And this is the enigma of soil diversity therefore: so many animals live on the same diet with little specialization of feeding habits. How can this be so?
Energetically, soil animals, other than worms, directly contribute little to the decay of the dead. Functionally, however, they are tremendously important. The problem with the unrefracted dead, as you will recall, is that the dead harbor essential matter required by the living; the problem with microbes is that as quickly as they liberate these essential ingredients they immobilize them again in their own burgeoning biomass. Soil animals disrupt and facilitate in equal measure. They help things along by champing down upon microbes, liberating their nourishing juices in a form available to plants. Now, one may wonder why consumption by the animals doesn’t simply lead to their accumulation in the biomass of those microbivores. If this were the case it might make it difficult for plants to get the elements necessary for their growth – all in all an unfortunate thing since it is primarily dead plant material keeping the whole thing going. Here’s what happens then. The composition of microbial cytoplasm is different from that of soil animals in one important respect. There is more nitrogen relative to the concentrations of carbon in microorganisms. Animals feed upon microbes to get their carbon fix and in doing so take in more nitrogen than they can process. To deal with this, animals excrete the excess. The bottom line: the piss of armies of small animals sustains this green earth. Nitrogen gets into soils in other ways, of course, and soil critters perform other functions, but it is hard to overestimate the influence of tiny soil animals – mites and springtails (primitive wingless insect-like critters) – in orchestrating rot.
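To see why a microbe-munching animal ends up urinating nitrogen back into the soil, here is a minimal stoichiometric sketch in Python. The carbon-to-nitrogen ratios and the growth efficiency are illustrative assumptions of mine, not figures from the text, but the logic follows the paragraph above: most ingested carbon is respired away, so only a small share of the ingested nitrogen is needed for new tissue and the rest is excreted.

# Illustrative C:N bookkeeping for a soil animal grazing on microbes.
# All numbers are assumptions chosen for illustration, not measurements.
c_ingested = 100.0        # units of carbon eaten as microbial biomass
food_c_to_n = 6.0         # microbial biomass is nitrogen-rich (low C:N)
body_c_to_n = 8.0         # the animal's own tissue is less nitrogen-rich
growth_efficiency = 0.3   # fraction of ingested carbon kept as tissue; the rest is respired as CO2

n_ingested = c_ingested / food_c_to_n         # ~16.7 units of nitrogen taken in
c_retained = c_ingested * growth_efficiency   # 30 units of carbon built into tissue
n_needed = c_retained / body_c_to_n           # ~3.8 units of nitrogen needed for that tissue
n_excreted = n_ingested - n_needed            # ~12.9 units returned to the soil in plant-available form

print(f"N ingested: {n_ingested:.1f}, N retained: {n_needed:.1f}, N excreted: {n_excreted:.1f}")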
The nitrogen and all the other essential soil nutrients liberated during the decomposition of the dead ensure that plants can respond to the sun’s energy and live for a while, to sustain the living of others, such as us, for a while, to animate matter for a while, and all that while preparing matter for its lengthy sojourn in the kingdom of decay.
In its broad strokes the story of decay has been known for some time. Darwin famously contributed to that understanding. His book The Formation of Vegetable Mould through the Action of Worms, with Observations on their Habits (1881) culminated a lifelong interest in worms. Nothing escaped his attention: the density of worms in soil, their taste preferences, and even their unusual sexual habits (their “passion” he said, “is strong enough to overcome for a time their dread of light.”) In particular though he meticulously quantified the rate at which worms convert leaves into soil, thereby increasing the fertility of the soil. In the intervening century and a third the details have been worked out. The critical role of tiny soil animals in determining the rates of decay and in liberating soil nutrients emerged from the work of the last generation of researchers. I have contributed in a very modest way to this research literature in the last couple of decades.
Big questions remain unanswered. What might the significance be of the loss of below-ground diversity for the functioning of ecosystems? Can soil communities be restored if they are damaged? Can individual plant species manipulate soil decomposers to ensure a rate of decay that favors their own growth? What are the implications of global change for decomposition? If decomposition rates increase in bogs or in the tundra, as they are expected to in most models of climate change, will the additional carbon released into the atmosphere in turn exacerbate global temperature increases? (Some folks speculate that soil carbon release will contribute to the breaching of a critical transition.)
Perhaps it is just “cowards who die many times before their deaths”, but the matter that constitutes each and every one of us has experienced death so often that we should all be able to face our end languidly. We are all shuffling along the waiting line into the Kingdom of Decay. The workings of the upper five centimeters of the Earth’s surface may repay the considerable effort it takes to learn about it. The payoff may be felt not only in contemplating our collective environmental future but in contemplating our personal demise.
All photos by Liam Heneghan except photo of soil mite (Oppiella nova) by Claire Gilmore and Liam Heneghan.
October 31, 2011
Airplanes, Asparagus, and Mirrors, Oh My!
by Meghan D. Rosen
Last month, I asked you to submit a science-y question that you'd like to have answered in simple terms. You asked about light, and mirrors, and spices and space— I was delighted by the scope of the questions posed.
This month my fellow SciCom classmates tackled three. Steve Tung glides through the mechanics of flight; Beth Mole spouts off about asparagus pee; and Tanya Lewis reflects on mirrors.
If you have more burning science questions, just post them in the comments. We'll be back next month with more answers.
And if you don't have a science question, but do have a thought or a picture to share, check out www.sharingamomentofscience.tumblr.com
How can an airplane fly upside down?
Daredevil pilots execute stunning aerobatic maneuvers― loops, rolls, spins, and more― sometimes while upside down for a long time. How do they do it? It might seem that the force keeping a right-side-up plane aloft would push a flipped plane down.
The trick is how the plane is angled in the air. Pilots can adjust the tilt to lift the plane, even when it is upside down.
You may have stuck your hand outside of a moving car and felt the rushing air push it up or down. Tilt your hand more, and that force is stronger. Turn your hand upside down and it still happens, though it might not be as powerful.
Plane wings, flipped or not, work the same way― tilt them up more, and air lifts the plane more. There are drawbacks and limitations, however. Higher angles cause more drag, slowing the plane. Tilt too far and the plane loses its aerodynamic properties and falls like a rock.
But not all airplanes can fly upside down. Some depend on gravity to feed fuel to their engines; some would break under the different stresses of flying inverted. Stunt airplanes use specially designed wings, bodies, and engines to be more agile, more durable, and more versatile.
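For readers who want the compact, textbook version of the hand-out-the-window picture (my addition, not part of Steve Tung's 200-word answer), the usual shorthand for lift is

\[ L = \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_L(\alpha) \]

where \(\rho\) is air density, \(v\) airspeed, \(S\) wing area, and \(C_L\) a lift coefficient that grows roughly in proportion to the angle of attack \(\alpha\) up to the stall angle. Flown inverted, a wing can still generate upward lift provided the pilot pitches the nose so that \(\alpha\), measured against the oncoming air, stays positive; push \(\alpha\) past the stall angle in either orientation and the lift collapses.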
Steve Tung once dreamed of designing airplanes and rockets. He now dreams of pithy, memorable prose. (He received a bachelor's degree in mechanical engineering with a concentration in fluid mechanics from Cornell University) Twitter: @SteveTungWrites
Many years ago Mel Brooks asked the one question which had haunted him all these years: "Why, after I eat a few stalks of asparagus, does my pee pee smell so funny?"
It wasn’t until recently that scientists started to unravel this odorous riddle. The answer lies with both the whizzer and the whiffer.
When we digest asparagus, its sulfur-containing compounds can break down into stinky subunits that strike as early as 15 minutes after eating. Although the culprit behind the smelly bathroom visits hasn’t been caught, the most likely suspect is methanethiol.
But in bathroom exit surveys, only some asparagus eaters say they can smell the excreted evidence.
In 2010, scientists went digging through a database that linked genetic data with survey data including answers to questions like ‘Have you ever noticed that your pee smells funny after you eat asparagus?’
They found that people who have particular DNA changes around a set of genes responsible for olfactory receptors—molecular smell detectors in your nose—are more likely to be able to smell asparagus pee.
So if you can’t smell asparagus pee, it doesn’t necessarily mean you can’t make it.
Last year a different set of scientists waved pee vials under people’s snouts to sniff out who could make asparagus pee and who could smell it.
They confirmed that some schnozzles can’t smell asparagus evidence. But they also found that some people don’t seem to make it either, at least not in detectable amounts.
Since scientists haven’t pinned down the stinky subunit responsible, they can’t say for certain if it’s not there at all or just at really low levels that we can’t smell.
For now, it seems likely that our abilities to make and smell asparagus pee probably exist on sliding scales, and whether or not you can smell it seems unrelated to whether or not you can make it—so, continue to ponder in the potty.
Beth Mole earned her PhD in microbiology at UNC Chapel Hill studying a potato pathogen and did postdoctoral research on antibiotic resistant bugs at UNC's Eshelman School of Pharmacy. She started writing about science in 2008 for Endeavors magazine and is currently enrolled in the science communication program at UC Santa Cruz.
When you look in the mirror and point your right arm out to the side, your reflection in the mirror points its left arm. But when you point up above your head, your reflection doesn’t point to its feet. Even if you lie on your side and point your arm out, the mirror seems to “know” to switch which arm your reflection points with, even though that arm is now up or down relative to the ground.
What’s going on? Actually, mirrors don’t reverse things left-and-right, they reverse them in-and-out. Imagine casting a rubber mold of yourself, then turning the mold inside-out. Your reflection would face you, but your arms would appear to switch sides.
Another way to think about it is this: write something on a piece of semi-transparent paper and hold it up to the mirror. The reflected writing is, of course, a mirror image. But now turn the paper around so the writing faces you, and look at the reflection in the mirror. The writing is the right way round again. The reflection is like a stamp, making a “light print” of the writing on the page.
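If you like seeing the geometry in numbers, here is a tiny sketch (my own illustration, not part of Tanya's answer) using a reflection matrix: a mirror lying in the x-y plane flips only the in-and-out z coordinate of a point and leaves left-right (x) and up-down (y) alone.

# A flat mirror in the x-y plane reverses only the in-and-out (z) axis.
import numpy as np

mirror = np.diag([1, 1, -1])                 # keep x (left-right) and y (up-down), flip z (in-out)
right_hand_tip = np.array([0.4, 1.2, 0.3])   # a point: to your right, above you, in front of the glass

print(mirror @ right_hand_tip)               # [ 0.4  1.2 -0.3]: still to your right, still up, now behind the glass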
Tanya is a graduate student in the science communication program at UC Santa Cruz. She is an incurable science geek with a penchant for storytelling. She can be reached at tanlewis (at) gmail (dot) com or on twitter @tanyalewis314
October 10, 2011
The Quintessential North American Reptile
Article and photos by Wayne Ferrier
I had that unmistakable feeling of being watched. It was a sunny autumn afternoon, and I was helping my father dig up an old drainage ditch at their Central Pennsylvania home. I was pretty far down in the ditch, pitching gravel over my shoulder onto the bank above me. I paused and looked around.
It didn’t take long to find out who was spying on me. A common garter snake, Thamnophis sirtalis, lay curled up on the bank, watching me with an intensity that I would have to say bordered on fascination.
A curious thing about the encounter was that the snake was half buried in gravel. She was too enchanted watching me work to worry much about being buried in stones.
No doubt I was excavating a favorite hunting ground. Digging up and replacing the old drainage system, I was uncovering a lot of salamanders (Eurycea bislineata), most certainly a staple in this particular garter snake’s diet.
I do not know how long she had been there, inches from my head. For a moment we remained motionless, eyeing one another, but eventually she lost her nerve and darted off towards the stone wall. Slick yellow and brown lateral stripes proved to be excellent camouflage gliding through a background of burnt grass and autumn leaves, and she quickly disappeared from view.
I put down my shovel and took a coffee break. This encounter sure brought back memories. There have always been garter snakes in my parents’ yard, and as preschoolers we used to play here at this very same wall. My friends and I would often get that eerie feeling that someone was watching us. Often it turned out to be one particularly curious garter, and in our ignorance we would chase him back to the stone wall screaming, wailing, and hurling rocks. But within an hour he would be back to continue his espionage; we’d get that feeling and in panic try to unload more punishment on the poor creature.
Actually he was too fast for us and we never did catch him. What makes garters act like that? Most snakes go out of their way to avoid people, so choosing to live amongst us, hanging out during daylight hours, and even engaging in people-watching seems rather oddball behavior for a snake.
A partial explanation is that garter snakes rely heavily on their vision when they hunt. This does not mean, however, that their vision is that great—a garter snake cannot see very well unless what it is looking at is moving. If the prey stays perfectly still, the snake may not detect it. The slightest movement, however, can give the prey away. Garter snakes are hypersensitive to the very slightest twitch. So things that move fascinate them.
In addition, it seems that garter snakes like to eat all the time, at least compared to many snakes that spend long periods between meals, and the thought of food sometimes overpowers their flight mechanism. I once caught and temporarily kept a fully mature garter at my country home in upstate New York and had her feeding out of my hand within twenty minutes. I used to play a game with her, where I’d wiggle my finger as if it were a worm and she’d get all excited, flick her tongue and start pursuit; that is, until she figured out that my finger was attached to me. That ended the game, and she wouldn’t fall for the trick anymore.
She had no interest in leaving captivity until the autumn leaves started falling, and she knew she had to go hibernate. That’s when it was her time to put one over on me. In order to feed her I had to open the lid at the top of her terrarium. If she wanted fed, which was like several times a day, she’d rise, balancing her belly on the glass and propping up on the tip of her tail. I’d hover an earthworm just above her head. She’d check it out for a few minutes, and then quickly snatch it from my hand. One October day she took a particularly long time making the strike and I was getting bored; eventually my attention started to drift. She took the opportunity and darted past me and out of the terrarium! Was that planned? Sure seemed like it, but I’ll leave that to evolutionary psychology to debate. I will say this though: she tried the trick several more times, but when I wouldn’t fall for it again, the ruse ended.
Garter snakes seem to be smart. Most snakes detect prey primarily by olfaction using their Jacobson’s organs. Pit vipers, such as rattlesnakes and copperheads, also have heat sensors. Compared to these snakes, garters are more visual. If movement is sensed overhead (e.g., a hawk), it is to be avoided; but if the movement is perceived at or below eye level (e.g., a frog), it may be pursued, analyzed, and perhaps eaten—unless it’s my finger. My pet snake once accidentally bit my finger during a sloppy strike aimed at a worm. She knew immediately she had missed and hit the wrong target. I may be over-anthropomorphizing a bit, but she actually appeared to be embarrassed, genuinely sorry, and ran and hid. When she saw I wasn’t the least perturbed by the mishap, she came back and finished the worm. She never missed after that and I was never bitten again.
Back to the story of digging the ditch and the garter at the stone wall: I’m sure that, watching me work, it didn’t take her long to figure out that I was way too big to eat. Down in the ditch below her, I wasn’t particularly threatening, but when I noticed her and stood straight up, it was a different matter, and that was when she decided that the show was over and maybe the best thing to do was skedaddle.
Hands down the garter snake is the dominant reptile in North America. It has the widest range and is the most common reptile found on the continent. The genus Thamnophis (garter and ribbon snakes) can be found anywhere from southern Alaska to the Maritime Provinces. The common garter, Thamnophis sirtalis, has the most northerly range of all North American reptiles, going as far as the border between Alberta and the Northwest Territories. Every one of the lower 48 states has at least one species of Thamnophis, and a few species live as far south as Costa Rica. Mountains, plains, deserts, swamps, even cities—it doesn’t matter—as long as there is suitable food around, garter snakes can usually be found.
Ideal habitats can accommodate as many as 10 snakes per acre, and several species can coexist in the same area by hunting different prey and being active at different temperatures. Add to this their habits of frequent feeding and daytime activity, and it is not surprising that a garter snake is the first—and perhaps only—snake that many North Americans may ever encounter in the wild.
Most Thamnophis are opportunistic, varying prey according to what is available. Earthworms are their favorite food, amphibians are their second choice. Sometimes they eat smaller snakes, and may even resort to cannibalism. Insects are consumed when abundant in the fall. Occasionally Thamnophis kill and eat rodents (e.g., voles, mice, chipmunks) or nestling birds. Their versatile diet may also include fish and crustaceans, and even carrion.
Perhaps the most extreme example of the garter snake’s love for food is their taste for dangerous delicacies. In some high-end sushi restaurants you may order fugu, a dish prepared from the extremely poisonous pufferfish or blowfish. A skilled chef knows how to prepare the dish by removing the poison; and if you buy it you must have a lot of trust in the skill and integrity of the chef. If a garter snake were to be fed a bad batch of fugu, it might not notice. The snakes have evolved resistance to blowfish poison (tetrodotoxin), because they regularly eat rough-skinned newts (Taricha granulosa), which also secrete the toxin. The newts and snakes have been engaged in an evolutionary arms race, and the last time I checked, it seems that the newts are losing. They just can’t make themselves toxic enough to dissuade the garters from eating them.
Generally, Thamnophis capture prey with their mouths and swallow it alive to slowly suffocate in the snake’s digestive tract. It is thought that the saliva of some species of Thamnophis may be mildly toxic. Some garters may resort to constriction when subduing rodents—western plains garters have been seen doing this.
When a garter snake first ventures out to hunt, like any other snake, it flicks its forked tongue trying to locate prey. A young snake analyzes the scent substances given off by potential prey before it will strike, but an experienced snake relies more on visual cues. This skill is especially useful when hunting frogs and toads. Normally diurnal, many species prowl at night during the anuran breeding season. At this time the smell of frogs may be ubiquitous and relying on scent alone would not be productive for the snake. So it lies in wait and when it sees a frog or toad move it strikes. A snake may be right on top of a frog, but as long as the frog remains motionless, the frog will go undetected. Now the frogs are well aware of this phenomenon—they themselves can’t see their own prey very well unless it is moving—so when the snake is around they keep still. Western ribbon snakes (Thamnophis proximus) have been seen solving this problem by systematically striking the vegetation, obviously smelling the frogs but unable to see them. Striking the grass disturbs the frogs to the point that they lose their nerve and make a break for it. The snakes were actually flushing the frogs out.
Many Thamnophis species are as comfortable in the water as they are on land. Sometimes they maneuver through the shoreline brush or climb trees and overlook the water from that vantage point. Ribbon snakes are frequently found among the reeds and cattails in shallow water—an ideal ecological niche, where land, water, and sky come together. The reeds offer good cover and usually abound with insects, fish, frogs, snails, and leeches. But the reeds also offer death. Those who like to eat aquatic serpents also hang out in the reeds; wading birds, predatory fish, and ophiophagous snakes (cottonmouths, for example) are among the ribbon snake’s worst enemies. When a ribbon snake comes across the scent of a predatory snake, it leaves the area immediately. If pursued, ribbons will sometimes dive into the water and submerge.
Many Thamnophis are generalists making use of a variety of habitats and prey.
Generalists are more versatile and less susceptible to starvation if one food source is scarce and this is a primary reason for the success of this reptile. But they have numerous enemies—mostly other snakes, large birds, and mammals such as opossum, fox, mink, and skunk. Young snakes may also have to contend with large toads and frogs. When faced with danger, the snake either tries to flee or conceal itself. Which one may depend partly on the weather. On chilly days a garter snake cannot move very fast and may not even attempt to flee, knowing full well that it would probably lose the chase. On warmer days a garter may even become aggressive but may quickly switch to passive measures if touched. Thus aggressiveness may be only a bluff—but don’t take this for granted! Sometimes if handled they emit a strong musky odor, which makes you want to put them down.
In the autumn and early spring you often encounter a garter snake en route to or from its winter hibernaculum. In some areas this may be as far as 15 km from where you find them. They often find other snakes to hibernate with. I’ve learned a lot about snakes since my original experiences with garter snakes when I was a kid. My autumn encounter with the spy might have been a strange experience had it been any other kind of snake, but it was a Thamnophis—the quintessential North American reptile. It was not so strange; there are a lot of them here. Perhaps the old stone wall has been a Thamnophis hibernaculum since I was a kid. I finished my coffee and my reminiscing, went back to work in the ditch, and awaited her return. But she did not return that day; the last I saw of her had been her stripes blending into the leaves and grass as she slipped into the recesses of the stone wall.
October 03, 2011
Ask a Scientist
by Meghan D. Rosen
Each year, the Science Communication program at the University of California, Santa Cruz accepts 10 students and, for nine writing-intensive months, teaches them how to become better science journalists. This year, I am happy to say that I am one of the 10. My nine fellow classmates come from a wide variety of scientific backgrounds (from marine biology to mechanical engineering to neuroscience). We have a self-proclaimed ‘fish guts scientist,’ a potato pathologist, a reality TV star with survival skills (from the Discovery Channel’s, ‘The Colony’), a raptor surveyor (aka ‘hawk lady’), and an agricultural writer who grew up on a dairy farm.
It’s a diverse bunch of people, with a broad set of experiences, and the best part is: they all like to talk about science. I think I’m in heaven.
One of our recent assignments was to answer a classmate’s question that was about (or loosely connected to) our field of study. The constraints: we couldn’t use any jargon in the answer, it had to be clear to a non-scientist, and we had to do it in 200 words or less. Here are some of the question ideas we kicked around: Why does a golf ball have dimples? How does a submarine judge depth? Why do tarantulas migrate? How does the brain form memories?
I liked the challenge – answering a could-be complicated question with clarity – and the idea of directly connecting scientists with people looking for answers to life’s curiosities.
So, this month, I’m trying an experiment for the readers of 3QD. Do you have any burning science-based questions that you’d like answered? Do you want to know how something works? Is there anything that you wish was just explained more clearly? If so, leave a question in the comments. I’ll solicit answers from my classmates, and get back to you next month. To help get us started, I’ve included my own question and answer below (and yes, I stuck to the word limit – I even had two words to spare!).
Question: Why are doctors now recommending fewer screenings for breast cancer?
The idea behind breast cancer screening is simple: the sooner you find a lump, the sooner you can fight it. Until two years ago, the standard of care was frequent screenings and aggressive treatment. We were constantly on guard (yearly mammograms) and ever ready to wage surgical war (lump or breast removal). Intuitively, it made sense – root out the cancerous seed before it sprouts. Early detection should save lives, right? Not necessarily.
In 2009, an independent panel of experts appointed by the U.S. Department of Health and Human Services found that mammograms didn’t actually cut the breast cancer death rate by much: only about 15 percent. But we were screening more women than ever. So why were so many people still dying?
The problem isn’t detection: mammograms are pretty good at pinpointing the location of an abnormal cell cluster in the breast. But not all abnormal cells are cancerous, and mammograms can’t tell the harmless ones from the dangerous ones. In other words, a lump is not a lump is not a lump.
Today, doctors are divided. Some think excessive screening forces thousands of women to undergo unnecessary surgeries. Others think one life saved is worth the cost.
September 05, 2011
A Gut Feeling
by Meghan Rosen
Are you in the market for a healthy, stable, long-term relationship? Turns out you may not have to look further than your gut. Or, more specifically, the trillions of microbes that inhabit your gut. Yes, you and a few trillion life-partners are currently involved in a devoted, mutually beneficial relationship that has endured the test of time. Don’t worry though, they’ve already met your mother.
We’re exposed first to our mother’s microbial flora during birth; these are the pioneering settlers of our gastro-intestinal (GI) tract. In the following weeks our gut becomes fully colonized with a diverse array of bacteria, viruses, and fungi. Although our gut microbes are generally about an order of magnitude smaller in size than human cells, when counted by the trillions, they add up.
In fact, these intestinal interlopers (along with their fellow skin, genital and glandular neighbors) can account for up to 2% of a person’s total body mass. That’s right, a 175 lb man could be carrying more than 3 pounds of microbes in and on his body. Most of these microbial tenants, however, are crowded together in the lower part of his large intestine: the colon.
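As a quick back-of-the-envelope check of that figure (my arithmetic, not the original studies’): 2% of 175 lb is 0.02 × 175 lb = 3.5 lb, which is where the “more than 3 pounds” of microbial passengers comes from.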
If we travel up the GI tract a bit and inspect the contents of the small intestine, the concentration of microbes drops nearly a billion-fold; compared to the colon, it’s practically germ free. (Although these germs are harmless when living in the gut, if the intestinal lining is breached, they won’t pass up an opportunity to spread to and wreak havoc in other areas of the body.)
While it’s easy to see the lifestyle advantages for a colon-dwelling bacterium (warm food, cozy housing, nearby relatives), the benefits and health implications for humans are not as well understood. Do we gain anything from toting around these vast microbial populations or are we merely a free meal ticket?
We know from studies in mice that gut microbes can influence health and metabolism. In fact, mice that have been delivered by cesarean section into sterile environments (and therefore lack the usual complement of intestinal microflora) are not as healthy as siblings that are birthed normally. These germ-free rodents have defective GI and immune systems compared to their microbe-ridden brothers and sisters.
While it’s clear that an animal’s gut microbes are a valuable part of a healthy intestine, their role in human metabolism and body weight remains ambiguous. We do know, however, that these microbes can enhance digestion. Normally, anything a mammal cannot digest passes through the GI tract unscathed; the energy present in this food is ‘locked up’, and therefore excreted. Obese mice, however, hold a few extra keys to calorie consumption.
The gut microbes of obese mice contain a vast array of genes that encode uncommon digestive enzymes. These enzymes help break down an expanded set of caloric compounds, and allow the mice to extract nutrients from otherwise indigestible food substances. Consequently, obese mice have fewer calories remaining in their feces than their slimmer relatives.
If obese mice have a different cohort of intestinal bacteria with super-digestive abilities, is the same true of obese humans? Is there a link between different body types and different gut microbial communities? Researchers at the Center for Genome Sciences at the Washington University School of Medicine in St. Louis, Missouri are attempting to answer these questions by comparing the identity of these gut community members, or the ‘gut microbiome’, in groups of differently sized people. Jeffrey Gordon’s lab examined fecal samples from 54 sets of adult female twins and sequenced the DNA of each and every microbe that passed through the volunteers’ intestines.
Although the majority of the twins selected for the study were identical, nearly every pair of sisters had one drastic physical difference: their body mass index. Gordon’s team of researchers specifically chose twin sets with one obese and one lean member to help understand the role of the gut microbiome in human obesity.
Although most gut microbial genes were shared between all volunteers, a significant portion of microbial genes varied from person-to-person, particularly among the obese and the lean. For instance, the obese member of a twin set generally had a gut microbiome loaded with extra genes involved in fat, carbohydrate, and protein metabolism. Are these mighty microbial metabolizers so efficient at squeezing calories from food that they actually contribute to their landlord’s obesity? Maybe, but we can’t say for sure just yet.
We do know that our gut is a kind of multi-species digestive super-organ, and that changes in the intestinal microbiome are associated with vastly different body types. In fact, Gordon’s lab has shown that you can actually fatten up a lean mouse by feeding it microbes from the guts of an obese peer. Although it’s still unclear exactly how the organisms in our intestines contribute to obesity, this research provides something for follow-up studies to chew on. Is it possible then to lose weight by dining on the gut bacteria of a skinny friend? Perhaps. Just don’t try it at home.
1. Bajzer, M and Seeley, RJ (2006, December). Obesity and gut flora. Nature, 444, 1009-1010.
2. Hord, N. G. (2008). Eukaryotic-Microbiota crosstalk: Potential mechanisms for health benefits of prebiotics and probiotics. Annual Review of Nutrition, 28, 215-31.
3. Ley, R. E., Turnbaugh, P. J., Klein, S., & Gordon, J. I. (2006). Microbial ecology: Human gut microbes associated with obesity. Nature, 444(7122), 1022-3.
4. Othman, M., Agüero, R., & Lin, H. C. (2008). Alterations in intestinal microbial flora and human disease. Current Opinion in Gastroenterology, 24(1), 11-6.
5. Sekirov, I., & Finlay, B. B. (2006, July). Human and microbe: United we stand. Nature Medicine, 12(7), 736-737.
6. Turnbaugh, P. J., Hamady, M., Yatsunenko, T., Cantarel, B. L., Duncan, A., Ley, R. E., et al. (2009). A core gut microbiome in obese and lean twins. Nature, 457(7228), 480-4.
7. Turnbaugh, P. J., Ley, R. E., Mahowald, M. A., Magrini, V., Mardis, E. R., & Gordon, J. I. (2006). An obesity-associated gut microbiome with increased capacity for energy harvest. Nature, 444(7122), 1027-31.
August 22, 2011
The Existential Equation – The Irish Pre-famine Population and the Dilemmas of a 7 billion person world
The Irish Famine of 1846 killed more than 1,000,000 people, but it killed poor devils only. --Karl Marx, Capital Volume 1 (1867)
Behold the potato chip! It’s the perfect substrate for immersing in delicious oils, an adroit vehicle for conveying toothsome flavors to the mouth. If one eschews the oils and the suspicious flavorings, the potato is almost a complete meal in itself. Mashed along with a little buttermilk, it fueled, as is claimed with some hyperbole of course, the construction of a British empire. Viewed with a squint, it is as if the Irishman with spade in hand was the subterranean potato tuber’s extended phenotype – another starchy being anxiously grubbing back into the dirt. Hundreds of thousands of potato-fed and buttery Irishmen left for Britain during the 19th century to find employment as navvies, and there they dug ditches and canals and built a railroad system. And during and after the Great Potato Famine (1845-1849) millions more left for North America and elsewhere.
For me this is personal. Because of the enormous productivity of the potato – an acre of potatoes producing more calories than thrice that of grain – I am now living in the US. I am, if my assessment is correct, the very last of the post-potato-famine migrants from Ireland. As soon as I left (in 1994), the exiles commenced their return, and though migration out of Ireland has begun again it is no longer, it seems to me, the same demographic pattern initiated by the failure of the potato crop.
My principal concern here is not the potato nor the Irishman nor the empire: I am interested in revisiting the demographic implications of events surrounding the Irish Potato Famine; examining the way in which economic and social historians have assessed the population growth running up to the famine, before the horrible consequences of the potato failure unfolded. Let me make my main point here: nothing could seem simpler to come to grips with than the pattern of population growth in the century leading up to the Irish famine, the increasing reliance of the poor on a single crop, and the subsequent crash of the population after the failure of that crop. And yet despite the beguiling but horrifying simplicity of the pattern, almost no aspect of the story is as easy to explain as it may seem. To keep this post to a modest length I am discussing only the debates over the causes of population growth before the famine, and will post follow-up comments on my blog in the coming months about the population disaster that followed the potato failure – another complicated story.
Before assessing the pre-famine population patterns, a word or two on the potato itself. The potato (Solanum tuberosum) is an annual herbaceous dicotyledonous plant that produces a carbohydrate- and protein-rich edible tuber (underground storage stem). As an annual herb, the potato has much in common with several weedy species. The plant is a member of the family Solanaceae and thus is related to several other cultivated plants: tomatoes and peppers, for instance. Indeed, an Irish person outside a pub with a potato chip (or “crisp” as it is called in Ireland) in one hand, and a cigarette in the other, is enjoying the dubious benefits of two members of the Solanaceae. Potatoes were first domesticated in the highlands of Bolivia and Peru and were introduced into Europe by Spanish explorers in the late sixteenth century. The potato made the return journey to the New World from Europe in 1791, being supposedly introduced to the US from Ireland.
The climatic conditions that make Ireland a slight misery to live in permit potatoes to thrive – cool temperatures, overcast skies and perpetually moist soils are ideal for the crop.
The potato follows rice, wheat, and corn in supplying calories to the human population. Besides being scrumptious, the potato supplies a good balance of the essential amino acids. It is also a source of B vitamins and vitamins D and C. The potato also contains a host of micronutrients, most of which are found close below the skin – I encourage you all to eat your spuds with their jackets on. If you do peel them, the skins can be fed to the pig that you might be fattening up to sell for rent (or at least in pre-famine Ireland this would have been the recommendation and was the standard practice). The high productivity of potatoes on tiny patches of land contributed to the crop’s rapid adoption into Irish agricultural practice and diet. At a time of rising populations the potato was the perfect crop – the higher the population the greater the dependence on the potato, and the potato in turn facilitated a further rise in population. Both species contributed to each other’s success. And the collapse of one led to the collapse of the other.
There is little in dispute about the proximate cause of the Irish post-famine population decline – the almost exclusive dependence of a relatively vast Irish population on a single crop whose failure resulted in starvation, death, and emigration. Beyond these horrifying and indisputable generalizations there is little agreement on other issues associated with the Great Famine. The exact contribution of flawed land policy and landlordism in the run-up to the famine, the degree to which the political response contributed to exacerbating or relieving the famine, even, to some extent, the estimates of deaths (ranging from half a million to well over 1 million), are all still contentiously debated. The rise of the Irish population before the Great Famine, the main concern of this little piece, has also attracted some scholarly attention, and though the pattern seems comparatively straightforward, the theories explaining the demographic situation are also contentious.
So, the population of Ireland in the year 1800 was 3.8 million. The data are not completely reliable, but the patterns are very clear. On the eve of the famine it had risen to an incredible 8.1 million! The accompanying graph, based upon the census returns of 1821, ’31 and ’41, illustrates just how rapid this was (I reconstructed these based upon the census returns for Ireland that can be found at www.histpop.org). Irish growth rates were in fact the highest in Europe at the time, though just before the famine the growth rate seems to have declined to 0.9% per annum. It was as if the population bow had been drawn to its limits and the arrow of disaster was poised for release.
This rapid period of population growth was not just an Irish phenomenon; it had occurred throughout Europe, though at comparatively slower rates. To represent such growth mathematically requires little in the way of computational finesse: populations grow when birth rates exceed death rates. Despite the delicious tractability of the basic population model – after all, it can be expressed as ∆P = B – D (change in population = births – deaths) – the genius of the human is to transform the simple factors B and D into everything that gives our lives meaning. All that’s beautiful and terrifying is embedded in this most existential of equations. Population grows when any combination of events results in birth being more prevalent than death; so even if mortality rates increase, as long as more kids are born into the misery, populations continue to grow.
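To put rough numbers on this, here is a back-of-the-envelope calculation of my own (treat it as illustrative only: it assumes steady compound growth over the roughly 45 years between 1800 and the eve of the famine, and uses the round census figures quoted above, which, as noted, are not fully reliable). The average annual growth rate implied by a rise from 3.8 to 8.1 million is

r = (8.1/3.8)^(1/45) - 1 ≈ 0.017,

or about 1.7% per annum averaged over the whole period, comfortably above the 0.9% per annum reported for the years immediately before the famine, which is consistent with growth having already slowed as the famine approached.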
So what was going on in Europe in the 18th and 19th centuries that resulted in rapid population increases? This period of rapid growth, though it was not the first period of growth in human population history, is significant in being the one that marked the beginning of the modern population spurt whose outcome is today’s global population. Between the end of the 18th century, when the global population was 1 billion, and today, the world’s population has ballooned to 7 billion. The most plausible hypothesis concerning the origins of this contemporary growth spasm is that during the period mortality rates declined, and though in some cases birth rates also declined, mortality rates, crucially, declined at a faster rate than birth rates. This difference between mortality and fertility opened a “gap” between births and deaths, and the population as a consequence increased. To be clear, postponing death, which is inarguably happy news, has consequences.
The reduced prevalence of infectious diseases was a main contributor to the decline in mortality. Contributing to the decline in infectious diseases were improved diets, sanitary reform and an altered relation between infectious agents and the human host. Many demographers are adamant that the declining mortality during this period was not related to medical genius. Medical knowledge at the time did not extend to a comprehensive understanding of the major infectious killers of the day. There is evidence for a spontaneous decline in some of the historical mass killers – scarlet fever, for instance – but this is not enough to explain the sharp decline of mortality.
The evidence of a role for improved nutrition in contributing to the mortality decline is solidly founded. The better fed and healthier Europeans of the 19th century derived their good fortune from newly emerged agricultural technologies, ones based upon better conservation of soil fertility and more sophisticated ecological knowledge of crop diversity. The diversification of crops was important – besides averting the sort of disaster awaiting Ireland in the 1840s, new crops in Europe ensured a reliable supply of food year round. The potato was the absolute king of the root crops, but turnips, beets, carrots and parsnips were also planted. These roots also provided feed for livestock in the winter, increasing the amount of meat available for consumption or sale. It seems a little obvious to underscore it, but improved food quality and greater availability of calories are crucial to sustaining a population – and even if these factors don’t inexorably lead to population growth, they are necessary for it. Put more positively, reduced mortality brought about by better food quality and availability contributes to population growth as long as birth rates are relatively unaffected.
Now, speculation about the factors contributing to the growth of populations during the 18th century was developed primarily through a detailed examination of the records of births and deaths in England and Wales, but do the patterns hold for the remainder of Europe? Peter Razzell, a noted population historian, remarked that Irish population growth lagged for almost a century after the potato became a commonplace crop in that country, and thus cautioned us not to expect the generalizations to hold true outside of Britain. The case that something quite different was going on in Ireland from a population perspective was systematically made by Ken Connell, a professor at Queen’s University Belfast, over 60 years ago. Life in Ireland was so different from Britain that surely it could not be generated by the same demographic mechanisms. Since several of the factors that reduced mortality in Britain may not have applied in Ireland, Connell argued that Ireland’s population grew by the only other way populations can – increased fertility. Prior to the 19th century marriage had been postponed until the death of the father, by which time “the son was no longer a stripling” – thus later marriages were the norm. As the population grew in Britain, the incentive for Irish farmers to provide food for the British market grew, and this along with other more local Irish factors provided an incentive for the further subdivision of land holdings, which did indeed become more prevalent. All of this was fueled by the productivity of the potato! A postage-stamp sized farm worked by a manling, his child-bride, and their growing brood could be sustained by potatoes. Since they were married longer, Irish women were exposed for a longer period to childbearing – though the evidence is equivocal on whether this did in fact translate into higher fertility among Irish women.
From this perspective the potato’s main crop was that of healthy cheap labour, and this inexpensively produced Irish laborer allowed landlords to subdivide their properties and maximize their rents.
Professor Connell’s case for Irish exceptionalism seems less secure these days than it did back when he was writing. Connell had been a pioneer of Irish social and economic history and chaired his department at Queen's for a while. A querulous sort, he apparently did not get along well with his colleagues and was removed from his leadership role. He died on 26 September 1973, aged fifty-six, embroiled in a number of controversies and “exhausted and dispirited”. Michael Drake’s paper, Marriage and Population Growth in Ireland, 1750-1845, published in 1963, challenged the statistical basis of Connell’s account, and though Connell’s thesis remained frequently cited by other scholars, it was often to caution against or at the very least complicate his conclusions. In 1974 Drake wrote an obituary for Connell in which he praised him for writing “the first major study of the determinants of population growth in pre-industrial societies to emerge since the 1920s”, and credited him with initiating a much closer scrutiny of this phenomenon. The major criticism, he said, was that Connell “generalised too widely”. Drake concluded on this sad note: “Certainly in all the years I knew him he budged but little on any issue. Perhaps if he could have done so on those often seemingly trivial non-academic issues which troubled him so much, especially in recent years, he would be with us still.” On a cheerier note, Joel Mokyr of Northwestern University (whose office is a few blocks from where I write) and Cormac Ó Gráda of University College Dublin (whose office was a few buildings away from the lab where I worked in the late 1980s) concluded a more recent review of Irish population history with the comment: “Post-famine demographic patterns have fascinated and puzzled researchers too, but it must be said that as yet they have not produced a Connell. As for the period surveyed here, three decades of debate have not exhausted the questions raised by Connell.”
In more recent analyses the point is conceded that, despite anecdotal evidence to the contrary, the age of marriage in Ireland was not impressively early and was closer to the norm for Europe. There is, however, some evidence that marital fertility was greater in Ireland than in Britain. Though there is little hard data to base it upon, the Irish seem not to have inclined towards the use of any contraceptive strategies even when they knew about them. Charmingly, Irish women of that time were complimented for their chastity and marital fidelity. To add to the growing thicket of factors contributing to the rapid growth of the Irish population before the famine, Jona Schellekens, of the Hebrew University of Jerusalem, suggested that improved nutrition may explain the rise in marital fertility, but also that changes in “the pattern of breastfeeding linked with potato cultivation provide a plausible hypothesis.”
Can the Irish Great Famine be used as a microcosm for contemplating the potential fate of the world’s population as it surges past 7 billion in the months ahead? After all, as was true in Ireland before the famine, the world has run up its population impressively since the early 1800s and will, a mere couple of centuries later, reach 7 billion this autumn. Are we heading, as many environmental thinkers have implied, for a collapse? Was Ireland's famine a predictable Malthusian disaster as some have claimed – a case of a population outstripping its resources? I leave these as open questions for now, as I suspect in the months ahead we will be encouraged to reflect upon them. There is a cottage industry of speculation about the degree to which the Irish situation was a Malthusian disaster (I’ll review some of this on my blog). For now, all I want to say is this: despite the seeming tractability of population issues (growth = births – deaths), when dissecting the particulars of any one story – in this instance, the simple pattern of population growth on a small damp island before a major famine – it is rarely possible to fully understand the mechanisms driving the pattern. This is precisely because growth models embed such existential matters; motivations lofty and iniquitous, deliberate and capricious, contribute to the births and deaths of humans. And we are a long way from understanding the human condition, or its reflection in the patterns of our births and deaths.
A final thought: Quite a few years ago I invited some close friends over to watch Jude, Michael Winterbottom’s version of Hardy’s novel Jude the Obscure. I had read the book with enormous relish as a teenager in Dublin and had remembered it for its compelling tale of Jude’s desire to be a classics scholar, thinking it in some ways to reflect my own situation. I urged this tale of scholarly ambition on some dear friends. In my callowness I had forgotten a central scene where Jude’s disturbed son murders Sue’s (Jude’s beloved) two children and then hangs himself. The note he leaves for Jude read, "Done because we are too menny” [sic]. As this horrifying scene unfolded on the TV one of our guests started to quietly sob and after a while her husband was obliged to carry his inconsolable wife off to their car. All I could say in pitiable defense was that I had forgotten.
Not to be too melodramatic, but in the months ahead when the now staggering size of the global population is discussed, and we are again invited to contemplate whether we are globally too “menny”, recall that though populations are stabilizing in some regions, they are not in other, generally poorer, countries, and that the patterns of population growth and decline are only approximately understood. We tend not to be very good at projecting the numbers out too far into the future. Those who fear that the population bow is being pulled globally tight and that disaster is being drawn from the quiver (and starvation is not the only arrow) should not be mollified by confident-sounding predictions that population stabilization is in our near future – perhaps it is, perhaps it is not; we simply cannot be sure. The only thing that seems sure is that if population stability is deemed desirable we must, to paraphrase population theorist Joel Cohen, be “ready, willing, and able” to determine our own fertility. An expectation that the existential equation ∆P=B-D will crank out uncomplicated results is historically poorly grounded.
J. Creighton Miller, Jr. and H. David Thurston, "Potato, Irish," in AccessScience, ©McGraw-Hill Companies, 2008.
 Joel Mokyr and Cormac Ó Gráda (1984) New Developments in Irish Population History, 1700-1850 The Economic History Review, New Series, Vol. 37, No. 4, pp. 473-488
Thomas McKeown, R. G. Brown and R. G. Record (1972) An Interpretation of the Modern Rise of Population in Europe. Population Studies Vol. 26, No. 3, pp. 345-382
 K. H. Connell (1951) Some Unsettled Problems in English and Irish Population History, 1750-1845 Irish Historical Studies Vol. 7(28): 225-234
C. J. Woods (2009) "Connell, Kenneth Hugh". Dictionary of Irish Biography. (Eds.) James McGuire, James Quinn. Cambridge, United Kingdom: Cambridge University Press.
 Michael Drake (1963) Marriage and Population Growth in Ireland, 1750-1845 The Economic History Review Vol. 16, No. 2 (1963), pp. 301-313
See Joel Mokyr and Cormac Ó Gráda for details.
Jona Schellekens (1993) The Role of Marital Fertility in Irish Population History, 1750-1840. The Economic History Review, New Series, Vol. 46, No. 2 (May, 1993), pp. 369-378, p. 377
August 15, 2011
Globalization / Human Reason
by Wayne Ferrier
Psychiatrists and psychologists have come to the rational conclusion that man is incapable of coming to a rational conclusion. To a certain extent there may be some truth to this. While we are still in the beginning stages of understanding our own minds, we do have three or four good theories on how our mind operates—though we are far from a comprehensive holistic understanding.
All in all, many, if not most, instances of reasoning in man are what we call bounded rationality. Bounded rationality holds that when making decisions, the rational thought of individuals is limited by the information available to them at the time, the cognitive limitations of their minds, and the finite amount of time before a decision has to be made. Another way to look at bounded rationality is that, because decision-makers lack the ability and resources to arrive at an optimal solution, they instead simplify the choices available to them. Thus the decision-maker seeks a satisfactory solution rather than an optimal one.
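To make the contrast concrete, here is a toy sketch of my own (not from the article; the function names and numbers are invented for illustration, written in Python). It sets a “satisficing” decision-maker, which grabs the first option that clears an aspiration threshold within a limited search budget, against an idealized optimizer that examines everything:

def satisfice(options, score, threshold, budget):
    """Return the first option whose score clears `threshold`,
    examining at most `budget` options (limited time and information)."""
    seen = options[:budget]
    for option in seen:
        if score(option) >= threshold:
            return option
    # Nothing was "good enough" within the budget; settle for the best of what was seen.
    return max(seen, key=score) if seen else None

def optimize(options, score):
    """The unbounded ideal: examine every option and return the best one."""
    return max(options, key=score)

# Hypothetical example: six options rated on a 0-10 scale.
ratings = [4, 6, 7, 3, 9, 8]
print(satisfice(ratings, lambda r: r, threshold=6, budget=3))  # prints 6: good enough, found quickly
print(optimize(ratings, lambda r: r))                          # prints 9: the true optimum, needs the full search

The only point of the sketch is that the satisficer trades the best possible answer for speed and frugality, which is roughly the trade-off the bounded-rationality view says our minds are built around.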
In nature an animal that hesitates and remains indecisive is at a disadvantage to quicker-thinking individuals—a deer stunned by car headlights too many times is not likely to survive very long. It makes sense that there are selective pressures from the environment to mold species capable of making decisions based on just a few facts and then choosing a decisive plan of action. Man is such a creature.
Besides bounded rationality, it is also held that man possesses a theory of mind. This is the idea that an individual understands that others may have a view of the world that differs from their own, or even that others' concepts of the world might be fallacious. Among social animals there may be an advantage to individuals that understand that others may not have all the facts and that they can be misled and deceived. And while this is a simplification of a theory of mind, and perhaps not everything about this ability need be perceived negatively, a theory of mind gives an individual the capability of deception, and hence of manipulating others for the benefit of the self.
Recently some researchers have suggested that reason evolved not to understand truth or even reality, but that our reasoning ability evolved for the sole purpose of winning arguments. Human rationality may be just the impulse to win debates. According to this view, bias and illogic are social adaptations that enable one to persuade and defeat others in arguments—certitude being more important than what the truth may actually be.
This theory of argumentation is strongly tied to well-known and long-held concepts of human thought and behavior, in particular cognitive dissonance. Cognitive dissonance is when people are biased to think their choices are correct, in spite of overwhelming evidence that they are not.
So when you add it up: bounded reason (quick decisions based on limited information); theory of mind and the view that others don't have all the facts and are thus fallible and fool-able; cognitive dissonance (thinking we are right in spite of evidence that we might be wrong); holding onto incorrect views of the world in spite of the facts. Regardless of all this we argue on, purposefully filtering out contrary evidence and valuable information just so we can hold onto our cherished positions and manipulate others.
It is unfortunate but exceedingly interesting that decision-makers often adhere to immobile positions irrespective of the facts. But the adversarial-argumentative approach is a lose-lose proposition most of the time.
Science, which is supposed to be based on empiricism rather than on a priori reasoning, intuition, or revelation, is an ideal solution to the adversarial-only approach, but many so-called scientific voices are not trustworthy. For example, consider the argument concerning climate change. Engaging the scientific community in a discussion about climate change more often than not degenerates into Ad Hominems such as: “You're not scientific if you don't believe in global warming,” or “If you don't believe in global warming you probably don't believe in the law of gravity either.” Often instead of producing facts, we are told the time for discussion is over, that global warming is real and we need to act now, not question it anymore. Global warming is the only hypothesis in science, it seems, that skipped right over becoming a theory and somehow became a physical law in just a few short decades of research.
Major problems worldwide
Now here is a list of real problems which I feel bounded reason, cognitive dissonance, and irrational arguing are unlikely to solve; yet we need to address these issues if we are to survive and thrive as a species. Each problem is factual and is integrally entwined with the others, so that each problem affects all the others in a web of complexity. Our current system of thinking is inadequate to solve any of them satisfactorily alone, let alone all of them woven together.
Regardless of climate change, global warming or no global warming, climate affects weather and weather affects agriculture. We no longer have a worldwide food surplus, in part because of bad weather and in part because of overpopulation. The rise of China and India and other countries has eaten into our surplus, and just a season or two of bad weather has sparked ugly situations such as the Arab Spring. In addition, crises such as the Arab Spring reflect intolerable social inequality, as well as the possibility of impending starvation due to crop failure. Like dominoes, the Arab Spring caused a rise in oil prices affecting western countries dependent on foreign oil. So here in the West we're feeling high fuel prices, high food prices, chronic unemployment, social inequality, and ineffectual government—and the weather isn’t fun either! So what causes all this? Simple: overpopulation. Overpopulation can be tied to just about any major modern problem: disease, famine, habitat destruction, pollution, war, you name it; and if you believe in climate change via man-made carbon emissions, also the weather.
Globalization and economy
It is getting hard to ignore that the time of nations is ending. It has been clear for a while that no nation exists on its own anymore, and what happens to one country affects everyone else. The United States never got out of WWII mode; it went right into the Cold War, and when that ended it got into a lengthy and expensive war against terrorism. In that sixty-five-year period, it ignored the creation of meaningful employment and living-wage jobs. Instead it supported rising inequality and failed to fix its malfunctioning educational system, which has left America's workforce poorly educated and unemployed or marginally employed; many are living in substandard housing, on the streets, or in prison, or are just a paycheck away from doing so. How long do we wait before we have an American Spring? When the American economy finally collapses, so goes the rest of the world.
Inequality and low- or no-wage employment are leading to a massive brain drain in America. Then there is the crumbling infrastructure—left untouched since the brief economic boom after WWII. Here there are meaningful jobs to be had, but nothing is being done.
In the meantime all we do is argue, but we don't listen, nor do we analyze.
A snippet from the NPR Radio Program WAIT WAIT. . .DON'T TELL ME says it best:
PETER SAGAL: It turns out, the reason human beings developed intelligence was not to be better hunters or better survive against other species, but to win arguments. See, the thing that has always puzzled people about human intelligence, how humans got so smart, is why humans are still so stupid.
(Soundbite of laughter)
PETER SAGAL: Because we continually believe things that are incorrect and behave irrationally. And so people evolved, it turns out, the ability to convince themselves they were right even when they were full of it. You see, that's the explanation.
MO ROCCA: That's interesting.
FAITH SALIE: Does this mean that politicians are the most evolved among us?
PETER SAGAL: Exactly.
(Soundbite of laughter)
August 01, 2011
Kipple and Things: How to Hoard and Why Not To Mean
This paper (more of an essay, really) was originally delivered at the Birkbeck Uni/London Consortium ‘Rubbish Symposium‘, 30th July 2011
Living at the very limit of his means, Philip K. Dick, a two-bit, pulp sci-fi author, was having a hard time maintaining his livelihood. It was the 1950s and Dick was living with his second wife, Kleo, in a run-down apartment in Berkeley, California, surrounded by library books Dick later claimed they “could not afford to pay the fines on.”
In 1956, Dick had a short story published in a brand new pulp magazine: Satellite Science Fiction. Entitled Pay for the Printer, the story contained a whole host of themes that would come to dominate his work.
On an Earth gripped by nuclear winter, humankind has all but forgotten the skills of invention and craft. An alien, blob-like species known as the Biltong co-habit Earth with the humans. They have an innate ability to ‘print’ things, popping out copies of any object they are shown from their formless bellies. The humans are enslaved not simply because everything is replicated for them but because, in a twist Dick was to use again and again in his later works, as the Biltong grow old and tired, each copied object resembles the original less and less. Eventually everything emerges as an indistinct, black mush. The short story ends with the Biltong themselves decaying, leaving humankind on a planet full of collapsed houses, cars with no doors, and bottles of whiskey that taste like anti-freeze.
In his 1968 novel Do Androids Dream of Electric Sheep? Dick gave a name to this crumbling, ceaseless, disorder of objects: Kipple. A vision of a pudding-like universe, in which obsolescent objects merge, featureless and identical, flooding every apartment complex from here to the pock-marked surface of Mars.
“No one can win against kipple,”
“It’s a universal principle operating throughout the universe; the entire universe is moving toward a final state of total, absolute kippleization.”
In kipple, Dick captured the process of entropy, and put it to work to describe the contradictions of mass-production and utility. Saved from the wreckage of the nuclear apocalypse, a host of original items – lawn mowers, woollen sweaters, cups of coffee – are in short supply. Nothing ‘new’ has been made for centuries. The Biltong must produce copies from copies made of copies – each replica seeded with errors will eventually resemble kipple.
Objects – things – are mortal, transient. The wrist-watch functions to mark the passing of time, until it finally runs down and becomes a memory of a wrist-watch: a skeleton, an icon, a piece of kipple. The butterfly emerges from its pupa in order to pass on its genes to another generation. Its demise – its kipple-isation – is programmed into its genetic code, a consequence of the lottery of biological inheritance. Both the wrist-watch and the butterfly have fulfilled their functions: I utilised the wrist-watch to mark time; the ‘genetic lottery’ utilised the butterfly to extend its lineage. Entropy is absolutely certain, and pure utility will always produce it.
In his book Genesis, Michel Serres argues that objects are specific to the human lineage. Specific, not because of their utility, but because they indicate our drive to classify, categorise and order:
“The object, for us, makes history slow.”
Before things become kipple, they stand distinct from one another. Nature seems to us defined in a similar way: between a tiger and a zebra there appears a broad gap, indicated in the creatures’ inability to mate with one another; indicated by the claws of the tiger and the hooves of the zebra. But this gap is an illusion, as Michel Foucault neatly points out in The Order of Things:
“…all nature forms one great fabric in which beings resemble one another from one to the next…”
The dividing lines indicating categories of difference are always unreal, abstracted from the ‘great fabric’ of nature, and understood through human categories isolated in language.
Humans themselves are constituted by this great fabric: our culture and language lie on the same fabric. Our apparent mastery over creation comes from one simple quirk of our being: the tendency we exhibit to categorise, to cleave through the fabric of creation. For Philip K. Dick, this act is what separates us from the alien Biltong. They can merely copy, a repeated play of resemblance that with each iteration moves away from the ideal form. Humans, on the other hand, can do more than copy. They can take kipple and distinguish it from itself, endlessly, through categorisation and classification. Far from using things until they run down, humans build new relations, new meanings, carefully and slowly from the mush. New categories produce new things, produce newness. At least, that’s what Dick – a Platonic idealist – believed.
At the end of Pay for the Printer, a disparate group camp in the kipple-ised, sagging pudding of a formless city. One of the settlers has with him a crude wooden cup he has apparently cleaved himself with an even cruder, hand-made knife:
“You made this knife?” Fergesson asked, dazed.
“I can’t believe it. Where do you start? You have to have tools to make this. It’s a paradox!”
In his essay, The System of Collecting, Jean Baudrillard makes a case for the profound subjectivity produced in this apparent newness.
Once things are divested of their function and placed into a collection, they:
“…constitute themselves as a system, on the basis of which the subject seeks to piece together [their] world, [their] personal microcosm.”
The use-value of objects gives way to the passion of systematization, of order, sequence and the projected perfection of the complete set.
In the collection, function is replaced by exemplification. The limits of the collection dictate a paradigm of finality; of perfection. Each object – whether wrist-watch or butterfly – exists to define new orders. Once the blue butterfly is added to the collection it stands, alone, as an example of the class of blue butterflies to which the collection dictates it belongs. Placed alongside the yellow and green butterflies, the blue butterfly exists to constitute all three as a series. The entire series itself then becomes the example of all butterflies. A complete collection: a perfect catalogue. Perhaps, like Borges’ Library of Babel, or Plato’s ideal realm of forms, there exists a room somewhere with a catalogue of everything. An ocean of examples. Cosmic disorder re-constituted and classified as a finite catalogue, arranged for the grand cosmic collector’s singular pleasure.
The problem with catalogues is that absolutely anything can be collected and arranged. The zebra and the tiger may sit side-by-side if the collector is particularly interested in collecting mammals, striped quadrupeds or – a particularly broad collection – things that smell funny. Too much classification, too many cleaves in the fabric of creation, and order once again dissolves into kipple. Disorder arises when too many conditions of order have been imposed.
“[W]e must think of chaos not as a helter-skelter of worn-out and broken or halfheartedly realised things, like a junkyard or potter’s midden, but as a fluid mishmash of thinglessness in every lack of direction as if a blender had run amok. ‘AND’ is that sunderer. It stands between. It divides light from darkness.”
Collectors gather things about them in order to exert a mastery over the apparent disorder of creation. The collector attains true mastery over their microcosm. The narcissism of the individual extends to the precise limits of the catalogue he or she has arranged about them. Without AND, language would function as nothing but pudding, each clause, condition or acting verb leaking into its partner in an endless series. But the problem with AND, with classes, categories and order, is that they can be cleaved anywhere.
Jorge Luis Borges exemplified this perfectly in a series of fictional lists he produced throughout his career. The most infamous, which Michel Foucault claimed influenced him to write The Order of Things, refers to a “certain Chinese encyclopaedia” in which:
Animals are divided into
- belonging to the Emperor,
- sucking pigs,
- stray dogs,
- included in the present classification,
- drawn with a very fine camelhair brush,
- et cetera,
- having just broken the water pitcher,
- that from a long way off look like flies…
In writing about his short story The Aleph, Borges also remarked:
“My chief problem in writing the story lay in… setting down of a limited catalog of endless things. The task, as is evident, is impossible, for such a chaotic enumeration can only be simulated, and every apparently haphazard element has to be linked to its neighbour either by secret association or by contrast.”
No class of things, no collection, no cleaving of kipple into nonkipple can escape the functions of either “association OR contrast…” The lists Borges compiled are worthy of note because they remind us of the binary contradiction classification always comes back to:
- Firstly, that all collections are arbitrary
- and Secondly, that a perfect collection of things is impossible, because, in the final instance there is only pudding “…in every lack of direction…”
Human narcissism – our apparent mastery over kipple – is an illusion. Collect too many things together, and you re-produce the conditions of chaos you tried so hard to avoid. When the act of collecting comes to take precedence over the microcosm of the collection, when the differentiation of things begins to break down: collectors cease being collectors and become hoarders. The hoard exemplifies chaos: the very thing the collector builds their catalogues in opposition to.
To tease apart what distinguishes the hoarder from the collector, I’d like to introduce two new characters into this arbitrary list I have arranged about myself. Some of you may have heard of them; indeed, they are the brothers after whom the syndrome of compulsive hoarding is named.
Brothers Homer and Langley Collyer lived in a mansion at 2078 Fifth Avenue, Manhattan. Sons of wealthy parents – their father was a respected gynaecologist, their mother a renowned opera singer – the brothers both attended Columbia University, where Homer studied law and Langley engineering. In 1933 Homer suffered a stroke which left him blind and unable to work at his law firm. As Langley began to devote his time entirely to looking after his helpless brother, both men became locked inside the mansion their family’s wealth and prestige had delivered. Over the following decade or so Langley would leave the house only at night. Wandering the streets of Manhattan, collecting water and provisions to sustain his needy brother, Langley’s routines became obsessive, giving his life a meaning above and beyond the streets of Harlem that were fast becoming run-down and decrepit.
But the clutter only went one way: into the house.
On March 21st 1947 the New York Police Department received an anonymous tip-off that there was a dead body in the Collyer mansion. Attempting to gain entry, police smashed down the front door, only to be confronted with a solid wall of newspapers (which, Langley had claimed to reporters years earlier, his brother “would read once his eyesight was restored”). Finally, after climbing in through an upstairs window, a patrolman found the body of Homer – now 65 years old – slumped dead in his kippleised armchair. In the weeks that followed, police removed one hundred and thirty tons of rubbish from the house. Langley’s body was eventually discovered crushed and decomposing under an enormous mound of junk, lying only a few feet from where Homer had starved to death. Crawling through the detritus to reach his ailing brother, Langley had triggered one of his own booby traps, set in place to catch any robbers who attempted to steal the brothers’ clutter.
The list of objects pulled from the brothers’ house reads like a Borges original. From Wikipedia:
Items removed from the house included baby carriages, a doll carriage, rusted bicycles, old food, potato peelers, a collection of guns, glass chandeliers, bowling balls, camera equipment, the folding top of a horse-drawn carriage, a sawhorse, three dressmaking dummies, painted portraits, pinup girl photos, plaster busts, Mrs. Collyer’s hope chests, rusty bed springs, a kerosene stove, a child’s chair, more than 25,000 books (including thousands about medicine and engineering and more than 2,500 on law), human organs pickled in jars, eight live cats, the chassis of an old Model T Ford, tapestries, hundreds of yards of unused silks and fabric, clocks, 14 pianos (both grand and upright), a clavichord, two organs, banjos, violins, bugles, accordions, a gramophone and records, and countless bundles of newspapers and magazines.
Finally: There was also a great deal of rubbish.
A Time Magazine obituary from April 1947 said of the Collyer brothers:
“They were shy men, and showed little inclination to brave the noisy world.”
In a final ironic twist of kippleisation, the brothers themselves became mere examples within the system of clutter they had amassed. Langley especially had hoarded himself to death. His body, gnawed by rats, was hardly distinguishable from the kipple that fell on top of it. The noisy world had been replaced by the noise of the hoard: a collection so impossible to conceive, to cleave, to order, that it had dissolved once more to pure, featureless kipple.
Many hoarders achieve a similar fate to the Collyer brothers: their clutter eventually wiping them out in one final collapse of systemic disorder.
But what of Philip K. Dick....?
In the 1960s, fuelled by amphetamines and a debilitating paranoia, Dick wrote 24 novels and hundreds of short stories, the duds and the classics mashed together into an indistinguishable hoard. UBIK, written in 1966, tells of a world which is itself degrading. Objects regress to previous forms: 3D televisions turn into black-and-white tube-sets, then stuttering reel projectors; credit cards slowly change into handfuls of rusted coins, impressed with the faces of Presidents long since deceased. A character turns his back for a few minutes and finds his hover vehicle has degraded into a bi-propeller airplane.
The Three Stigmata of Palmer Eldritch, another stand-out novel from the mid 60s, begins with this memo, “dictated by Leo Bulero immediately on his return from Mars”:
“I mean, after all; you have to consider we’re only made out of dust. That’s admittedly not much to go on and we shouldn’t forget that. But even considering, I mean it’s a sort of bad beginning, we’re not doing too bad. So I personally have faith that even in this lousy situation we’re faced with we can make it. You get me?”
July 25, 2011
Brain, liquefaction of
The following is an excerpt from my unpublished manuscript “A Shorter History of Bodily Fluids”
Brain, liquefaction of: also known as encephalomalacia (from the Greek, μαλακία softening), necrencephalus (from Greek, νεκρο + κεϕαλή deadhead), ramollissement cérébral (from the French ramollissement cérébral), cerebromalacia (from the Greek, μαλακία a colloquial onanist, esp a vehicular onanist; cf blood, semen), cerebral softening (from the Old English soft meaning soft), or more commonly, softening of the brain (pronounced US /breɪn/). When the tissue affected is white matter it is called leukoencephalomalacia; polioencephalomalacia refers to necrosis of the gray matter. This condition may manifest as multiple necrotic fluid-filled cavities replacing healthy brain tissue. It is preferable to inspect this necrosis post-mortem especially if attempting to administer home remedies. If you are a sheep the following suite of symptoms will be diagnostically useful in identifying brain liquefaction: somnolence, short sightedness, ataxia (poor coordination), head pressing, tumblesaulting, walking in circles, walking bipedally, excessive bleating or bleating in prime numbers, and terminal coma. I treated a mouse once that after a fall complained to me that she could only walk in circles. It greatly affected her travel plans and she died penniless, vastly undereducated, and living very close to where she was born.
If after munching on yellow star thistle (Centaurea solstitialis) you become excessively sleepy or find yourself given to aimless wandering and go off your feed, you might be a horse. Unfortunately you also have a condition called nigropallidal encephalomalacia. Avoid prehending Russian Knapweed. If you are a chicken and have ataxia, paralysis, severe softening of the brain, and are brooding excessively on death you have “crazy chick disease”. Take vitamin E capsules with your feed and avoid gassy foodstuffs. Rhinoceroses should also remember to regularly get their vitamin E levels assessed; consider doing so even between regular checkups. If you are a Rhinoceros be vigilant for signs of depression; if you are feeling down, just pop in to your vet. If your condition has progressed to coma, it’s best to have him visit you.
Clinical notes of liquefaction of the brain
Fragment from the journal of Dr K, of Naumburg
“I had a patient today (to protect his anonymity I will refer to him as Master F Nietzsche) who presented with headaches. Friedrich is 18. He is a squat young man, moody and diffident; short sighted in one eye, long-sighted in the other. The locations of his headaches are worth remarking: one of them was on his glabella, another on one of the supraorbital processes, a very thin headache runs along the coronal suture, one sits on the patellar groove, and there is a persistent one above his pronounced ischial callosities. N complains of cephalalgia throughout his body. He is also suffering from a great despondency which expressed itself in a fixed stare and excessive sighing. Apparently his father went blind and wasted away, dying young from liquefaction of the brain. He fears this same fate. I recommended a companion animal to him but he muttered that his dog was already dead, or was it that the log is painted red? I prescribed fresh air, a moustache, and morose meditation.” (translation mine)
The ramollissement of Mr P
I had occasion to work quite recently with William Madden, MD, Physician of the Torbay Infirmary and Dispensary on the following fascinating case of ramollissement of the grey matter of the medulla. Our patient, Mr P came under our care in the late summer of 1838. Mr P had engaged in heavy drinking with some rowdy boys, greedily joining in on their excessive imbibitions. After this he developed a burning pain on the instep of his left foot. He lost much of the feeling in the ailing foot and the lower part of the leg. When he walked it felt as though he were walking upon “heaps of warm bran.” After a chilly journey to Roslin a few miles from his home his face stiffened on the side closest to the carriage window. Dr Madden and I prescribed the following usually very efficacious cures: bleeding, blistering of the head and spine, and severe purgation – these continuing for several days, ceasing only when Mr P partially lost his vision. Naturally enough we tried galvanism though I am not inclined to inform you how much we shocked the ailing man as Dr Madden and I disagreed on precisely this point. Alas after six tries Mr P abandoned the cure. He also refused more bleeding. His family reported that he was becoming increasingly irritable and burdensome at home. His bowels remained open and his stools loose but not excessively so (cf. Stool, runny). As the days wore on the pain increased and the patient’s arms were in constant motion. We bled him, draining him to the point that his pulse dropped and then administered a purgative to his unwilling bowels. He slept poorly but his bowels were productive. We bled him, and bled him again. Finally the sensations came back to his feet after which Mr P died. The sectio cadaveris performed forty-two hours after death revealed that the ventricles were distended with fluid, with much of it spilling over into the spinal canal. Other parts of the brain were pulpy. The center of the spinal cord had become completely fluid.
A case of brain shrinkage and liquefaction
During the post-mortem examination of a Mr S I found that when I sawed open his head there was a very significant quantity of clear serum on the surface of the brain. I had treated this man alongside Dr Thomas Nunneley. You probably know Nunneley as the surgeon to the Leeds General Eye and Ear Infirmary. Mr S suffered from wakeful nights and complained of heat in his head. After he was seized by a fit in September 1841 Dr Nunneley and I suspected acute liquefaction of the brain. The patient was cupped, leeched, blistered, and administered mercurous chloride, henbane with camphor, and strychnine. Naturally, he improved. Little changed in his condition with the exception of the growing offensiveness of his language, something he was not inclined towards when in good health. Additionally he took to yelling out “Oh dear! Oh dear!” or would occasionally mutter to the servants “Is there Mary”, or “What do you say Charles”. I am reminded here of the case reported to me by my colleague Dr G of Genoa who related that as the Irish leader Daniel O’Connell lay dying of softening of the brain he repeatedly murmured “Jesus…Jesus…Jesus…”. The “Liberator” and Member of Parliament for Dublin died in 1847 a year after Mr S. To continue, Mr S’s bowels were constipated. After his fit he lingered for two years and died in his chair. As I said, when I examined him postmortem the surface of the brain was excessively wet. When I dissected the hemisphere I found the ventricles distended with serum and the lining of the ventricles was pultaceous. I have never seen such a small cerebellum. I did not have an opportunity to weigh this organ.
A note on sources
I am especially indebted to my former student, the late Professor E Z, whose magisterial General and Special Pathology, originally published in 1881, usefully synthesized our current clinical knowledge of the liquefaction of necrotic tissue. Z was Professor of Pathology in the University of Freiburg; before this he was Chair of Pathology and Morbid Anatomy in the University of Zurich and later at Tübingen. Beloved by his students, his specialty was in “tubercle” and in the cellular nature of inflammation. Another discovery of Z’s: “All life”, he said, “comes sooner or later to an end – to death.” [Emphasis Z’s]. This fact I suppose was well enough known before his time; science, however, often calls for the bold statement of the obvious. Yet another insight of Professor Z’s: “When death occurs prematurely…it must be regarded as a pathological phenomenon.” At the time of Z’s death we were working up our autopsy notes on the case of a retired philologist from the University of Basel. This man had gained some notoriety as a philosopher-poet. Our philologist had lapsed into a demented silence after his 1889 collapse in Turin, and had eventually died on August 25th 1900 after a series of apoplectic fits. Though tertiary cerebral syphilis was suspected, Drs Binswanger and Ziehen, the philologist’s physicians, contrary to the desire of his sister, requested a post-mortem confirmation of the diagnosis. Alas, our dear Professor Z died in Freiburg aged 56 before we completed the manuscript. The location of the autopsy notes is unknown at this time. I shall reconstruct them at a later stage as it has not escaped my notice that there has been some speculation among the greater public on this case. The philologist is buried next to his beloved father in Röcken.
I extend gratitude to my colleagues Drs Madden and Nunneley for sharing with me their notes and manuscripts (listed below) on these edifying cases of liquefaction of the brain; these amply jogged my memory which has become diminished of late.
Krell, David F. and Bates, Donald L. (1999) The Good European: Nietzsche's Work Sites in Word and Image. University of Chicago Press.
Madden, William H. (1850) Illustrations of Diseases of the Nervous System. London Journal of Medicine, Vol. 2, No. 13 (Jan., 1850), pp. 10-16.
Miller, R. Eric, Richard C. Cambre, Alexander de Lahunta, Roger E. Brannian, Terry R. Spraker, Carol Johnson, and William J. Boever (1990) Encephalomalacia in Three Black Rhinoceroses (Diceros bicornis). Journal of Zoo and Wildlife Medicine, Vol. 21, No. 2 (Jun., 1990), pp. 192-199.
Nunneley, Thomas (1846) Case of Diminished Brain. Provincial Medical and Surgical Journal (1844-1852), Vol. 10, No. 26, pp. 297-299.
O'Faoláin, Seán (1938) King of the Beggars: A Life of Daniel O'Connell, the Irish Liberator, in a Study of the Rise of the Modern Irish Democracy (1775-1847). The Viking Press.
Thom, Alexander (1906) Ernst Ziegler, M.D., Professor of Pathology, University of Freiburg. The British Medical Journal, Vol. 1, No. 2352, pp. 236-237.
Ziegler, E. (1898) General Pathology. Translated by Aldred Scott Warthin. William Wood and Company.
July 18, 2011
Sunday Morning in a Northeastern Old Growth Forest
God is the experience of looking at a tree and saying, "Ah!"
Most people who reside in the Northeastern United States don’t know that there are remains of old growth forests scattered here and there among them. And most don’t care. The human species is not hard-wired to appreciate these things. The people who do appreciate them have a difficult time digesting this, but it’s true. Most people’s world view is a social reality imprinted and reinforced by the way other human beings look at the world. Human beings are social animals and few could survive alone in the wilderness; they’d starve or succumb to the elements. However, most would lose their sanity long before the unforgiving laws of nature would get them. We see this phenomenon in our prisons, where inmates prefer to be out in the yard even if other inmates are waiting there to kill them. Being killed by one’s fellows is far preferable to the worst of fates—solitary confinement. In ancient times the worst thing that could happen to you was banishment.
Natural selection has certainly predisposed human beings to be with other human beings, to gravitate towards other human beings even if they don’t like them, and to see things the way other human beings do because it enhances their survival. Human beings trade reality for social reality. Yes there are differences between people but the differences are minor when compared to the way things are outside of our towns and cities. Anyone who has studied science, for example, knows that the universe doesn’t work—not even remotely—the way that most of human society thinks it does. And this may be a reason why many people have a hard time with science—it violates one’s sense of reality in much the same way that psychoactive drugs like LSD do, by dismantling and reassembling one’s perception of the universe.
Nature can be just as trying. If you are “out there” too long it can alter your state of mind by changing your perception of it. Few people can handle this. But a certain few can, and these folks might have a predisposition or a domain specificity towards nature—the circuitry of their nervous system is geared to specialize in that specific kind of reality. Scientists might be wired differently; so might naturalists, police, emergency responders, teachers, morticians, and mechanics; each has the generalized social intelligence we all share while specializing in an area that others know nothing about. But for most of us the idea of getting back to nature might be a myth. In our past we might have been closer to nature, but we probably were never truly happy living in it as a group.
And for the planet this may be a good thing. Towns and cities, artificial as they are, might have saved the rest of the planet from our kind. If human beings didn’t concentrate in highly populated areas, they would be spread more uniformly across the continents, and human beings are harder on the environment than a herd of elephants is. So cities it is!
Eastern old growth forests are few and far between but they do exist. It’s not fair to compare the trees in them, in size or age, to the impressive stands in the western United States. Redwoods of the Sierra Nevada can be as old as 1,500 to 3,000 years and reach 280 feet. Foxtail pines and their white-pine relatives can get even older, though they are not so impressive in size; some bristlecones in this group are reputed to be nearly 5,000 years old! East Coast species are junior members in this venerable club.
However, when Europeans first came to Northeastern North America they were faced with a sea of old growth forests, which was something they were not quite used to. The first attempts to establish colonies here ended in disaster. With sheer persistence the Puritans succeeded by intensifying the rigidity of their social structure and hugging the coasts. To the north and south of them, more adventurous individuals penetrated into the New Hampshire and Pennsylvania wilderness and tried to tame it. Early on the Dutch made forays into Upstate New York but didn’t last. The French were more adaptable, befriending certain native tribes and penetrating deep into the interior—they were a special breed.
The Woodland Indians themselves were closer in spirit to these forests than the Europeans were. But even they kept to their village life most of the time. They slashed and burned the forests and planted fields of corn, beans, and squash; they had orchards that were the envy of their white neighbors. The strongest of these were the Haudenosaunee, commonly called the Iroquois, and they were quite an advanced civilization. They had a sophisticated government, and their extensive roads and trails stretched from the Atlantic to the Great Lakes and from Canada to Pennsylvania. They managed to balance the power between the English and the French, and the numerous other tribes to the north, to the south, to the east and to the west.
The British had been cutting the trees in New England. And the Eastern White Pine was especially coveted. It was said that there was so much White Pine in the Northeast woodlands that a squirrel could spend a squirrel’s lifetime hopping from one branch to another and never reach the end of it. Straight and tall, light and sturdy, relatively weather-resistant, the tallest White Pines made the best masts for sailing ships, and England was engaged, at the time, in major conflicts with France—good ships were necessary. And when these were gone the timber was used for just about everything else. Early America was built on white pine, and much of it was exported to the rest of the world as well.
The American settlers didn’t appreciate the French who, along with their Indian allies, would make life exceedingly difficult for anyone who had the guts to penetrate and try to tame the interior. But when the French were defeated in the French & Indian War, the Americans became more irritated with the British Government, who wanted the timber for themselves, who demanded first dibs on the cod fisheries, who wanted to control the rum and slave trades—all very lucrative—and the British Government wanted the Americans to pay their taxes to help reimburse the British for that costly war with the French; and the Americans were not willing to do that.
All in all, once the British were defeated the Americans again moved into the interior, cutting trees, clearing fields for farmland, and establishing forts and villages. Only the powerful Iroquois stood in their way; but after one skirmish too many Washington lost patience and sent troops in to wipe the Haudenosaunee from the face of the earth. The Sullivan Campaign moved into Iroquoia, burned their villages, chopped down their orchards, and destroyed their fields; and anything else they could find. Without their orchards, without their fields, without their grain stores, the Indians were as helpless as any white man facing the elements of the northeastern forests and the coming winter. The Iroquois either retreated to Canada or faced starvation.
With the Iroquois out of the way pioneers quickly moved into the interior, at first hunting, fishing, and trapping, then logging and farming. A lot of timber was burned simply to make charcoal and potash, or cut for roof shingles. Virgin soils were farmed, depleted, and then the farms abandoned. Much of New England is forest that has taken over and reclaimed abandoned farmland.
By the turn of the twentieth century, most of the virgin timber, as far west as Minnesota, had been cut. Clear-cutting continued into the 1950s, and today we are left with juvenile forests—unhealthy ecosystems infested with disease.
Eastern forests are reviving, but it will be centuries before they become what they once were. The Appalachians are aggressive mountains. Time and again people move in from the city, cut everything down, bulldoze out a driveway and plant a big lawn; they put up a pool, and try to grow a lot of exotics. They display their plastic pink flamingos and ride their expensive lawn mowers in the pursuit of the American Dream. Ah, social reality. Give or take a decade, our cozy family is divorced or deceased, or worse—surrendered to the forest. The yard is unkempt and the forest is back. The natural state of the Northeast is forest.
But for now I live in the city and the fight goes on. I share my city with deer, possums, skunks, ground hogs, squirrels, birds—critters who have gotten used to this—and people. Every summer without fail some guy with a little too much testosterone goes out and rents a chainsaw and the cutting ritual begins again. This year my neighbors cut down a beautiful 75-year-old maple, so they could set fireworks off on the Fourth of July—sigh. So now I need to get out of here for a while and spend my Sundays lying on a carpet of pine needles, listening to the sweet sounds of a thrush welcoming the morning, staring up at a 150-foot tree in an old growth stand, and contemplating. Was this a seedling when Shakespeare wrote Hamlet? Was it 100 years old before the first white settlers ever made it to these parts?
July 11, 2011
Babies, Breast Milk, and Bifidobacteria
by Meghan Rosen
Earlier this year, a London ice cream parlor debuted an attention-grabbing new flavor that made headlines around the world and sold out within days. The flavor, Baby Gaga, was infused with Madagascan vanilla and lemon zest and served in a martini glass chilled with liquid nitrogen. But at over $22 a serving, customers weren’t coming for its gourmet spices or upscale presentation; they were coming for its star ingredient, its claim to fame: human breast milk.
Just a week after giving birth, women who exclusively breastfeed produce, on average, more than 500 milliliters of milk per day. In parlor measurements, that’s about a pint of liquid. At 6 weeks, this amount has typically increased by about 50%; in some highly productive women, it can even double. For women with an abundant supply, excess milk can be drawn out with an electric pump and stored for future consumption (by baby, or in London, by high-paying ice cream connoisseurs.)
In an interview with the Daily Mail, the London parlor’s proprietor played up the novelty of his new flavor, but his description of its taste (‘creamy and rich’) was comfortably familiar. Flavor-wise, how does milk from humans compare to milk from cows? Can you even taste a difference? I don’t live in London, but I do have an ice cream maker. It’s in my freezer, right next to 2 liters of frozen breast milk.
Three weeks after the birth of my daughter, nightly pumping sessions left me with an unexpected, but not altogether unwelcome problem: I ran out of bottles to store milk in. (It’s not uncommon for women to produce too much or too little milk; it often takes weeks to establish a supply that matches the baby’s appetite.) After moving on to glass jars, ice cube trays, and finally, proper storage baggies, I had amassed enough milk to make more than 100 servings of ice cream (following Baby Gaga recipe proportions).
As bodily fluids go, breast milk is not an unlikely candidate for dessert innovation. After all, the most abundant component is sugar; the next is fat. Those two ingredients are about all you need to make a tasty frozen treat, and since a mother’s milk is steeped in the flavors, smells, and colors of what she eats, additives may even be unnecessary. A garlicky dinner, for example, predictably changes the taste of human breast milk, and babies tend to like it. One study even found that babies preferred their mother’s garlic-imbued milk to milk that was garlic-free.
After sugar and fat, the third most common component of human breast milk is not what you might think: it’s not protein, it’s not vitamins, in fact, it’s not even digestible by babies. Human milk includes a hefty proportion of molecules called human milk oligosaccharides, or HMOs (essentially long chains of simple sugars linked together in different conformations), that travel from the mother’s breast to the infant’s mouth and pass right on through its digestive tract.
Until recently, scientists considered these compounds just a bulky byproduct of lactation; after all, if it didn’t directly provide nutrition for the baby, what could it be good for?
But making milk isn’t free; for the mother, it’s actually quite expensive. It takes about 500 calories to fill and continually restock the breasts with a baby’s daily nourishment. Typically, a woman will burn fat stored during pregnancy or simply increase her food intake to meet the demand, but if she’s not getting enough nutrients, her body will tap into its own emergency reserves (like her bones or her teeth) to provide the baby with what it needs.
Rich milk makes for chubby, healthy babies, and healthy babies have a greater chance at survival, but it’s a finely balanced system: take too much from the mother and her health may be at risk. If every part of the milk comes at a cost, it’s unlikely that any part would be extraneous (especially those that are most abundant). Why waste the calories?
The initial understanding of HMOs wasn’t exactly wrong – babies can’t use the long chains of sugar as a source of nutrition – but it was missing one key point: other organisms can. HMOs may be indigestible by humans, but they’re the perfect food source for bacteria: in particular, Bifidobacterium longum infantis, a species that’s specialized to live in a baby’s gut.
Researchers at UC Davis have shown that bifidobacteria have a unique set of genes that is particularly suited for allowing growth in an infant’s intestine, where HMOs are abundant. Their work, profiled in the NY Times last year, helps explain why humans may have evolved to invest so heavily in a milk ingredient that is, for us, inedible.
Because bifidobacteria thrive on HMOs, they have a leg up on other, less benevolent bacteria that are also clamoring for a home in the intestine. The well-fed bifidobacteria crowd out potential pathogens, effectively protecting the baby from infections. Breastfed babies tend to have fewer intestinal diseases and less constipation than their formula-fed counterparts: much of this is attributed to a gut full of beneficial bacteria living in harmony with their newborn human host.
Besides cultivating a community of ‘good’ intestinal bacteria, HMOs are also thought to trick ‘bad’ bacteria by mimicking the cells lining a baby’s gut. Instead of attaching to the baby’s cells and sneaking past its defenses to start an infection, pathogens bind to HMOs (which are replenished every time the baby nurses) and are flushed out with the waste.
Breast milk is tailor-made for guarding a baby’s newly developing immune system (according to the World Health Organization, it’s the best thing parents can feed their infants), and many people are willing to pay a premium for it. For mothers with milk supply problems, there’s an unregulated, craigslist-style market where human breast milk can fetch more than $2.50 an ounce, and women advertise their milk as ‘organic’, ‘vegetarian’, and ‘free-range’. (The FDA does not approve.)
Human milk is a hot commodity, and not just for new parents. At OnlyTheBreast.com, among buyer listings for ‘Local Milk’ and ‘Special Diet Milk’, there’s also a category for ‘Men Buying Milk’. (As of today, there were 17 buyers.)
Although breast milk is the gold standard for baby food, its cost can be prohibitive (unless you are making your own, human milk is much more expensive than formula), and its quality is not guaranteed (infectious diseases can be passed through milk, and there’s no screening in place to protect potential buyers). Current formula alternatives attempt to imitate human milk, but lack the immune-protective benefits and bacterial-promoting pre-biotics (like HMOs).
It might be possible, however, to create a more milk-like formula by studying human breast milk; this could give premature infants (whose mothers’ milk often takes longer to come in) a healthier start to life. Donated milk, though, is in short supply for milk banks, and in even shorter supply for research. A lactation consultant at UC Davis told me milk researchers on campus were always thrilled to receive human milk donations because they're not easy to come by. Unless, like me, you happen to have a freezer full of them. And live in Davis. For now, I think homemade ice cream may have to wait.
July 04, 2011
Three Island Stories
by Kevin Baldwin
Islands have always been fascinating places. The old story-tellers, wishing to recount a prodigy, almost invariably fixed the scene on an island — Faery and Avalon, Atlantis and Cipango, all golden islands just over the horizon where anything at all might happen. And in the old days at least it was rather difficult to check up on them. Perhaps this quality of potential prodigy still lives on in our attitude towards islands.
— John Steinbeck, from The Log from the Sea of Cortez, 1941
In addition to providing great settings for stories, islands have also been a source of fascination and inspiration to biologists. They have had an influence on biology, ecology, and conservation that is far greater than their small areas would suggest. Because they frequently occur in groups called archipelagos, they provide separate but similar environments that have, in effect, acted as replicated natural experiments for both nature and the scientists who study it. In the 19th century, Darwin and Wallace's explorations of the Galapagos Islands and Malay Archipelago clearly demonstrated patterns in nature that begged for explanation. It is doubtful that they would have made their intellectual leaps to the elucidation of natural selection without having experienced those sites first-hand. Islands are like conceptual models: They offer simplified versions of reality. Because islands are smaller and less diverse than continents, patterns on them are easier to see and comprehend.
I. Island Biogeography
In the 20th century, islands were important in advancing our understanding of the origin and maintenance of species diversity. In 1967, Robert MacArthur and Edward Wilson published a book entitled "The Theory of Island Biogeography" that revolutionized the study of ecology and biogeography. MacArthur and Wilson's approach was radical in that it deliberately avoided historical explanations for species diversity and sought to identify and explain more general patterns based upon current organisms' attributes and their relationships to current environments. It also refocused ecological inquiry from simply describing patterns to generating and testing theories that could account for those patterns.
The three island patterns that were linked together by a common theory were:
1. Species-area relationships: Larger islands have more species than smaller ones (there are more places to live, and species are less likely to go extinct if there are more individuals spread over a large area).
2. Isolation: Islands that are farther away from the mainland have fewer species than ones close to land.
3. Species turnover: The number of species on an island tends to remain constant although the identity of the species may change through a process called species turnover.
This last pattern was well documented on the island of Krakatoa, which blew up in 1883. Expeditions to the island following the explosion documented a steadily increasing number of species until 1920, after which diversity remained constant, with species extinctions being balanced by new colonizations.
MacArthur and Wilson's model of island biogeography was presented elegantly and graphically (see Figure), with extinction and colonization rate curves intersecting at an equilibrium (i.e., constant) number of species. In the figure, the low colonization rate curve, as would be observed on an isolated (far) island, crosses the high extinction rate curve, as would be observed on a small island, at a low number of species (where the two heavy lines cross). Similarly, the intersection of low extinction (large island) and high colonization (near island) is at a higher number of species. The other combinations of colonization and extinction rates yield intermediate equilibrium species numbers.
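To make the equilibrium idea concrete, here is a minimal numerical sketch of my own; it is not taken from MacArthur and Wilson (whose rate curves were not straight lines), and the linear rates, parameter values, and function name below are illustrative assumptions only. Colonization falls as the island fills with species from a mainland pool, extinction rises with the number of residents, and the equilibrium is the species number at which the two rates are equal.

# Sketch of the island-biogeography equilibrium, assuming linear rate curves.
# P = species in the mainland pool; c = colonization factor (higher for near islands);
# e = extinction factor (higher for small islands). All values are hypothetical.

def equilibrium_species(P, c, e):
    """Solve c * (P - S) = e * S for the equilibrium species number S*."""
    return c * P / (c + e)

P = 1000  # hypothetical mainland species pool
for label, c, e in [("near, large", 0.20, 0.05),
                    ("near, small", 0.20, 0.20),
                    ("far, large", 0.05, 0.05),
                    ("far, small", 0.05, 0.20)]:
    print(label, "-> about", round(equilibrium_species(P, c, e)), "species")

Run as written, the sketch reproduces the ordering the figure describes: the near, large island holds the most species, the far, small island the fewest, and the mixed cases fall in between.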
This model ushered in a new era of ecology by setting out a general theoretical framework with which to interpret the world rather than just noting which species were located where. The use of mathematics in the model led many investigators to think more clearly about ecological problems and to identify which variables needed to be measured and which could be safely ignored. One of Ed Wilson's students (Dan Simberloff) tested the model by fumigating mangrove islands of different sizes and distances from the Florida Keys and then monitoring rates of recolonization, extinction, and equilibrium species numbers. It worked as predicted, and the theory was supported.
Today, island biogeographic theory is providing valuable insight into some of the problems facing conservation biology. It is no secret that increasing human land use has had a detrimental effect on species diversity around the globe. Island biogeography informs us as to how and why it is occurring and how we may best preserve what is still left.
If we think of large patches of undeveloped habitat as large islands, then we can understand that initially they should support a high diversity of species. As development occurs within large plots, they will be effectively divided and isolated into smaller and smaller islands. What is an ordinary road to us can present an impassable obstacle to some species. A superhighway could be a barrier to all except birds. Even some forest-dwelling birds will not cross open areas. Something as seemingly innocuous as a lawn may be as forbidding as a paved parking lot to some species.
The equilibrium number of species on an island is a balance between colonization and extinction. Smaller islands of habitat will have higher rates of extinction because they will support smaller populations that are more likely to go extinct due to chance. They also are likely to be structurally simpler with fewer habitat types that can support fewer species. Small islands also have more perimeter relative to their area, and this increased edge allows more incursions by predators and parasites. As development continues, the habitat patches will get smaller and more isolated from one another. Isolation makes it less likely that new colonization will make up for higher extinction rates. Habitat fragmentation can continue in this manner until only small patches of habitat with few species remain. Small species may live out their entire lives within one patch and thus be less likely to suffer these effects. Large ones may not be so lucky. Fragmentation is one reason why large predators like bears, panthers and wolves are especially susceptible to extinction.
One partial solution to the fragmentation problem may be to connect island reserves to one another with corridors of habitat. However, some worry that narrow corridors may increase mortality (by increasing "edge") and/or act as corridors for disease as well. Currently, most people think that large reserves are the best bet for preserving species diversity because as large "islands" they intrinsically have lower extinction rates. Having large reserves near or connected to one another could increase colonization rates. Buffer zones of less dense development around reserves may also increase their effective size and connectedness.
If we are interested in preserving biodiversity, and add climate change to the mix, it is harder to remain optimistic. Imagine you are living in an isolated patch of habitat and then the temperature increases. In an ideal, "whole" world you can imagine moving up mountains or towards the poles to remain in your preferred or even required temperature zone. This is not so easy in a fragmented world.
II. Volcanic Islands
Islands are exemplars of the immense creative and destructive powers of geology and time. They are reminders that "mankind inhabits this earth subject to geological consent—which can be withdrawn at any time" (Winchester 2011).
Many islands are either volcanic in origin or result from tectonic activity. If Tim Burton were to design a baseball, it might resemble our earth with the stitched seams corresponding to mid-ocean ridges and subduction zones that delineate the tectonic plates. Where the plates spread at divergent boundaries, hot spots of lava can force their way up to form islands like Iceland and Ascension (in the south Atlantic). Where plates collide and one dives beneath the other, the subducted plate melts as it is forced deeper. The resulting magma rises and forms chains of volcanos arrayed along arcs on the edge of the opposing plate, like the Aleutian and Japanese Islands.
Another type of island chain can form as a plate is pushed over a hot spot and magma periodically bubbles through and cools. The Hawaiian Islands are an example of a hot spot chain. Kauai is about 5 million years old, while the Big Island is only about 1 million years old. If you are looking for a very long-term real estate investment, another island is beginning to rise to its southeast...
As a child, I remember being intrigued by descriptions of Surtsey, a volcanic island that emerged from the north Atlantic Ocean near Iceland in 1963 (coincidentally, the year I was born). The primal nature of newly ejected hot lava, cooling and eventually making new habitat for many life forms was compelling and full of possibility.
Later I learned of the 1883 explosion of Krakatoa and its subsequent recolonization by life and began to further appreciate not only islands' potential for rebirth but also their fragility and their potential to bring about climate change.
Whether through explosions or mere eruptions, volcanic islands have played a big role in planetary and human affairs by altering weather and climates for extended periods. The explosion of Santorini (about 100 km north of Crete) in 1600 BC was 100 times larger than Krakatoa and is thought to have given rise to the stories about the destruction of Atlantis and/or may have triggered the unusual events chronicled in Exodus. The eruption of Hekla 3 in Iceland during 1150 BC led to ashes raining down in China and corresponded to a 90% population decrease in the British Isles. The eruption of Mt. Etna on Sicily in 42 BC was well documented by the Romans. In 1783, Laki erupted in Iceland and ejected enough sulfur to choke victims in Europe, perhaps leading to the death of 20,000 people in Britain (de Castella 2010). There was a major eruption of Mt. Asama in Japan in 1783. Together, these two eruptions led to unusual weather that may have precipitated the French Revolution. The explosion of Tambora in Indonesia in 1815 caused 1816 to become "the year with no summer." Krakatoa brightened sunsets for years after its explosion. More recently, Mt. Pinatubo's eruption in 1991 led to a global cooling of about 0.5 degrees for a couple of years. The April 2010 eruption of Iceland's Eyjafjallajökull volcano closed European airspace for nearly a week.
Volcanic eruptions are of course natural events, but they tell us much about the effects of lofting millions of tons of particulate matter and gases into the upper atmosphere, much as we are doing by burning fossil fuels.
Not surprisingly, one of the most remote islands on the planet used to be nearly barren. Ascension Island's utility as a British strategic naval base in the south Atlantic was restricted by its limited fresh water supply. In a little-known story, Charles Darwin and his friend the botanist Joseph Hooker, together with Kew Gardens and the Royal Navy, worked to fashion Ascension into a more productive ecosystem (Falcon-Lang 2010). Under the scientists' guidance, the Navy planted many different species of trees from the garden, and as they took root and grew they began to capture rain while reducing evaporation. In effect, they dramatically boosted the colonization rate. Like the terraforming Genesis device in Star Trek III: The Search for Spock, the project created a self-perpetuating ecosystem. Today, Ascension is home to an artificial cloud forest that was assembled from a pan-global selection of plants over just a few decades.
From one perspective Darwin and Hooker's plan could be seen as the height of imperial hubris. From another perspective, this island story is quite literally life-affirming. No matter how badly we mess up our island, with a little encouragement, life will somehow find a way to come back.
Like it or not, we humans as a species have become a biogeophysical force. We started small by deforesting islands like Easter Island. Later we caused extinctions on islands by over-harvesting (e.g., the dodo on Mauritius), or introducing invasive or predatory species to them (the introduced brown tree snake on Guam is responsible for the extinction of twelve bird species). We seem to be excelling at turning once continuous habitats into isolated, fragmentary islands of habitat. As these habitat islands wink out due to warmer temperatures, real oceanic islands may disappear under the waves as sea levels rise, glaciers and icepacks melt, and large storms increase in magnitude and frequency. Islands formed from coral reefs may begin to dissolve as increasing levels of carbon dioxide begin to acidify the oceans. Zoological and botanical collections from these islands will remain in museums, and like the legend of Atlantis, be reminders of both their possibility and fragility. We should use the first two (cautionary) tales offered by islands, namely the hazards of fragmentation and atmospheric modification, to avoid having to resort to the measures of the third.
Tom de Castella. 2010. The eruption that changed Iceland forever. BBC News. 16 April. http://news.bbc.co.uk/go/pr/fr/-/2/hi/uk_news/magazine/8624791.stm
Howard Falcon-Lang. 2010. Charles Darwin's ecological experiment on Ascension isle. BBC News. 1 September. http://www.bbc.co.uk/news/science-environment-11137903
Al Gore. 1992. Earth in the Balance: Ecology and the Human Spirit. Houghton-Mifflin, Boston, MA.
Robert H. MacArthur and Edward O. Wilson. 1967. The Theory of Island Biogeography. Monographs in Population Biology. Princeton University Press. Princeton, NJ.
John Steinbeck. 1995. The Log from the Sea of Cortez. Penguin Books. New York, NY.
Simon Winchester. 2011. The Scariest Earthquake is Yet to Come. Newsweek 13 March.
June 27, 2011
Life on a pillar: environmental thought and the odor of sanctity
by Liam Heneghan
The saint on the pillar stands,/The pillar is alone,/He has stood so long/That he himself is stone. Louis MacNeice, Stylite, 1940 [i]
In Moby-Dick; or, The Whale, Melville’s anachronistically recognized ecological masterpiece, a calculation is presented that on a three or four year voyage a seaman manning one of the mast-heads of a whaleship would spend several entire months aloft on his pillar above the ship. A whaleship like the Pequod, Ishmael informs us, was not provided with a crow’s-nest as was the case with the Greenland ships – the mast-man on the southern whaler was exposed to the elements and to the mesmerizing crawl of the oceans far below him. Our narrator cautions the ship-owners of Nantucket to be especially wary of taking on philosophical lads given to “unseasonable meditativeness”. Whaling could be an asylum for romantic souls, youngsters that are “disgusted with the carking cares of earth”. The cost could be high. Such a youth can lose his identity in his ocean reverie and “[take] the mystic ocean at his feet for the visible image of that deep, blue, bottomless soul, pervading man and nature…” In such a meditation one misplaced step and “your identity comes back in horror” and perhaps “with one half-throttled shriek you drop through that transparent air into the summer sea, no more to rise for ever.” Ishmael concludes the observation thus: “Heed it well, ye Pantheists.” By which I take it that he is talking to dreamy youth and latterly to us environmentalists.
In chronological sequence Melville mirthfully compares the solitary, watchful, deprived life on the mast to that of other motionless dwellers, starting with Egyptians who climbed the pyramids to gaze at the stars and concluding with stone or metal men atop columns, figures unresponsive to the beseeching yells of those below them, that is, statues of Washington, Napoleon and Nelson. Included in this evolutionary sequence – for the land-locked lofty paved the way according to Melville to maritime mast-men – is Saint Stylites of whom he says “in him we have a remarkable instance of a dauntless stander-of-mast-heads…[he] literally died at his post.”
A helpful footnote in my copy of Moby-Dick declares Melville’s entertaining claim about pyramids as astronomical pillars implausible, and of course, statues, though they may remain impressively motionless for quite some time, have the benefit of being lifeless[ii]. In Melville’s roster, Saint Stylites stands out, so to speak, having spent almost forty years on his pillar.
About him I have a few things I’d like to say.
Just as Melville’s masterpiece can retrospectively be read as an ecological classic – a tale of resource consumption; a disquisition on our relationship with something upon which we monomaniacally depend and which will also be the death of us: I speak here of nature – there are things we can learn from the asceticism of Simeon Stylites valuable to us as environmentalists. The magnetic force of an ascetic impulse that drew the Stylite up the pillar, and that skewed the balance of his life towards denial rather than affirmation, also draws environmental writers to their proverbial mountain tops, and oftentimes swerves our environmental instincts towards chastisement rather than celebration. The cooler air on the pillar-top and on the piney mountain trail is languidly scented with the odor of sanctity. Saint Simeon’s life is so brutal, so macabre, that a close reflection is self-revelatory in the way that microscopy turned on the human body exposes within us both the teeming good and the pathologically bad.
Simeon Stylites installed himself on a pillar constructed on a site of his choosing near Antioch, Syria, and lived there for thirty-six years until his death in 459 AD. This can be regarded as one of the more terrifying historical examples of a modest ecological footprint. Simeon remains a revered saint, though it is clear that he shocked many of his contemporaries. Today he serves as an example of the bewildering nature of the early Christian ascetic impulse. Nevertheless, his self-renunciation was so extreme and his self-mortification so unsavory that most modern commentators disavow him. To suggest that the modern environmental movement shares this same ascetic impulse may seem gratuitous. I try to show that the comparison is useful, and do so not in a bid to scupper environmentalism (I am, in fact, a committed environmentalist) but rather to contribute to a more honest discernment of our environmental motives.
I start by recounting in modest detail the extraordinary and ghastly details of Simeon’s life.[iii]
Simeon was born in 388 AD in Sis near the northern border of Syria in what is now modern Turkey. His early interest in Christianity was stimulated, some say, by hearing a talk on Jesus’ Sermon on the Mount. He entered into monastic life quite young, perhaps around the age of sixteen. Asceticism was especially prevalent in Syria in early Christian times where eremitic monasticism (solitary anchorites) was more common than in Egypt where coenobitic, that is communal forms of monasticism were favored. Accounts of Simeon's initial feats of austerity and the responses of his fellow monks remind us that he was extreme at a time when spiritual rigor was already quite pronounced. In addition to more conventional forms of asceticism (fasting, sleep deprivation, standing for lengthy periods, and not washing), he invented a range of self-mortification techniques that put him in an ascetic class of his own. For instance, when others in the community finished their nocturns he would hang a heavy stone around his neck as penance while his brothers slept. One night he fell asleep with this apparatus about his neck and injured his head. To prevent this from happening again, he procured a “certain round piece of wood” which would roll from beneath him if he nodded off. [iv] In addition to the asperities already mentioned he also innovated by tying a rough fiber around his waist (in one account, it was the rope from the monastery’s well that he wore) which abraded the skin and produced noisome smells, and had him shedding worms into his bed.
Many of the stories told about Simeon can be classified as hagiographic nonsense. For instance, he was challenged by some of the monks to test his faith and trust in God by grasping a red-hot poker, which he did without harm to his hands. Perhaps the moral of the story is that what protected him from incinerating his hand was that “he despised them (i.e. his hands).” Even his abbot, to whom his chagrined and apparently jealous brothers complained, found his fervor disconcerting (though the community may have been irritated by his flouting of the monastic rule; indeed, more simply it may have been the smell of putrefaction that so disconcerted them). When the abbot asked the youthful Simeon to account for the vigor of his practice the young monk replied, quoting scripture: "Behold, I was brought forth in iniquities, and in sins did my mother conceive me" (Ps. 50:7).
Ultimately Simeon was forced out of the monastic community and became a hermit living for three years in a hut at Tell-Neschin. There he spent the whole of Lent without eating or drinking, a practice that became habitual for him. He broke his Lenten fast with the Eucharist host which returned him to vigor. Another austerity from this period was standing in prayer for as long as his legs could hold him. He perfected this, and the claim is that he would stand in prayer for the duration of Lent. From the hut in Tell-Neschin he moved to a rocky platform near Antioch and spent five years standing there. After this he moved to his series of pillars. His first pillar was nine feet high, but it was replaced by a series of others, each taller than the last. Ultimately, the progressively ascending Simeon lived fifty feet or so from the ground and was visible throughout the region, attracting a large congregation of the faithful and the curious.
The list of his spiritual services performed from the rocky platform and from his successively more prodigious pillars is a long one; harlots were transformed into vessels of virtue, the blind saw the light, hunchbacks were straightened, heathens were converted to Christianity, lepers were healed, the exsanguinating possessed were relieved of their demons. All the while our hermit is strenuously attacked by satanic forces which came in all forms, including that of a lustful camel!
One final nauseating story: as the “king of the Arabs” (more correctly, a Saracen) approached our saint’s pillar, a worm fell from a necrotizing tumor on Simeon’s thigh and the king picked it up. He touched it to his eyes and heart. The saint declared, appropriately enough, that it is “a stinking worm, fallen from stinking flesh” and in consternation asked why the king was soiling his hands. The king however regarded the worm as a blessing and on opening his hand found the worm transformed into a pearl. This allegory prompts us to ask how we might manufacture a pearl from the tortured life of Simeon. What is the meaning of all of this? What general principles can be deduced?
Ascetic deprivation is a price paid in flesh for metaphysical rewards
Simeon turned his back on this world so that he could gain access to that other world: a heavenly one with the angels. In his early monastic life Simeon submitted to the coenobitic rule of the house (though not without chafing at the rule as we have seen), praying in common, celebrating the Eucharist together – the typical trade of earthly freedoms for heavenly reward. The pillar was something different. It is hard not to see in the pillar a more direct emulation of Christ’s passion. The pillar can be seen as representing the mountainous heights of Christ in the wilderness and the ultimate stasis of Christ on the cross – an emulation that one can term “the prophecy of behavior”, a term coined by Professor Susan Ashbrook Harvey of Brown University to illustrate the significance of Simeon’s actions as powerful in their symbolism.[v] Simeon on the pillar can be seen as an aggressively literal form of standing before God. In his introduction to the translation of the lives of Simeon, Robert Doran locates this practice within the exercises of Gnosticism[vi]. Gnostics, Doran reports, have been referred to as “the immovable race”. Standing before God results in what is termed “immovability”, achieved by means of a visionary ascent to the transcendent realm. For this removal to the heavenly realm Simeon acquitted his debt with ulcerated feet and maggoty flesh. The suggestion is not, I think, that Simeon was a Gnostic; it is just that in his ascetic ascent and his aggravated immobility, he reinvented gestures that hitched him to another world beyond the tears and tribulations of ordinary mortal cares. Asceticism is reproduced both by emulation and by the types of intuitive rediscovery found in the life of Simeon.
We know of Simeon through what was written about him by his contemporaries and those who came after him, but other than the few snatches of conversation reported by his biographers (often regarding his worms, it might seem) we do not have his direct account of what motivated him. A clue, though, from the Antonius biography: as a youth in church Simeon inquires of an old man about what is being read and learns that it concerns “the control of the soul”. Pressing his elder further, he is told to:
“reflect on these things in your heart, for you must hunger and thirst, you must be assaulted and buffeted and reproached, you must groan and weep and be oppressed and suffer ups and downs of fortune; you must renounce bodily health and desires, be humiliated and suffer much from men, for you will be comforted by angels.”[vii]
Asceticism relies upon the acquisition and application of expert knowledge
Ascetics are called to a special vocation – the life in the desert is not everyone’s cup of tea. Thomas Merton, a monk and occasional anchorite of more recent times, writes of the special nature of desert hermits’ lives in the early Christian centuries in the introduction to “The Wisdom of the Desert”, his slim but compelling volume of the sayings of the desert fathers.[viii] Those more loquacious fellows had more to say than Simeon about the application of spiritually expert knowledge towards the end of achieving closeness with God. A dramatic account of the purpose of ascetic knowledge is given by Abbot Joseph: when Abbot Lot asked him what he should do in addition to keeping the rule, and applying himself to prayer and contemplative silence, Abbot Joseph rose, his hands extended towards the heavens and his fingers “became like ten lamps of fire.” He said: “Why not be totally changed into fire?”[ix]
Merton calls the wisdom of the desert “a very practical and unassuming wisdom that is at once primitive and timeless.”[x] This wisdom concerns self-discovery regarding the spiritual journey – discoveries that Merton describes as “more important than any journey to the moon.” The wisdom of the desert is simple in philosophy but is quite voluminous: I will give just a few examples. Abbot Hyperichius instructs that it “is better to eat meat and drink wine, than by detraction to devour the flesh of your brother.”[xi] Less obscurely, Abbot Pastor said that “a life of ease drives out the fear of the Lord from man’s soul and takes away all his good work.”[xii] Again, Abbot Pastor: “[if] you want to have rest here in this life and also in the next, in every conflict with another say: Who am I? And judge no one.” Perhaps you had to be there.
A more technical account of ascetic wisdom can be found in the Philokalia, a collection of texts written from the 4th to 15th centuries, deemed especially important in Eastern Orthodoxy.[xiii] There, a more complex theological lexicon is employed. In order to achieve the end of “being comforted by angels”, or achieving a greater closeness with God, the desert father marshals the following skills: “discrimination”, the spiritual gift of discriminating between the types of thought entering the mind, with the purpose of achieving “discernment of the spirits” – which thoughts come from God and should be cleaved to, and which from the devil; “intimate communion”, the freedom of approach to God; “Watchfulness”, a state of attentiveness where one carefully watches over one’s inward thoughts and fantasies – the state is linked with purity of heart and the rigorous application of the virtues and results in stillness (hesychia) in which one listens to God and can open to Him.
Ascetic deprivation secures a measure of temporal power
The hagiographical exuberance of Simeon’s vitae with their massive iteration of Simeon’s improbable miracles becomes tedious in its pietistic adulation; nevertheless the examples testify to the intercessionary power of our saint, and provide a roster of critical community needs. Surrounding Simeon on his pillar was a fairly dense agricultural population, reliant on reliable irrigation systems. This was a community concerned about disease, drought, crop productivity, and the depredations of large predators. A saint should be able to regulate the elements and master nature.
The equations of ascetic algebra typically balance the significant intercessionary power of the holy man against the self-mortification of his body. Great power is equated with great corporeal contempt. One wins a spiritual war not by inflicting the most violence, but by sustaining the most damage. For Simeon to accumulate the reputation that he did, one should expect staggering penance of his flesh. And this, as we have already seen in part, is what we find.
Ironically, each incremental rise of Simeon on his pillars, motivated, according to some biographical authorities, by a desire to get away from the throngs and closer to an airy solitude, increased his visibility and attracted more onlookers. Nevertheless, Simeon served this community through his miracle-working, and his fame and influence spread throughout the Christian world.
The ascetic then is marked by i) a commitment to rewards in another realm, by ii) the deployment of an expert’s knowledge in achieving esoteric goals, and by iii) the achievement of certain temporal authority, despite the ascetic’s declared intent. My list is illustrative rather than exhaustive. The problem to which asceticism is the proposed solution is solved by a suite of regularly recurring behaviors that we should also note – an initial departure followed by a commitment to immobility in another place; a rejection of civilization, through a commitment to a new rule; a disdain of the city; physical austerity; a preference for raised ground, though ascetics often start their career at lower altitudes (Simeon, for instance, lived down a well for a while after leaving the monastery).
If asceticism was simply a matter of self-mortification then we could claim that we have never lived in more ascetic times. We diet to shed those dozens and dozens of unsightly pounds; some voluntarily submit to a surgical ablating of the flesh for the purposes of fabricating the perfect nose; our star athletes allegedly undergo a period of sexual continence before the big game; some of you may even gallop on scorching days for distances in excess of twenty-six miles, for no better reason than to replicate the achievement of the first person to die from that feat. And in general terms the definition of the ascetic as a person who practices “rigorous self-discipline, severe abstinence, austerity”, might tempt us to smuggle the more excessive of these modern deprivations under the definitional bar. However, the OED qualifies the definition by pointing out that asceticism’s aims are achieved “by seclusion or by abstinence from creature comforts”. Furthermore, the term derives etymologically from the Greek asketikos, meaning monk or hermit; more generally the root term is ascesis – the practice of self-discipline, or exercise. If, in the final analysis, the contemporary mortifications listed above seem to fall short of being ascetic, why might we, in contrast, regard environmentalism as fundamentally so?
To use the life of Simeon Stylites as a point of comparison with environmental thought and practice may be a challenging place to start to make a case that environmentalism is foundationally ascetic. Certainly there are more temperate ascetics, ones who, like St Antony of Egypt (251-356 AD), traveled to the wilds to meditatively dally, but after decades alone returned to society, at least in the sense of taking many disciples under his care. In other words, there are ascetics whose practice might be more appropriately compared to Thoreau’s sojourn at Walden Pond. Perhaps one might compare tree-stylites like John Muir perched in a storm-tossed Douglas Fir or Julia Butterfly Hill residing in her California Redwood to the ascetic sadhus of India, who, practicing what is called urdhamukhi, dangle out of trees. In the case of Hill, she lasted two years; as for Muir and the sadhus (the latter dangle upside-down), their tree dwelling lasted a matter of hours. And so on; one might look for a milder ascetic counterpart for Robinson Jeffers’ dyspepsia concerning his fellows, preferring, you’ll recall, to “sooner, except the penalties, kill a man than a hawk”; one for Ed Abbey’s hilarious but curmudgeonly defense of inaccessibility for Arches National Monument in Desert Solitaire; one for Paul Ehrlich’s discomfort in an ancient Indian taxi (“People visiting, arguing and screaming…. defecating and urinating”) prompting his writing of The Population Bomb; counterparts even for the simple-living needed for ecological footprint reduction, for the belt-tightening required by sustainability, and for the meat-eschewing dicta of environmental vegetarianism. In all of these examples there is a whiff of asceticism but none requires the foot-ulcerating commitment of standing on a pillar for decades. So why Simeon?
As we have seen most definitions of asceticism are vague to the point of admitting too many members into the ascetic fold – skipping a meal or two does not the ascetic life make. The vitae of Simeon Stylites, however, distill his life to the point where there is little to notice other than ascetic fervor. As discussed, the examination of his life allowed us to enunciate some principles, and to register the suite of dispositions associated with the ascetic. These included a commitment to rewards in another realm, a deployment of an expert’s knowledge in achieving esoteric goals, and the gaining of certain temporal authority, often despite the ascetic’s declared intent. The dispositions include a departure from “home”, followed by a commitment to immobility in another place, a rejection of civilization which is typically accompanied by a disdain of the city, often physical austerity, and a preference for raised ground. The life of an ascetic is the life of critique. In this we not only see the odd particulars of our saint’s life, but also, I think, if one squints a little, the life of the environmental movement.
Space prohibits a full treatment here of how the ascetic drive underpinning environmental thought and action unfolded over the past century or so. Using the principles and dispositions just enunciated, some of this should be fairly obvious; other points are more obscure. Sustainability measures, fairly obviously, call (justly) for a deferment of pleasures right now, for an equitable world in the future; Paul Shepard and David Abram mourn the passing of the Pleistocene or indigenous worlds; nature-lovers almost everywhere incline towards inhospitable places; John Muir, Henry David Thoreau, Ed Abbey, Charles Darwin (even): all left, though some returned to tend their flocks; the mountains beckon to Gary Snyder, David Brower, and to Arne Næss; Garrett Hardin, Paul Ehrlich, and Bill McKibben all demand reproductive self-limitation; Rachel Carson, Terry Tempest Williams, and Al Gore are outraged by what our times have wrought; eight biospherians spent two years in the bubble of Biosphere 2 (like Simeon they had their support "disciples"); Aldo Leopold and Martin Heidegger had a great fondness for the nostalgia of shack-dwelling. And those not in shacks prefer, like Melville's mast-men, and Simeon, life en plein air – leave absolutely no one inside!
And I agree with them all, in many ways at least. My point, and it seems curiously feeble to me to say it, is not that the ascetic impulse is always wrong, though most contemporary writers disapprove of Simeon’s vigor, or that environmental thought is wrong when it tends towards asceticism (it certainly is not, but our priorities need to be refined). Rather, I am interested in a more straightforward accounting of the motivations and the behavioral reflexes of environmentalism – where it is ascetic let us call it so; and when our ascetic impulses lead us astray let us reconsider. At its worst the ascetic disposition of environmental thought has translated into calamitous action – for instance, inhumane population policies, unjust removal of peoples from their traditional lands. Less tragically, but still detrimentally, the comfortableness of the environmentalists’ ascetic disposition coaxes the “eco-cete”, the everyday ecological monk, into an unbalanced preoccupation with conservation in wilderness areas, a neglect, until quite recently, of the city as a site for conservation, an often ruthless demarcation of the human from the wild, a nostalgia for worlds that have passed if they ever existed at all, a great nausea towards domesticated humanity – that is, most of us – an over-confidence in an expert knowledge of the natural world, a puzzling relationship with technology, and finally (for now) a snooty disdain of those who cannot articulate their environmental convictions in the professional lingo of the movement.
Now, an objection to my claim (one of many, no doubt) may be that there is, quite obviously, no direct link between the life of Simeon and other pillar saints and the mainstream of environmental thought. However, the ascetic impulse is an ineradicable component of who we are – the human without some ascetic impulse (even if it is expressed in a diminished key) has not been born. We do not simply copy ascetic gestures, we all seem capable of ascetic innovation. In some movements – religious, philosophical, environmental – they may simply express themselves more blatantly. To illustrate the idea that ascetic gestures can converge, consider this. There is evidence of a phallus-worshipping cult in Northern Syria sometime before Simeon’s time and centered about 180 km east of Simeon’s pillar. According to the Greek author Lucian, men would climb the phalli two times a year for a period of a week and “commune with the gods” and bring good fortune to the community. Though the period aloft was not reckoned in years, nevertheless the phallus dweller remained awake for the duration. If he falls asleep, a scorpion climbs up the column and “treats him unpleasantly.” [xiv] So, long before Simeon’s time worshipful clambering up phalli was commonplace. This has led some to suggest that his ascetic practice was merely an emulation of pagan practice. Several Simeonists are outraged and take pains to deny the connection. The issue is moot from my perspective. It seems that when a saint sees a phallus or a pillar he knows just what to do. That, my friends, is the ascetic impulse. And if environmentalists are up there with them, hoisted up their own proverbial pillars, at the very least the view should be clear; it may be time for us to clamber on down, and lead the community as many ascetics have also done.
[i] MacNeice, Louis (1940) Stylite, Poetry, Vol. 56, No. 2, p. 68
[ii] Melville, H. (2001) Moby-Dick, W. W. Norton & Company, Second Edition
[iii] There are three accounts of Simeon’s life available: one written by Theodoret, Bishop of Cyrrhus, a contemporary of our saint, one by his disciple Antonius, and the so-called Syriac Text, the longest account of his life. Translations are available in a convenient volume by Robert Doran (1989, The Lives of Simeon Stylites, Cistercian Publications). There are conflicts between the accounts and not all of the stories are shared between all of them. Indeed, there are some accounts in the broad literature on Simeon that I draw on but which may not be canonical.
[iv] Lent, Frederick (1915) The Life of St. Simeon Stylites: A Translation of the Syriac Text in Bedjan’s Acta Martyrum et Sanctorum, Vol. IV, Journal of the American Oriental Society, Vol. 35, pp. 103-198
[v] Harvey, S. Ashbrook (1988) The Sense of a Stylite: Perspectives on Simeon the Elder, Vigiliae Christianae, Vol. 42, No. 4, pp. 376-394
[vi] Doran, 33.
[vii] Doran, 88.
[viii] Merton, Thomas (1960) The Wisdom of the Desert, New Directions
[ix] Merton, 50.
[x] Merton, 11.
[xi] Merton, 32.
[xii] Merton, 62.
[xiii] Palmer, G.E.H., Sherrard, Philip, Ware, Kallistos (translators) (1979) The Philokalia: The Complete Text (Vols. 1-4); compiled by St. Nikodimos of the Holy Mountain and St. Makarios of Corinth. My paraphrasing of the definitions of technical terms relied upon the glossary from these volumes.
[xiv] Frankfurter, David T. M. (1990) Pillar Religions in Late Antique Syria. Vigiliae Christianae, Vol. 44, No. 2, pp. 168-198
June 20, 2011
Just Right Goldilocks
by Wayne Ferrier
In the constellation of Libra is Zarmina’s World, the first habitable planet discovered outside our own solar system. Zarmina’s World orbits Gliese 581, a red dwarf star that is about a third the mass of our sun. It's about 120 trillion miles away, which in the scheme of things is right smack in our neighborhood. Using current technology, it would only take us several generations to make it there—not outside the realm of our current capabilities. The two scientists who discovered Zarmina’s World, Steven Vogt and Paul Butler, estimate that as many as one out of every five or ten stars in the universe might have Earth-like planets in the habitable zone. With an estimated 200 billion stars in the Milky Way alone, that could mean as many as 40 billion planets that could potentially harbor life. However, just how common these Earth-like planets really are in the Milky Way is still very speculative.
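To make the back-of-the-envelope arithmetic concrete, here is a minimal sketch using only the figures quoted above (the one-in-five-to-one-in-ten frequency and the 200 billion star count):

```python
# Back-of-the-envelope estimate of habitable-zone planets in the Milky Way,
# using the figures quoted in the text.
stars_in_milky_way = 200e9                       # estimated number of stars
frequency_low, frequency_high = 1 / 10, 1 / 5    # Vogt and Butler's range

low = stars_in_milky_way * frequency_low
high = stars_in_milky_way * frequency_high
print(f"{low:.0e} to {high:.0e} potentially habitable planets")
# -> 2e+10 to 4e+10, i.e. roughly 20 to 40 billion
```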
Temperatures on Zarmina—for convenience’s sake let’s call it Zarmina—get as hot as 160 degrees and as cold as 25 degrees below zero, but in between “it’s shirt-sleeve weather," says co-discoverer Steven Vogt of the University of California at Santa Cruz. And the low-energy dwarf star Gliese 581, Zarmina’s sun, ought to continue to shine for billions of years, a lot longer than our sun will, which greatly increases the likelihood that life could develop there.
It's unknown whether there is water on Zarmina, or what kind of atmosphere it actually has. But because conditions there are ideal for liquid water, and because there always seems to be life on Earth where there is water, a lot of excitement is being generated about the discovery of this Earth-like planet. But that’s the catch—does it have liquid water and the kind of atmosphere that would make it truly habitable?
Astronomers like to use the term “Goldilocks zone” to designate the region around a star that is neither too close nor too far, so that liquid water can exist on a planet’s surface. So far we only know of six Goldilocks planets, and three of them orbit Gliese 581. Two of those had shown promise, but one turned out to be too hot and the other too cold. “The [third] one bracketed right in the sweet spot in between,” Vogt said. “It's a beautiful planet," so he named it after his wife; unofficially, of course. The other three planets orbiting in a known Goldilocks zone are—you guessed it—Venus, Earth, and Mars. And like the Gliese 581 system, Venus is too hot and Mars too cold.
Mars Too Cold
Is Mars really too cold? Well, it’s a tad more complicated than that. Mars could be a rather decent place to live if he were a bit bigger, if he were geologically active, if he had a denser atmosphere and a functional magnetosphere. But because of its small size the planet cooled prematurely and shut down; volcanic eruptions slowed to a simmer, dampening most volcanic outgassing and switching off the planetary dynamo. Although Mars today has no structured global magnetic field, observations do suggest, however, that parts of the Martian surface crust have been magnetized, and that alternating pole reversals of its dipole field occurred in the Martian past. But because good ole Mars cooled some four billion years ago and switched off his magnetosphere, that obnoxious solar wind now interacts with the poor Martian ionosphere, reducing the tiny planet’s atmospheric density by stripping away his atoms, one by one, and sending them flying into space. The surface pressure of Mars is equal to the pressure found twenty miles above Earth's surface—less than 1% of our Earth's surface pressure. If Mars were a bit bigger, he might have retained his internal heat longer, and this internal heat would still be driving crucial geophysical processes, such as volcanic outgassing, which help build up and then maintain a dynamic hydrosphere. A larger size would also have helped Mr. Mars retain his atmospheric gases, which are now being lost to space. Liquid water cannot exist on most of the Martian surface because of this low atmospheric pressure. Mars’s two polar ice caps, however, do appear to contain a fair amount of water. It’s been estimated that if the water-ice in the south polar ice cap alone melted, the entire Martian surface would be flooded by 36 feet of water! But instead of being the warm, wet world it could be, Mars is a cold desert, with a thin atmosphere and its surface water locked in permafrost.
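The 36-foot figure is easy to sanity-check. Here is a minimal sketch, assuming the commonly cited radar-based estimate of roughly 1.6 million cubic kilometers of water ice in the south polar deposits and a mean Martian radius of about 3,390 km (both values are assumptions, not from the essay):

```python
import math

# Rough check of the "36 feet of water" claim for Mars's south polar cap.
# Assumed values (not from the essay): ~1.6 million km^3 of water ice and
# a mean Martian radius of 3,389.5 km; ice-to-liquid expansion is ignored.
ice_volume_km3 = 1.6e6
mars_radius_km = 3389.5

surface_area_km2 = 4 * math.pi * mars_radius_km ** 2
depth_m = ice_volume_km3 / surface_area_km2 * 1000
depth_ft = depth_m * 3.28084
print(f"global layer: {depth_m:.1f} m, about {depth_ft:.0f} ft")  # ~11 m, ~36 ft
```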
Venus Too Hot
Venus is Earth’s evil twin: both planets have a similar mass, volume, and distance from the sun. But while Earth is almost paradisiacal, Venus went a little crazy, and she more closely resembles Hell than Heaven.
Exactly how and why Earth and Venus turned out so different still perplexes the scientific community. The average surface temperature on Venus is 860 F—hot enough to melt lead—and Venus has a crushing surface pressure, equivalent to the pressure found nearly 3,000 feet below the surface of Earth’s oceans. And it rains sulfuric acid there, but it’s so hot the rain evaporates before it ever reaches the ground!
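That depth equivalence is straightforward to estimate. A minimal sketch, assuming the standard figure of about 92 bar for Venus’s surface pressure (an assumption not stated in the essay):

```python
# How deep in Earth's ocean would you have to go to feel Venus's surface pressure?
# Assumed values (not from the essay): ~92 bar at the Venusian surface,
# seawater density ~1025 kg/m^3.
venus_pressure_pa = 92 * 1e5      # 92 bar in pascals
seawater_density = 1025           # kg per cubic meter
g = 9.81                          # m/s^2

depth_m = venus_pressure_pa / (seawater_density * g)
print(f"{depth_m:.0f} m, about {depth_m * 3.28084:.0f} ft")  # ~915 m, ~3,000 ft
```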
Because Earth and Venus share a similar size and shape, scientists assume they probably have similar planetary structures such as a core, a mantle, and a crust. But unlike Earth, which generates a strong magnetic field, Venus has only a weak magnetic field.
Like Mars, Venus may have been similar to Earth in her younger days. Researchers think that Venus may have possessed an ocean or two, but not anymore. So what happened? This miserable planet probably lost her H2O because she lacks a robust magnetic field, and this weak little field may have allowed hydrogen to leak from the planet’s atmosphere. In this possible scenario, water molecules floating around in Venus’s upper atmosphere would be broken down into basic hydrogen and oxygen by the ultraviolet rays from the sun, and the lighter hydrogen carried off by the solar wind, leaving the heavier oxygen behind. In this way Venus may have been robbed of her water. That would also explain her lack of plate tectonics, considering that water is a key component of a successful plate tectonic system like the one we have here on Earth.
The weak magnetic field could be responsible for Venus’s hot temperatures as well. With Venus losing hydrogen all the time, all the free oxygen in the atmosphere would pair up with carbon instead, making a lot of carbon dioxide. Venus’s atmosphere is about 96 percent carbon dioxide—Earth’s is a mere 0.04 percent. If you have learned anything about greenhouse gases and global warming, a 96 percent carbon dioxide atmosphere should clue you in on why Venus is such a hellish place.
Our wayward sister's lack of a strong magnetic field is surprising given that Venus is only a few hundred miles smaller than Earth. But a good, functioning dynamo has three requirements: a conducting liquid, sufficient rotation, and convection. Venus’s core is thought to be electrically conductive, and while its rotation is often thought to be too slow, simulations hint that, however slow it might be, it may be enough to drive a dynamo. This leads us to suspect that what the dynamo lacks is convection in our sister’s core. On Earth, core convection occurs in the liquid outer layer because the bottom of that layer is hotter than the top. On Venus, a global resurfacing event may have shut down her plate tectonics and led to a heat flux reduction throughout the crustal layer. This might then have caused her mantle temperature to increase significantly, reducing the heat flux out of the core. The result: no dynamo! Instead, heat energy is being used to reheat the crust. But if all this is true, it is probably the result of a lack of surface water. Which came first? Did our twin sister lose her robust dynamo because she lost her water, or did she lose her water because she lost her dynamo? We need to learn more about Venus. I do believe there may be no other place, besides the Earth itself, that can teach us more about ourselves than this world can.
The two primary factors that appear to make a planet habitable are its size and its distance from its star. Distance from the sun and an atmosphere help regulate surface temperature. For habitability, surface temperature needs to be within the range where liquid water can exist on the surface. Both factors, size and distance, contribute to climatic stability on cozy planet Earth. Unlike Mars, Earth's relatively large size allows her to retain much of her internal heat, which drives a very active geology. Plate tectonics then takes carbon and other elements that would otherwise be trapped on the surface and recycles them back into the hydrosphere. The greenhouse gas carbon dioxide is a major thermal regulator here on the planet.
About two-thirds of the air in our atmosphere lies within 10 kilometers of the surface. This atmosphere helps protect us from short-wavelength radiation coming from the sun. Carbon dioxide, methane, and water vapor in our atmosphere keep the Earth relatively warm. Without this greenhouse effect, the average surface temperature on Earth would be well below the freezing point of water.
In addition, Earth has a strong magnetosphere which deflects most of the charged particles of the solar wind. In the absence of a protective magnetosphere, the solar wind can strip planets of their vital atmospheric gases. I am really very excited about the possibilities extant in Gliese 581 in the constellation Libra. I like the juxtaposition of the Goldilocks worlds: Zarmina and her two wayward siblings, and Earth and her two wayward siblings. Comparing and contrasting what we know and what we have yet to learn is going to teach us more than we ever could have imagined.
June 13, 2011
Writing for Machines
by James McGirk
Writers are anxious about the Internet and all things electronic, as we worry these newfangled ways of entertaining ourselves might someday render our own work obsolete. The solution, perhaps, lies in understanding and adapting to this new medium: consuming enough of it that we can master its complexities and render appealingly intelligent confections for our readers. But who are these readers? Are they different online than they are in print? Some of them aren’t even human. There is a new form of reader browsing the Internet. For this is no longer just the age of mechanical reproduction; we now have to contend with mechanical readers as well.
William Gibson, who coined the term “cyberspace,” imagined it as a mass consensual hallucination, rendered as a cityscape, the prominence of each shape on the horizon an index of how much data was passing through a single point; a point which in 1982 a reader might have thought of as a mainframe computer, and which today, nearly thirty years later, we might identify as a web address or site. On Gibson’s Internet, Google would glow the brightest, soar the highest; be an Empire State Building to the Internet’s Manhattan. Most users don’t look at the Internet by volume, however; they read it pane by pane, navigating from bookmarks or through searches, feeding keywords into an ‘engine,’ a series of algorithms, to retrieve lists of linked addresses to the information they seek. These lists are customized to the user, the results tweaked by the user’s location and previous searches. The more searches you make, the more information about yourself you reveal, and the more customized the experience becomes.
From a content provider’s point of view (as opposed to a more passive content user’s point of view) an ideal Internet browser might render something close to Gibson’s landscape of crystalline data sculptures, were there a way to capture such information in real time. But commercial users would rather see traffic than the mere throughput of bits and bytes. Who consumes what information, when, and why is much more important to commerce than mere bandwidth. Though online sales have grown to become big business, the Internet remains a popularity contest. The real currency of the online world is attention. Being able to read the flow of attention online would mean mastering it, and reaping the ad money that comes along with that attention. But instead of trying to follow where everyone is going all at once, content providers are instead attempting to clone their readers’ minds.
As you navigate the Internet, the Internet – which is to say the entities using the Internet – navigate you. This isn’t a benign process. They want to learn as much about you as possible to snag your attention; not only by getting you to view content, but by diverting your time into loops of advertisements and possibly even pushing you through a point-of-sale and taking your money directly. They do so by gleaning information about you: where you go, what you search for, what type of computer you are using… Websites leave small tracking codes on your computer called cookies, and each of these is sent back with your subsequent requests, reporting data to home base.
Keywords (also known as index terms) are the most interesting and valuable traces left by users. Cookies record the terms users typed to come across a site. An entire industry has sprung up to interpret these keywords, and another to optimize content online so it can be better read by search engines (this is called Search Engine Optimization). The data they gather is a crude simulacrum of their users; an inscription of their desires for an instant. Almost like a section of brain tissue. A clue. And, en masse, a hologram of their users’ collective desires.
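To make that concrete, here is a purely illustrative sketch of the kind of trace involved. In this era a search engine typically passed the visitor's query along in the referring URL, where a site's analytics code could pick it apart; the URL and query terms below are invented:

```python
from urllib.parse import urlparse, parse_qs

# A made-up referrer URL of the kind search engines passed along circa 2011;
# the query string carries the exact words the visitor typed.
referrer = "https://www.google.com/search?q=oil+rigs+electronic+warfare"

params = parse_qs(urlparse(referrer).query)
keywords = params.get("q", [""])[0].split()
print(keywords)  # ['oil', 'rigs', 'electronic', 'warfare']
```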
All writers crave attention and respond to their readers’ desires. Charles Dickens used live audiences as focus groups for his serialized fiction. Newspapers and magazines have always had to respond to circulation numbers. Electronic texts simply speed up the process. Text online can be altered immediately. There are even advanced analytics packages that use keywords and cookies to anticipate what readers want and automatically generate ‘content’ for users in response to what they ‘perceive’ readers as wanting. Other companies use similar algorithms to assign stories to human beings. When you hear the term content farms, that’s what’s going on.
Google tweaked its search algorithms a few months ago, which trimmed back the custom-generated content that had begun to choke its search results like kudzu. But beyond the first or second page of results, it comes sneaking back and you will still find page after page of sites that copy the content of other sites, or ones loaded with all the correct terminology of whatever it is you seek, but arrayed in such a way that these phrases convey little or no meaning. As replications of our desire, these simulacra are incomplete. It would take an infinite amount of data (and a correspondingly infinite amount of time to collect this data) to accurately model a human being’s wants and desires. But machines are getting closer and closer.
There are gaps between reader and author in a traditional text too. Enormous ones. Between the platonic ideal an author holds in his or her head, the text he or she extrudes into type, and the reuptake and processing that takes place in a reader’s head, there is plenty of room for strange, unexpected effects to creep in. William Gibson described the cyberspace generated by a child’s calculator as a grey infinity, utterly empty but for a string of a few basic arithmetic equations (slim structures of liquid crystal, one imagines). This unnameable sea of grey emptiness is not neutral. It is more of a field, something we project into and allow things to assume shape within. And, distended from the platonic ideal and warped by exterior forces, these things become strange. Even arithmetic has its unexpected, subjective aspects. Many a calculator screen has been reversed to spell mild profanities.
Knowledge builds on memory, and all information builds off what we already know. Reading works by drawing parallels with memories, essentially unpacking an archive into that grey arithmetic field mentioned above and letting it take new forms. The way a machine reads, in this respect, is no different. Software has an archive of its own, a database that it is adding or subtracting from. It 'reads' by comparing its archive to a text, and then updating itself. An author can access this archive with his or her text; and the more sophisticated it is, the more he or she can manipulate it; perhaps even creating an aesthetic experience.
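A minimal, purely illustrative sketch of that loop: an 'archive' that grows with every text it reads and scores each new text against what it has already seen (nothing here comes from any real search engine or analytics package):

```python
from collections import Counter

# A toy "machine reader": its archive is a word-frequency table; it "reads"
# a text by comparing the text against the archive, then folds the text in.
archive = Counter()

def read(text: str) -> float:
    words = text.lower().split()
    # share of words the archive already "knows": a crude familiarity score
    known = sum(1 for w in words if w in archive)
    familiarity = known / len(words) if words else 0.0
    archive.update(words)  # the archive updates itself with what it just read
    return familiarity

print(read("the solar wind strips the atmosphere"))  # 0.0, nothing known yet
print(read("the wind was familiar"))                 # 0.5, partial overlap
```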
Literary forms are beginning to emerge in response to automated reading systems, searches, and databases. Online, an era somewhat akin to the pamphlet-strewn amateurism of 18th-century America is in bloom. The most exotic forms can be found on the Internet’s wild fringe, in its anonymous and pseudo-anonymous chat sites. Here there is a frantic economy of monikers, memes, and spoofed identities. In online forums such as the semi-anonymous Something Awful, users compete to create the catchiest, most innovative forms – most often an evolution of an earlier idea, name or other fragment of an idea. The best innovators become famous within their tiny little spheres. Other forums are anonymous and ephemeral – the most famous of these being the notorious 4chan /b/ ‘Random’ board – where the only recognition earned is the sheer longevity of a creation. A post can only survive as long as it is replied to. Then it is gone forever.
The best memes were once charted on the now-defunct Encyclopedia Dramatica. But now there is no reason at all to create but sheer artistic thrill. Although ‘board lore’ has developed a concept somewhat akin to ‘duende’ – a dark, nihilistic reward in the form of amusement known as ‘lulz.’
The evolution of the online literary form could well come from manipulating these mysterious semantic mechanicals. They offer the opportunity to make writing dangerous again. With the proper keywords, information is taken up into automatic readers belonging to some very interesting entities, to the point where there can be real world consequences. As a way of experimenting with this form I created a series of posts with keywords that I imagined might appeal to some of the more peculiar gleaners out trolling for information online. I posted lists of oil rigs, information about espionage, created a consulting company specializing in complex shipping orders in the Arabian Ocean, wrote about electronic warfare, and laced my work with other ‘edible’ keywords. I received visits from hedge funds, multinational banking concerns, the department of defense, oil companies, environmental organizations, the Pakistani government, the Kuwaiti government, the Iranian government, the Russian government, an unacknowledged US military facility, and a few mysterious hits from ‘Cabin John, Maryland’ (a park across the river from CIA headquarters).
I don’t think my posts ever stirred more than a few pixels. All I did was conjure another layer of anxiety about the online world, but for a writer paranoia is far better stuff than anxiety over obsolescence.
April 25, 2011
Science sheds light on population history and living standards
by Omar Ali
In a sense, all modern historiography includes the attempt to find objective facts rather than relying on folklore and opinion. To varying extents, a scientific mindset is part of the intellectual toolkit of all modern people, and while no person can be entirely rational and no judgment is as perfectly evidence-based as the idealized models would imply, there is a trend towards greater objectivity and a willingness (at least in principle) to change one’s mind if new facts come to light. There is an assumption among liberals (I self-identify as liberal and spend most of my time with others who do the same) that modern liberals are more “science-minded” than conservatives (the so-called “fact-based community”). Whether this is really true has been challenged, but I will assume that liberals DO prefer a scientific approach to history and will touch on two examples where science brings objective information to bear upon history. One is genetics, which has transformed our knowledge of the origins and relationships of different human populations. The other is height, and what average height can tell us about different populations.
First, to genetics. A few days ago, blogger Razib Khan wrote a blog post about the population genetics of India and what those genetics can tell us about the origins and composition of the people of India. If you have not read that post, you should definitely do so; it is a superb, user-friendly (and not overly detailed) example of how recent advances in genetics are radically transforming our view of human populations and their recent and distant history. In some cases, the facts being uncovered are not entirely new or surprising, but in all cases, they provide a level of scientific certainty to debates that previously lacked such certitude. Read another one of his posts (and other related articles) for examples of more detailed and finer-scale analysis of the genetic data. These posts focus on India, but similar information (and in some cases, much more detailed information) is available about other populations and all of it is worth reading.
I am not going to spend more time on genetics, since I think Razib and his friends cover this area better than I ever could and I will be happy if you go to those links and start exploring on your own. But genetics is not the only way in which scientific knowledge can impact our view of history.
The other “objective datum” I want to touch on in some detail is height. Human height is strongly influenced by heredity (tall parents tend to have tall children) and the heritability is estimated to be around 80%. That does not mean 80% of your height is inherited and 20% is “environmental”. It means that 80% of the variation we see around us is explained by heredity and only 20% by environmental factors. In other words, if I am taller than you, that is mostly because I have “taller genes” and relatively little of the difference comes from the way I was raised and what I was fed and so on. It is immediately obvious that if the environment is very hostile and variable (e.g. if many people are facing serious starvation) then a lot of environmental variation will enter the picture, but if the environment is more uniformly favorable, then every person will reach his or her genetic potential and most of the difference we see will be hereditary. So this heritability is not fixed; it changes according to the prevailing environmental conditions.
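As a minimal illustration of what an 80% heritability means as a statement about variance rather than about any individual, here is a toy simulation; the mean and spread used below are made up for the sketch, and only the 80% figure comes from the text:

```python
import numpy as np

# Toy illustration: heritability is the share of the population's height
# *variance* explained by genetic differences, not a property of one person.
rng = np.random.default_rng(0)
n = 100_000
total_sd_cm = 7.0    # assumed overall spread of adult height (illustrative)
h2 = 0.80            # heritability figure quoted in the text

genetic = rng.normal(0, np.sqrt(h2) * total_sd_cm, n)
environment = rng.normal(0, np.sqrt(1 - h2) * total_sd_cm, n)
height = 175 + genetic + environment   # 175 cm mean is arbitrary

print(np.var(genetic) / np.var(height))   # ~0.80 by construction
```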
Now, if we look at populations rather than individuals, they too may differ because of inherited differences (in this case, “racial” differences in the genetic background) or because of environmental factors (e.g., one population being relatively better fed). If we look at various populations around the world, we find they are not all the same height. For example, the Dutch are taller than the Japanese. Does this mean that the Dutch are genetically taller, or does it mean that something in their environment is different and makes them taller (or makes the Japanese shorter)? A few decades ago, our answer would have been unequivocal: some races are taller than others; Europeans are taller than East Asians or Indians and so on. But as we get better and better data about changes over time, this straightforward judgment becomes significantly murkier. Somewhat contrary to expectations, it seems that much (though probably not all) of the difference in mean height between different populations is environmental, not genetic. This does not mean that some races are not outliers. There probably are some taller races and some shorter races. But the secular trends of the last 200 years provide very strong evidence that differences that seem obvious and immense at first look can disappear or change very significantly as the environment changes. There may yet be racial differences, but they may be much smaller than we imagine, and we cannot be too sure of them until we have eliminated the environmental factors for several generations, not just one generation.
Height is a relatively easy measure, and fairly good estimates of the height of even ancient populations can be made if a sufficient number of ancient adult bones are available for study. But such a trove of bones is not always found, so most of the reliable data about the average height of populations comes from more recent times. In Europe, the modernization of armies occurred earlier than in Asia and Africa (a fact that has a more than tangential relationship to the wave of European colonization that occurred around the same time), and combined with greater bureaucratic and social sophistication, this permitted good records of the heights of conscripts to become available from the 18th century onwards. As a result, we have fairly good estimates of the average heights of European males from around 1750 onwards. Data for females are harder to find, but by the 19th century those too start to become available. As with male height data, data for females also lag in Asian and African countries. I will focus on male height today, but trends are similar (though not always equivalent) in females.
What these height records reveal is that the average height of populations is not stable. There are clear changes in average height over time, and such trends are labeled “secular trends”. “Secular changes” is actually a better term, since “secular trends” implies a one-way change, but in practice the term “secular trend” is frequently used where “secular change” would be more accurate. If we look at the secular trend in heights over time, we find that all European populations (without exception) have seen an increase in height in the last 200-300 years. And this increase is not insignificant. In 1800, the average height of English males was 167 cm (5 feet 5.7 inches). At the same time, the average height of German, French and Dutch males was 164 cm (5 feet 4.5 inches). And the height of the average Norwegian male around 1750 was only 160 cm (only 5 feet 3 inches). There was also a rural-urban divide in England, with the rural population being taller. At the same time, children in urban slums in relatively tall England were a full 20 cm shorter than their upper-class counterparts. In other words, the upper classes very literally looked down upon the lower classes. By 1900, British average heights had increased to 169 cm, but by that point the Dutch had overtaken them and their average height was 170 cm. Meanwhile, the Swedes were the tallest of them all, with an average height of 172.5 cm. By this time, the urban population in England had also caught up with their rural counterparts. By 1950, the Dutch were already 178 cm tall (on average) and today they are the tallest population in the world at 181 (or 183) cm, with the Swedes and the Norwegians running close behind. In the same period, the trend in the US has also been upwards, but relative positions have switched. In 1860, the average American White was 174 cm tall (noticeably taller than their European cousins), but still shorter than the Sioux Indians. By the 20th century, Americans had left the Sioux behind, but failed to keep up with their European brethren and are now over an inch shorter than the Dutch and the Scandinavians.
But all these are racially similar societies; what about other races? Within Europe, Southern Europeans were shorter than Northern Europeans a hundred years ago, but have had a bigger height gain than their northern cousins and have significantly closed the gap (and many are still growing while Northern Europeans may be plateauing, so the catch-up phase may not be over yet). Asian populations are generally shorter than Europeans, but have seen bigger gains in height in the last 50 years than the Europeans; in short, they too are narrowing the gap. The average Chinese is about 168 cm, the average Japanese 171 cm and the average Indian only 165 cm. These averages also hide local variation. For example, the Sikhs (at 170.35 cm) are taller than Muslims (165 cm) and Hindus (164.5 cm) in India, while wealthier people (of all religions) are taller than poorer ones and (somewhat surprisingly) urban dwellers are slightly taller than their rural brethren. When these relatively short populations migrate to more developed nations, they tend to become taller and continue to become taller over the next 2-3 generations.
So what does all this mean? Some conclusions are obvious: Better standards of living translate into increased mean height, almost all across the world (but apparently, not in Tahiti, where there was no change in mean height from 1902 to 1970, perhaps because the Polynesians already enjoyed near optimum nutrition?). These trends have barely begun in some places (India lags behind China, for example) but can be seen in almost every population. And they do not seem to plateau within one generation even if the environment changes abruptly (as in migration to the United States). These trends provide an objective measure of public health and nutrition (thus, contrary to what Fox News may claim, Americans do not enjoy the same average living standard as Northern Europeans). Like most objective data, they are open to interpretation, and they do not always satisfy the biases of any particular ideology. Thus, while socialists may be delighted to see the “American paradox” (that the US population has fallen behind the Northern Europeans even while becoming heavier in terms of weight and enjoying a similar or even higher per capita income) since it indicates that some degree of state socialism is good for public health, they may not be equally comfortable with the fact that in general it is capitalism that seems to have led to the greatest improvement in living standards in human history. North Korea, for example, has managed to fall 11 cm behind South Korea in average height even as the South Koreans have been unabashed capitalist-roaders. It is also apparent that lay intuition about living standards and nutrition does not necessarily match scientific facts. This is because of several factors:
- PRENATAL (before birth) growth is the most rapid growth phase in the human life cycle (in absolute terms: we grow from 0 to 50 cm in 9 months, and at no other stage in life do we ever grow as much in 9 months). Gains and losses in this period are compensated to some degree in later life, but not entirely. Prenatal influences matter a lot and are not necessarily made up for by later improved environmental factors. And it is not just the mother’s nutrition; mom’s physical size is also a limitation. Small uterus = small baby. And so on. Also keep in mind that the egg that grows into a baby started its life when MOM was still a fetus in her mother’s belly. Human ova (female germ cells) start their life when the female is still in the womb, and the same cell matures and mates with a sperm to start a new life. Grandma can have a direct influence on the egg cell that grows into her grandchild.
- Postnatally, the most rapid growth is in the first two years. Human children reach about HALF of their adult height by age 2 (a little more than half for girls, a little less than half for boys). A lot depends on what happens in these first two years. Moving to America when you are 3 may already be too late to make a big difference in height.
- Nutritional influences are much more complex than just obvious starvation. Protein, for example, is a huge factor. Populations that drink a lot of milk are taller than those that do not. Populations that increase their milk intake increase their average height. Populations that eat more meat tend to be taller. Finally, the pre-agricultural hominin diet, while it may have been healthier in terms of the risk of some modern maladies like heart disease, was not ideal for growth. The introduction of cereals around 10,000 BC definitely increased our caloric supply, and while Malthusian conditions limited the benefit to individuals, it is worth keeping in mind that it is not easy to get 2,400 calories from scrounging for roots and berries in the forest. The introduction of dairy between 8,000 and 5,000 BC dramatically improved our access to reliable, high-quality protein and, along with the availability of the meat of dairy animals, has been a huge boon to humans. Of course, the meat does not have to come from dairy farms; it is likely the Sioux Indians were taller than better-off Whites in the late 19th century because they had more meat per capita, mostly from buffalo.
- Rice-eating populations tend to be smaller. The prevailing opinion is that this is due to a relative lack of high-quality protein. But scientifically, we have not yet excluded other possibilities; for example, that there may be factors in rice that directly stunt growth.
- Iron deficiency, iodine deficiency, vitamin deficiencies, all add up, even in people who look well fed. Upper class Indians may have abundant food, but may still be deficient in protein and iron due to religious restrictions or cultural habits.
- Disease plays a big role, especially childhood infections and parasites. Modern people tend to forget how disease-ridden childhood was (and continues to be) in pre-modern societies.
Finally, none of this means that some populations are not inherently shorter than others. Some may indeed be shorter. But the point is that we do not yet know for sure for most large human populations. The Chinese, for example, are still shorter than Europeans, but their secular trend (increase in mean height over time) has been more impressive than the Europeans’ in the last 20 years and the gap is still narrowing. They may yet plateau below the European level, but we don’t know where that plateau is. It may be (and probably is) that East Asians and perhaps South Indians (who have a greater ASI component; see the genetics links above) really are genetically shorter than Europeans, but the same is not likely to be true of “ancestral north Indians”, and the difference between Indian and European heights may be much smaller than the current 13 cm gap (5 inches) if Indians get more calories, more protein, more iron and less disease for a couple of generations. We do already know that Greeks and Turks are not much shorter than North Europeans, and since they have not plateaued yet, we cannot say for sure that they won’t catch up completely.
Keep in mind that there are small populations that seem truly genetically short (so-called pygmies) but even in those populations, the genetic basis of their short stature is only partly known or is still unknown and we don’t know what their height will stabilize at once nutrition and other environmental factors are changed. There are also some populations at the extremes of climate who may have evolved somewhat different body proportions (taller and longer-limbed Masai, shorter-limbed and stockier Eskimos) but in the vast Eurasian landmass, we may see differences in height narrow considerably in the generations to come. And of course, the new global elite is increasingly multi-racial and their descendants are no longer easy to classify into past racial classifications. Still, the mixing of genes among different large populations is a very slow process, so racial differences are not going to be eliminated in the near future. Thus, whatever portion of the population difference in height is truly genetic, it is not likely to be eliminated by biological intermixing within the next few generations. Environment, on the other hand, can change more rapidly and environmental differences in height will likely narrow; and while they do so, they will be a useful measure of living standards.
April 18, 2011
Of Quislings and Science: Reflecting on Mark Vernon, The Templeton Prize and Richard Dawkins
by Tauriq Moosa
Recently, Sir Martin Rees was awarded the most lucrative science prize in the world, the Templeton Prize. Notice I said ‘lucrative’, not most respected or prestigious, though some indeed do think it is. The prize, according to its official website, “honors a living person who has made an exceptional contribution to affirming life’s spiritual dimension, whether through insight, discovery, or practical works.” It is given to those “who have devoted their talents to expanding our vision of human purpose and ultimate reality” – a sentence worthy of a tacky Hallmark card.
Sir Martin is now in the company of £1,000,000 sterling, and of Mother Teresa and Billy Graham. Indeed, I wonder if that amount is enough to sway anyone into being mentioned in the same breath as these fanatics. The point being that there is little about the prize that is, by definition, about science. The Templeton Foundation and its Prize are about promoting notions of the Divine, in whatever loose language you can fathom, using something vaguely non-Divine in approach. If you can anchor your pursuits in something that affects the world – dealing with sick people (not aiding them), like Mother Teresa, or probing the mysteries of the universe with an appreciation for its beauty or possible higher purpose – then you qualify. They’ve melted the solid idea of the theistic god down into liquid form, so it slips through any pretension, even when the person awarded the prize is not religious. Like Sir Martin Rees.
If Sir Martin donates it all to Oxfam, I would have little to quarrel with, I suppose, except that I think any scientist who doesn’t think there’s a conflict between faith and reason, or science and religion, is wrong. But that’s another discussion. What interests me about this whole episode was not the prize itself but the views that arose concerning the atheist culture wars. I’m interested particularly in ex-Anglican-priest-turned-“agnostic” Mark Vernon’s ever-banal criticisms of Richard Dawkins, as seen here (an ad hominem attack), here (how Dawkins is doing nothing new even though Vernon keeps writing about him), here (when Dawkins praises fellow writer Christopher Hitchens, Dawkins is promoting hatred), here (Dawkins… groupthink… bus… bad), here (I don’t even know).
I rather enjoyed Dr Vernon’s books 42 and Plato’s Podcasts, so it is disappointing to see this usually clear, clever writer putting on the same performance each time Dawkins is mentioned in an online discussion or in the media. This is especially so when Vernon reflects on Sir Martin’s recent prize and… Richard Dawkins’ stridency. Yes. You obviously made that connection as quickly as I did. Vernon, expert bar none on how Dawkins should conduct himself publicly, has to write something… and it might as well be as Dawkins’ media nanny.
What is remarkable is Vernon’s ability to pull bones from the Rees story to create some fossil of an argument about Dawkins’ stridency. He does this by reflecting on Dawkins having called Rees, then head of the Royal Society, a “compliant Quisling” when it came to hosting the Templeton Foundation in the UK. As if uncovering a museum piece, Vernon unveils his newfound argument in the short space allotted him in the Guardian. But at the end, it’s as though we were expecting to be shown a new species (of argument) but were given merely two broken teeth from a creature we’ve all dealt with before.
What’s more disappointing is the teeth barely draw any blood.
Firstly, why is it necessary to raise Dawkins’ past comment, now nearly a year old? Dawkins’ comment was about Rees and Templeton, so Vernon says it appears as though “Rees has seemingly hit back”. This, however, makes no sense, since Sir Martin did not choose to give himself the prize. It was, um, awarded to him. Maybe we can say he “hit back” by accepting it. But not everyone’s life revolves around Richard Dawkins enough to accept a million-pound prize just to spite the brilliant biologist.
Secondly, Vernon claims that Dawkins was wrong to call Rees a “Quisling”. Such rhetoric (finger wave) will not do! “Quisling”, says Vernon, was “hurled” against fascist collaborators during the Second World War; what was Rees collaborating with by agreeing for the Royal Society to host the Templeton Foundation? Says Vernon: “The Royal Society lent its prestige to the Templeton Foundation by hosting events sponsored by the fund, which supports a variety of projects investigating the science of wellbeing and faith.”
Hm. The science of what? Wellbeing… sure. Kind of. I would just put that down to either health or morality. But faith? Should the Royal Society also sponsor the Aries Rising Druids Society to investigate “the science” of astrology? Or perhaps Darkmoon Bloodstar’s “science” of magic? The point is: it’s not science, so why involve the Royal Society? Remember, the award is not restricted to scientists – though it seems, politically, that science is a powerful avenue for the Foundation to push in order to gain some credence. After all, science is the major field fast destroying any pretensions toward knowing what you can’t possibly know.
Vernon I think is using “science” incorrectly when saying “science of… faith”. (I also doubt he means investigating religious belief from a psychological or neuroscientific perspective. If he did, he could’ve said it as I have. If this is what he means, I apologise.) For someone who spends so much time talking about Dawkins, even wanting to subject the poor biologist to unfalsifiable assertions to do with human behaviour based on ancient mythology (no, not astrology – I’m talking about Freudian psychotherapy), Mark Vernon has not read enough Dawkins. Consider this irritating paragraph:
Dawkins and Rees differ markedly on the tone with which the debate between science and religion should be conducted. Dawkins devotes his talents and resources to challenging, questioning and mocking faith. Rees, on the other hand, though an atheist, values the legacy sustained by the church and other faith traditions. He confesses a liking for choral evensong in the chapel of Trinity College. It seems a modest indulgence. The ethereal voices of rehearsing choristers can literally be heard from his front door. But for Dawkins this makes the man a "fervent believer in belief". And that is a foul betrayal of science.
Notice it has nothing to do with whether god exists or not, whether there is a purpose to our lives, etc. It’s got to do with Dawkins and Rees’ public image and personal habits. We’ll forget that Dawkins regularly speaks on aesthetic appreciation with some other unfeeling, cold, nihilist atheistic scientists (well, according to Vernon’s judgement).
Dawkins spends much time in The God Delusion discussing his appreciation for beauty in the world, even in choral music. Indeed, there are several pages where he expresses his appreciation of the Bible as an important work of literature. Anyone can appreciate the beauty of Gothic architecture and the intricacy and brilliance of cathedrals. This does not make anyone “a believer in belief”. Again, as with his use of "science", Vernon does not understand the terms he is using.
The term is from Daniel Dennett’s Breaking the Spell. Writing in the Guardian (of all places!), Dennett explains what belief in belief is: “Sometimes the maintenance of a belief is deemed so important that impressive systems of propaganda are erected and vigorously defended by people who do not in fact share the belief that they think is so important for society to endorse.” Vernon does not himself believe in god but thinks it is important, like Rees, to appreciate that others believe in god. Call it belief squared. (I think “belief in belief” is a catchy but somewhat obscuring phrase. It doesn't quite capture what Dennett means - or, again, maybe I'm just slow.) Dennett continues:
Today one of the most insistent forces arrayed in opposition to us vocal atheists is the "I'm an atheist but" crowd, who publicly deplore our "hostility", our "rudeness" (which is actually just candour), while privately admitting that we're right. They don't themselves believe in God, but they certainly do believe in belief in God.
He appears to be telling us about Mark Vernon. But, with regard to appreciating that others believe in god, Sir Martin does fit this. So, he is a believer in belief but not because he appreciates choral music and evensong. Vernon has given us the right conclusion but using the wrong premises. Rees qualifies as a “believer in belief” because he:
also claims to be an "unbelieving Anglican" who goes to church "out of loyalty to the tribe". He has criticised Stephen Hawking for arguing that we don't need God to explain the origin of the universe, and supports "peaceful co-existence between religion and science because they concern different domains".
This appreciation or respect for religion because others are religious, however, is exactly what Dawkins was correctly criticising as policy for the Royal Society. Similarly – and again – we must ask for consistency. Would we support those who believed in alchemy as a source of cures? Would we be happy to say homeopathic parents should not be charged with manslaughter when their choices effectively kill their children? These beliefs, like religious faith, are unwarranted, unsound and have no evidence to support them – and they, too, have plenty of believers. We do not expect physicians to have an appreciative attitude toward homeopaths, when both are focused on the same area. Why then should a cosmologist have an appreciative attitude toward theologians, also focused on the same area?
No doubt many will claim they are separate spheres, but this is not true. If they were separate spheres, religious institutions wouldn’t be trying to get evolution out of schools; we wouldn’t see hysteria over pushing back the date of the birth of the universe. If religion and science were different spheres, I don’t think I would be fighting tooth and nail in my thesis to have policies on the ethics of killing reconsidered. If these are separate domains, it is not the scientists up in arms but the faithful. Basically, this discussion wouldn’t need to happen. What can cosmology possibly gain from engaging with Leviticus or Paul’s Letters? What can the Quran contribute to our learning about biology?
Vernon, however, will have none of this. Respect is needed. Indeed, Vernon proudly proclaims himself a beneficiary of Templeton funding. He calls himself an accommodationist. “I often write about the relationship between science and religion, and have been a Templeton-Cambridge Journalism Fellow, the beneficiary of a first-rate seminar programme organised by Cambridge academics, funded by the Templeton Foundation. But then I love the big questions.”
Right. The big questions. I have read and do read some wonderful Christian writers on “the big questions”, like Alasdair MacIntyre, a contemporary moral philosopher who follows Thomas Aquinas very closely. I am currently compiling a bibliography for my thesis that includes many such works, including analyses of Aquinas’ Summa and the positions of the Catholic Church regarding assisted death. My main reason is not respect for the God-Did-It formula (explain anything – science, morality and so on – by throwing god somewhere into your explanation); rather, my reason is an appreciation of the power the formula holds, especially with regard to swaying public policy and opinion on my research area, which is killing and death. I need to know it inside and out so that I can combat it effectively. This means I can engage sociably, amicably and vaguely intellectually (I still don’t understand most of philosophy, because I’m unfortunately not smart enough) without being smug, arrogant and so on. I can do this in my writing and I do it with my opponents.
Why mention “I love the big questions” while associating it with Templeton God-Did-It engagements? Why mention Dawkins' strident attitude and so on, unless Vernon means that his appreciation for these sorts of questions somehow makes him better able to engage? There is no evidence to support his attack, nor his view that Dawkins’ attitude is doing anything wrong. This is an annoying paragraph – much like the whole silly article – but it becomes clear why he ended it as he did: to compare himself to Rees. Not in the sense of being as brilliant, but in the sense that both can be appreciative of and toward Templeton policy.
Rees pursues [the big questions], too, through cosmology, a subject that clearly fascinates many for similar reasons. Is there life like ours on other planets? What is the nature of our connectedness with the stars? It is partly for his insights on such matters that he has won the prize. But if he is modest about what can be achieved for religious belief by science, he insists that scientists should not stray into theological territory that they don't understand.
It is Rees' insights, not his evidence or his actual scientific research, that matter: his insights based on his own (very beautiful) writings and communications. It is not his research but Rees' somewhat open-hearted approach to the Divine and to majesty within the Universe that won him the prize. Dawkins, remember, can’t appreciate beauty, can’t listen to any kind of music or love anything vaguely religious, because he is just a smug, bad person with no social-media skills. If this is Vernon’s opinion, which this piece and those millions of others seem to indicate, then of course Vernon would think that Dawkins is not pursuing the big questions like Rees is.
Again, Dawkins himself has put it beautifully that he considers biology, or rather the study of evolution, to be the most important study of all: it answers “Where did we come from? Why are we here?” It even answers what the “purpose” of life is. None of these answers appears to satisfy many people; some people want grand, mythological narratives in which they are at the centre, between battling gods and swooping Hans Zimmer soundtracks. Rather, Dawkins would advocate that we consider a warm summer breeze and the sounds of a night garden, the Milky Way and pictures from the Hubble Telescope.
I don’t understand why anyone would want their lives to be out of a story written by Bronze-Age goat-herders. Certainly, because of science-writers like Dawkins and Steve Jones, I am considering studying biology. (Again, I don’t think I have the brains required and all that, but I want to.) I agree with Dawkins’ assessment. So, if Dawkins is in the department directly involved in perhaps the most important questions, how does this make him any different from Sir Martin? Dawkins’ science was through a microscope; Sir Martin’s is through a telescope. Both are wonderful areas, with their own intricacies and theories that display human genius. Both, too, are related.
Vernon goes on to describe some previous prize-winners, like Paul Davies (whom I actually really enjoy as a science-writer). “Davies is not [theistic or religious], though he believes it is perfectly valid to pursue questions of meaning in the context of what is being discovered about the cosmos. After all, is it not remarkable that our universe has produced entities within it that ask such questions – namely ourselves?”
Did Dr Vernon, a philosopher, just ask or rhetorically make the Fine-Tuning Argument? I don’t think it needs any more debunking than is already in place. But, let’s remember: Vernon is not a believer. It is doubtful he would be persuaded that there is anything more just because things look remarkably tuned that way. I don’t think it’s remarkable that the universe has produced entities like us since it could not have done otherwise. Vernon is giving a free-willed intention to the Universe. We can say things like “If the amount of carbon was this or that degree off, we never would’ve existed” but why is that useful? It didn’t happen. All those percentages are in place.
Consider: imagine another universe, very similar to ours. Call it the Fire Universe. On one particular planet, in one particular solar system, in one particular galaxy, there are conscious life-forms made of fire. They cannot exist on every part of their planet – some parts are too cold; others, ironically, are too hot – but nonetheless there are certain parts of the planet that are Goldilocks good (i.e. just right). These Fire Beings would no doubt say, “If carbon was just one degree off, we never would’ve existed!” But that’s the point! They exist. If the carbon was, say, a degree this way or that, Ice Beings would be there instead, posing the same questions. This is not useful for discussion since, to repeat, it didn’t happen. (Notice the irony: why make life-forms so potentially on the brink of non-existence? Why would a deity make it so that life perpetually exists on a precipice? Remember: if all these amazing complicated numbers were just a tiny degree bigger or smaller, we would not have existed. But it also means that if they change sometime today or tomorrow, we will cease to exist. I’m uncertain why this is an argument for god’s existence. It seems to be the opposite.)
We can therefore point out that Vernon is placating Templeton policy with such a statement. It is not remarkable, Dr Vernon. It simply is. You are not posing a big question; you are posing a banal if not useless one. Ironically, these are exactly the sorts of questions Templeton, being a not-very-well-disguised faith-based institution, would be asking: boring, circular and unhelpful for science. Finally, Vernon lobs his culture-war grenade into the mix. Out of nowhere comes this sentence, found in the second-to-last paragraph.
That such a highly regarded figure [as Sir Martin] has received its premier prize will make it that little bit harder for Dawkins to sustain respect amongst his peers for his crusade against religion.
He provides no evidence of this. This is something that that silly thing called science can verify. For example: will Dawkins’ and other atheists’ book sales decrease? Will the rhetoric change into the rainbow-unicorn speak that explodes with glitter at the mention of the words “divine” and “remarkable” and “majestic”? Again: Vernon still does not understand that there is little linking science and Templeton save the scientists it has given money to. I’ve noted above that the sorts of questions Templeton tackles are not scientific; rather, it is this kind of hippy-ish, pantheistic wonderment at existence that qualifies one as a potential prize-winner. Templeton won’t give an award to a remarkable scientist who bashes god but, for example, cures cancer. It’s not about the science, it’s about the attitude. For this reason, it has nothing to do with science. Vernon does not appear to understand that. Therefore, it seems unlikely Dawkins will lose respect among his peers, because his peers are, um, scientists.
Whether or not they do hold views like Sir Martin’s is beside the point; such views are what will win them the Templeton Prize, and it’s not attitude that wins you a Nobel. Vernon’s piece is boring. Very boring. My column is longer than his article, obviously. But this all highlights something quite important that I think needs to be understood about science, attitude, snarkiness and so on. Vernon has made no argument and has contributed very little to the discussion. My aim is to try to get Vernon away from talking about Dawkins and back on to talking about Plato. (Frankly, I prefer Plato to Dawkins, but then I’m a follower of Socrates. Science is way above me.) That’s where Vernon shines, and the more he continues to write nonsense like this, the less I want to read him, no matter his subject matter.
Beware the Worm, and Other, More Obvious Attempts to Manipulate Public Opinion
by Meghan Rosen
On the morning of April 8th, the day the government was scheduled to shut down if a budget deal couldn’t be reached by midnight, Arizona’s Junior Senator, Jon Kyl, made a passionate plea on the Senate floor for bipartisanship. He urged congressional leaders to “bridge the differences between the two parties” and reach an agreement.
The House had already passed a bill that made dramatic cuts to government spending, and it was time for the Senate to follow suit. The problem? Senate Democrats refused to vote for a bill that (among other cuts) defunded Planned Parenthood, and President Obama threatened to veto.
Senator Kyl, however, believed the bill was a reasonable measure to keep the government running; in fact, he said, it was necessary. To him, it just didn’t make sense to shut down the government over a program that cost taxpayers 300 million dollars a year. He wanted to put things in perspective.
For the first few minutes of his speech, Kyl sounded like the second-highest ranking Republican in the Senate leadership should sound: bold, confident, and committed to solving tough fiscal problems. In these (fleeting) minutes, it was easy to see why he was unanimously elected by his party in 2008 to serve as the Republican Whip.
And then he clarified his position. It wasn’t that the amount of money going to Planned Parenthood was too insignificant, in the grand scheme of budgets and deficits, to warrant a government shutdown; rather, it was that 300 million was too much. Why hold up the budget debate for such a costly organization? Especially one that peddles abortions. After all, according to Kyl, “If you want an abortion, you go to Planned Parenthood. That’s well over 90 percent of what [they] do.”
For a senator committed to reducing the deficit, incendiary statements that couple government waste with pro-life indignation are a sign of strength, of unyielding conservative ideals. (And it doesn’t hurt that drumming up opposition to Planned Parenthood is great for generating media buzz.)
And he did generate buzz. Hours after Kyl’s speech, CNN debunked his claim in an interview with the president of Planned Parenthood (3 percent of their services are abortions; the other 97 percent are preventative care*). Later that afternoon, Kyl’s spokesman explained that the senator’s remark was “not intended to be a factual statement, but rather to illustrate that Planned Parenthood, an organization that receives millions of dollars in taxpayer funding, does subsidize abortions.”
Unfortunately for Kyl, only the first seven words stuck. Stephen Colbert (Twitter account: @stephenathome) swooped in for the kill with 140 characters, a preposterous pronouncement, and a now-infamous hashtag. His first tweet, “Jon Kyl is one of Gaddafi’s sexy female ninja guards” (followed by the cheeky #NotIntendedToBeAFactualStatement), played on Kyl’s flexible interpretation of fact and fiction.
And the buzz became a roar. In the days following Colbert’s tweet, thousands of people picked up on the hashtag and tweeted their own nonsensical facts, Kyl-style. Today, more than 12,000 have been recorded.
Kyl’s not-so-subtle attempt to influence the budget debate is funny because it’s so glaringly, egregiously false. Fortunately for late-night comedians (and their legions of twitter followers), what Kyl lacked in political finesse, he made up for in unbridled pizzazz. Unfortunately for the rest of us, most manipulations of public opinion are not so obvious. In fact, some may not even be intentional.
During the 2008 presidential debates, CNN broadcast real-time viewer opinion data in the form of a “worm”: a constantly fluctuating green or red line that tracked its way across the screen in response to live viewer input. The worm represented the average response of undecided Ohio voters, tracked separately for men and women; each participant controlled a small hand dial to indicate approval at any given point during the debate. Turn it to the right, and the worm courses up (as it usually did when any candidate mentioned being tough on terrorism); turn it to the left, and the worm heads back down.
It’s a simple system, and very popular; most viewers would rather watch debate coverage with a worm than without one. But is the information useful? And can viewers separate their opinions from those represented by the worm? According to a study published last month, the answer is no. In fact, as researchers from the United Kingdom discovered, including worms in televised election debates has an unexpectedly large effect on public perceptions of politicians.
In an experiment designed to measure the worm’s powers of manipulation, researchers overlaid live video feed of a 2010 UK election debate between Gordon Brown, Nick Clegg, and David Cameron with a biased worm (one that was pre-programmed to be pro-Brown or pro-Clegg), and evaluated viewers’ post-debate prime minister preference.
Not only were the researchers able to skew participants’ perception of who won the debate (the group watching the pro-Brown worm believed Brown won, and the pro-Clegg group thought Clegg won), but they could also manipulate the participants’ voting intentions. The worm didn’t just influence undecided voters either; participants who came into the study with a favorite candidate were also affected by its bias, even if they recognized and disagreed with it.
In CNN’s broadcast of the 2008 debate, only 30 people provided real-time responses for the televised worm. We’ll never know whether the worm affected the outcome of the election, but because millions of people watched the debate, the opinions of that tiny group had the potential to sway a large number of voters. Because of this enormous amplification, the study’s authors conclude that the seemingly innocuous worm has the power to distort democracy, even if no bias is intended.
It’s doubtful that CNN’s intention in providing the worm was anything but informational, but it’s not difficult to imagine how a less scrupulous organization might use the tool. In fact, it might just make voters more appreciative of a good old-fashioned opinion-manipulator. Like the Junior Senator from Arizona, who, you may not have heard, is also an accomplished nude hula dancer**.
*There’s some debate about exactly how to quantify services at Planned Parenthood. In terms of total number of services provided, the percentage of abortions is relatively small; in terms of total dollars spent, the percentage is undoubtedly higher.
Davis CJ, Bowers JS, Memon A (2011) Social influence in televised election debates: a potential distortion of democracy. PLoS ONE 6(3): e18154.
April 04, 2011
The Great Urination Event and other tales of the Nitrogen Cycle (with a note on why Earth Needs More Mulch)
by Liam Heneghan
Several years into my first large-scale field experiment, I noticed one of the technicians urinating on my experimental plot. It was a significantly worse event than when a cow inserted a hoof into one of my mesocosms in an adjacent part of the Co Kilkenny spruce plantation where I was working. The bovine mesocosm disaster was relatively inconsequential. The mesocosm was an isolated fragment of soil surrounded by PVC walls, open on top and with a collecting vessel below; it allowed me to examine the flow of nutrients through the earth. The hoof merely took one hoof-sized replicate of many out of play. The urination event was more significant; we might have to consider bottling his nitrogen-rich fluid for later analysis and factoring it into the work. The technician and his urine had become an experimental treatment, quite an anomalous state of affairs.
The field experiment was a long-term evaluation of the effects of chemical additions, including nitrogen, on soils in a Kilkenny spruce plantation. After a brief interrogation about the technician’s en plein air habits, we were confident that, though several patches of the forest had enjoyed the benefits of his impromptu fertilization treatments, the experimental plots had done so on this one occasion only. A back-of-the-envelope calculation confirmed that this small nitrogen addition was insignificant compared with the 150 kg of nitrogen per hectare that we were adding to these plots annually.
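To get a feel for the arithmetic (the numbers here are illustrative assumptions rather than figures from our field notebooks): a single urination might deliver about a third of a liter, and human urine carries on the order of 7 grams of nitrogen per liter, so the donation amounted to perhaps 2 grams of nitrogen. A plot of, say, 100 square meters receiving 150 kg of nitrogen per hectare per year gets roughly 1.5 kg, or 1,500 grams, annually. The impromptu treatment, in other words, was on the order of a tenth of one percent of the experimental addition.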
Although the minor urination event, it turned out, was rather non-calamitous, my fieldwork was related to an investigation of a larger nitrogen calamity: a global experiment that I will call here the Great Urination Event (GUE), which has significant effects on biological diversity, on soil and water quality, and on human health.
Our work back then in the Kilkenny forest was conceived as an effort to evaluate the effects of components of “acid rain” on soil critters and soil processes. Acid rain is a general term for precipitation that is unusually acidic (because of the carbon dioxide in the atmosphere, rain is “naturally” mildly acidic). Emissions of sulfur dioxide and nitrogen oxides chemically interact with water in the atmosphere to produce sulfuric and nitric acid, which then rain down upon us. Like many human impacts on the environment, our activities are grafted onto natural processes, thereby greatly exacerbating their impacts on a wide variety of parameters. The effects of acid rain include damage to buildings, adverse effects on ground water and aquatic ecosystems, contributions to forest dieback (so-called “Waldsterben”), and implications for soils.
So why study acid rain’s effect on soil arthropods? These animals, typically microscopic, play a significant role in the regulation of soil processes, such as those that determine the release of nutrients from dead organic matter in the soil (e.g. dead leaf litter) needed for plant growth. If these communities were affected by acid rain then this, in turn, would have implications for key soil processes and ultimately for the plant community. The big “innovation” in our work was that we were convinced that much of the impact reported from acid rain studies came not from the acidity of the experimental treatment, that is, from the extra hydrogen ions in the rain (high acid = high hydrogen ion (proton) content), but rather from the fertilizer effects of the sulfur and nitrogen in the precipitation (which, in Ireland at least, typically arrives as rain). This was the suggestion of my supervisor, Dr. Thomas Bolger, of University College Dublin. Thus our experiment examined the effects on soil animals of a range of nitrogen- and sulfur-containing compounds, some of which are highly acidic but others of which, importantly, are not. Furthermore, we asked whether changes to the animal community indirectly affected soil processes that ultimately influence all aspects of an ecosystem. And yes, these soil animals were affected in different ways by different fertilizer and acid treatments, which, in turn, affected other ecosystem properties.
Forgive me for writing a couple of longish paragraphs on this – the work described in the lines above encapsulates about six years of my work life. I used to claim that ours was the last funded study on acid rain in Europe. Not true, of course, in a strict sense. I note with interest, though, that references to “acid rain” in the titles of research papers have greatly diminished over the past twenty-five years. In 1985, 135 papers had the term in the title; in 1990, 71; in 1995, 41; in 2000, 27; in 2005, 24; in 2010, 27; and so far in 2011, just six. Incidentally, I got my PhD in 1994, just at the time when the term was losing its appeal. Had I been more attentive to such things back then I might have secured a full-time faculty position earlier.
Part of the reason for the recent diminishment in the use of the term “acid rain” is our greater regulation of the release of sulfur into the atmosphere from coal-fired power plants, along with the implementation of other sound environmental policies. Nevertheless, the relaxing of our attention to the issue does not mean we have “solved” the problem. The rain remains acidic, and, in particular, our radical alteration of Earth’s nitrogen cycle, an especial contributor to the phenomenon, remains striking.
These days I am not just interested in the effects of added nitrogen on soil critters; I am interested in how disruptions to the nitrogen cycle associated with the Great Urination Event can have implications for the conservation of biological diversity.
Nitrogen, a primer
A short primer on nitrogen and its ecological importance: just the basic facts. Nitrogen is a “limiting nutrient”: the more you add (up to a point, at least), the more things grow. It’s the reason we add fertilizers to our vegetable plots; it is also one of a number of reasons why humans eat. Like afternoon cocktails, nitrogen has certain uses, but one can overdo it without planning to. You start with a couple and before you know it you’ve offended the dean (another story). Nitrogen fertilizes, and repeated applications can be excellent for a few target plants (like our lettuces), but after a certain point it can result in lowered plant diversity. So nitrogen can be good for the garden; it may not be so good in the lands we set aside for the rest of nature. Nitrogen is the most ironic of the elements. This is why consideration of nitrogen is important for conservation – alas, not many conservationists think about the implications of disruptions to elemental cycles for conservation. We are hoping to remedy this.
So, nitrogen: the primer:
Nitrogen is a prevalent atmospheric gas; molecular dinitrogen makes up about 78 per cent of the atmosphere’s volume. It is extremely stable in the atmosphere, and yet crucial for living things (for instance, for its use in proteins). Because of its molecular stability in the atmosphere, nitrogen is energetically expensive to coax into a form readily available in the biosphere. Two ways to make this happen prevail in nature: nitrogen can be “fixed” (converted to forms which can then enter the biosphere or the soil) either physico-chemically, by means of lightning, ultraviolet irradiation, and combustion, or biologically, by microbes (collectively called diazotrophs). Diazotrophs fix nitrogen, introducing it into the biosphere and into the soil, where it becomes available for other organisms. They do so, not out of the goodness of their little microbial hearts, but to satisfy their own nitrogen needs.
These days nitrogen is industrially fixed by a process called the Haber-Bosch method at rates approaching those of microbes and lightning (some authorities suggest that, all mechanisms considered, we now fix more than is fixed by natural pathways). The fixed nitrogen is used in fertilizers which ultimately produce much of our food (every mouthful that we consume is flavored with lashings of industrial nitrogen). Just as it is energetically expensive for microbes, fixing nitrogen is also energetically costly for us. So much fossil fuel is used to complete the process that many have quipped that ultimately we eat petrol and natural gas. Spreading nitrogen on soils as fertilizer in excess of what plant productivity needs leads to nitrogen leakiness from agricultural soils, with consequences for the fouling of waterways (“eutrophication” is the term used for this). We have increased the spatial scale over which inorganic nitrogen gets distributed, leading to a significant loss of nitrogen in some areas, and “saturation” of nitrogen in the soils of others. Nitrogen in excess of habitual levels can have an impact on biological diversity in our conservation lands, which, in some circumstances, leads to accelerated unraveling of communities of biological conservation concern.
To complete the cycle, nitrogen is lost from soil and returns to the atmosphere by a number of routes. It can outgas from the soil under several circumstances; often the nitrogen is lost as a consequence of microbial activity: bacteria going about their metabolic business can preside over the release of nitrogen back into the atmosphere. Other routes include the burning of organic material, including fossil fuels. Most will be familiar these days with the implications of burning fossil fuels for the release of carbon dioxide, a primary gas implicated in human-caused climate change. When we burn ancient organic matter other gases are also volatilized, including nitrogen oxides. These nitrogen oxides are the ones that can react with water molecules in the atmosphere, eventually creating the acid rain that falls upon us, bathing us in a nitrogen piddle of our own creation – the Great Urination Event.
We are living in the constant drizzle of excess nitrogen. And even when it is not drizzling on us, anthropogenic nitrogen deposition settles on us in a dry form like stardust, though somewhat less poetically.
The GUE, invasive species, and soil ecological knowledge
More can and should, and will be said about nitrogen in all its facets: more about its gaseous inertness in its molecularly doubled, triple-bonded form, in the atmosphere; more on its disinclination, without the energetic coaxing of the lightning spark or microbial processes, to “reduce” and make its way into the soil; of its indifference to the co-rivalry of root and fungal hypha; of its immobilization in organic matter and subsequent mineralization from decay; of its subsequent mobilization into the brownish and aquatically-lined matrix of the soil; of its myriad routes, either the rapid ones, or after long impediment in macromolecular humic compounds, back towards the sky.
I am considering having the nitrogen cycle tattooed on my children – it is just that important.
Much to say, but I take just one strand; one story of nitrogen and the vexations it creates under our poor governance. One of the ways in which we have inadvertently caused mischief with nitrogen is through our facilitated spreading of invasive species. Invasive species are those species that are transported by humans, either accidentally or on purpose, beyond their native range into a new biogeographical region. Often the spread of species has minor ecological implications, but it can, in some circumstances, have significant consequences for the nitrogen cycle.
For instance, the introduction of fayatree (Myrica faya) to young volcanic soils in Hawaii was highly deleterious: microbes in the roots of this tree can fix nitrogen, and the introduction of this novel ecological function into the state of Hawaii for the first time resulted in an “invasional meltdown”, whereby the introduction of one species clears the way for the invasion of others and indigenous species go missing. Those species ushered in on the nitrogenously ample coattails of fayatree include non-native earthworms, which, in turn, affect the distribution of nitrogen, which can encourage further changes in the vegetation. In another example, Acacia saligna in the South African fynbos (a system with exceptional biodiversity) is having very significant impacts on the vegetation in the historically low-fertility soils of that region. In a study conducted on Cape Cod, Betsy Von Holle (University of Central Florida) and her colleagues found that plots invaded by Robinia pseudoacacia, black locust (a small tree more familiar to many readers, perhaps), appear to be associated with a proliferation of weedy, nonnative species.
Now, I choose this strand of the nitrogen story not to provoke a debate about the wisdom of removing non-native species (in many cases, for the record, I think it is wise to do so). Rather, I am hoping that it illustrates that the ecological dimension of our nitrogen problems is not just that what we send up into the sky dribbles back down on us in a more caustic form (that is, the more conspicuous part of the Great Urination Event); it is also that we have rearranged the nitrogen cycle in a way that affects things as seemingly subtle as the distribution of organisms on regional scales. Thus, the GUE can interfere with the conservation of our biological resources.
Over the past decade my research has switched from the deposition phase of the Great Urination Event to investigations of the conservation biology subtleties associated with changes in the nitrogen cycle. Our model plant is Rhamnus cathartica, European buckthorn. This small tree is a relative rarity in its native Eurasian range. In Ireland, my own native range, buckthorn is confined, according to my copy of Webb’s “An Irish Flora”, “to rocky places and lake shores; occasional in the West and Centre, very rare elsewhere.” In the Chicago Wilderness region it is the most common woody species. Unlike fayatree or Acacia saligna, or Robinia pseudoacacia, buckthorn is not a nitrogen fixer; it does not form associations with microorganisms that incorporate atmospheric nitrogen into the soil. And yet there is mounting evidence that where it is persistently present, buckthorn is associated with an accumulation of nitrogen in the upper centimeters of the soil. This ostensibly small change is hypothesized to have some grave consequences.
A clue to why nitrogen accumulates under thickets of buckthorn came from an observation that undergraduates Cynthia Brundage, Constance Clay, and I made several years ago: the leaves of this small tree are high in nitrogen compared to those of the native trees in the systems that it invades. The high nitrogen in the leaf litter translates into a rapid breakdown of leaves in invaded woodlands; in fact, buckthorn leaf litter appears to drag the leaves of other species along with it, to the point that, quite regularly, soils under buckthorn thickets are quite shamelessly devoid of even a modest coverlet of leaf material. The nude soil is susceptible to rapid erosion and, perhaps more seriously, since a huge diversity of life is found in those upper centimeters of the soil, there may be a massive loss of these species as a result of the invasion of this rare European tree. This latter phenomenon may amount to a local mini mass-extinction of soil organismal diversity. And if this is not complicated enough, buckthorn invasion is associated with an invasion of Eurasian earthworms. Much of the upper Midwest in the US does not have native earthworms; all the earthworms are relatively recent introductions. Put another way, if ever you slid on an earthworm on a moist spring afternoon on a Chicago sidewalk, the mucus you scraped off your shoe was probably that of a European lumbricid worm. Earthworms, in turn, change aspects of the nitrogen cycle, and so forth, in a spiral so complex that all these events (buckthorn invasion, litter loss, arthropod decline, earthworm advance, modified soils, nitrogen enrichment) corkscrew through the region to create a conservation problem that will not be easily fixed. Conjectures about buckthorn invasion in the Midwest are being worked out in detail by PhD students Basil Iannone and Lauren Umek.
Attempts to remediate the sorts of problems associated with invaders are typically disconnected from a consideration of the Great Urination Event. Although the field of restoration ecology and restorative management is becoming increasingly sophisticated and effective, human intervention on behalf of returning a system to a measure of ecological health often proceeds without direct regard for the way in which soils have been modified by a suite of invading species. The assumption might be that soils will passively follow the plant community, whereas it may be that a return to ecological good fortune requires us precisely to go the other way – start with the soil! Some colleagues and I have named the approaches to conservation problem solving that start from the ground up Soil Ecological Knowledge Systems (SEKS; the adjective is, of course, SEKSy).
As SEKSy as Mulch!
Of course, it does not get SEKSier than mulch! Mulch, typically defined as organic material used as a covering on soil, is ordinarily applied to retain moisture, to suppress weeds, and often to add nutrients to soil. It may seem somewhat counterintuitive to suggest that mulch might be usefully deployed to remediate the GUE. The argument goes as follows. In fertile soils, nitrogen levels are greater than they are in organic mulches, especially woody mulches. Wood is primarily cellulose: lots of carbon, relatively little nitrogen. When it is applied to a fertile soil, the microbes come running like kids to a piñata. In order to utilize the wood they need nitrogen for growth, and since most microbes cannot fix nitrogen from the atmosphere, they satisfy their nitrogen requirements by importing it from the soil surrounding the decaying wood mulch, thereby immobilizing it and leaving less available for plants (recall: nitrogen’s “indifference to the co-rivalry of root and fungal hypha”). Although this nitrogen is not taken out of ecosystem circulation indefinitely, it can reduce fertility enough to allow native plant communities to re-establish themselves. It is still early days in research on mulch (or other carbon additions) as a way of remediating the GUE. Sometimes it works, sometimes not. How much to add; when to add it; what should the mulch be composed of? It is not clear. And, of course, mulch is one of the more exotic ways of dealing with excess nitrogen. I mention it only because it is a method that connects the GUE with the conservation issues that currently interest me. We need to carefully manage both nitrogen inputs and the effects of excess inputs. This management issue will remain as significant a challenge for us as climate change in the coming decades.
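For those who like rough numbers, the mulch logic works out something like this (indicative figures, not measurements from our plots): woody mulch has a carbon-to-nitrogen ratio somewhere in the region of several hundred to one, whereas the decomposer microbes building their own biomass need carbon and nitrogen in a ratio closer to twenty-five or thirty to one. To work through that much carbon, the microbes must scavenge the missing nitrogen from the surrounding soil, and for as long as the wood lasts, that nitrogen is tied up in microbial bodies rather than available to plants.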
Putting this all together, it looks like this: human modification of the nitrogen cycle on a global scale is a less apparent component of global change than climate change, or massive shifts in land use, or the intercontinental swapping of species. On the one hand it is impressive: in a manner that only some plucky microbes have managed in the past, we have modified the cycling of a vital element on a planetary scale. But such an achievement comes at a cost. When the nitrogen is farted back into the atmosphere, it is converted there into nitrogenous fluids that drizzle back upon us – the conspicuous phase of the Great Urination Event. Internal, less conspicuous changes in planetary metabolism, often derived from other aspects of global change, can create unforeseen consequences. Interactions between aspects of global change (for instance, between modifications of the global nitrogen cycle and the spread of invasive species) exacerbate the patterns of both in ways that are exciting for scientific research but mischievous when it comes time for a cleanup. A suite of remediation techniques has been proposed.
Today’s GUE suggestion: bring an umbrella.
Follow me on Twitter @DublinSoil for 140 character updates on my columns.
[Photos in order: Gents (by Randall Honold); Pissoir (by Randall Honold); 2 N books and a flora (Heneghan); Oppiella nova (approx 0.5 mm length) (by Claire Gilmore and Heneghan); Mulch pile research team (left to right: Heneghan, Kim Frye, Lauren Umek, Will Warner, Chris Mulvaney).]
Heneghan, L; Miller, Susan P; Callaham, Mac A, Jr; Baer, Sara; Montgomery, James; Richardson, Sarah; Rhoades, Charles C; Pavao-Zuckerman, Mitchell (2008) Integrating a soil ecological perspective into restoration management. Restoration Ecology 16(4): 608-617.
Sprent, J (1978) The Ecology of the Nitrogen Cycle. Cambridge Studies in Ecology. Cambridge University Press, Cambridge.
Stevenson, F. J., and M. A. Cole (1999) Cycles of Soil: Carbon, Nitrogen, Phosphorus, Sulfur, Micronutrients. John Wiley and Sons, New York.
March 28, 2011
Read the Label Before You Buy
by Wayne Ferrier
I was driving home from the gym and stopped at the convenience store to grab a power drink, a crunchy snack, and dinner for the cat. I'm being hypothetical here; I don't really work out at the gym, and I rarely buy snacks at the convenience store, but for the sake of this story, indulge me please. I looked around at the myriad of choices, not feeling compelled to comparison shop—it's a convenience store, remember—so I grabbed what seemed the most appealing and headed to the cash register. What I had chosen was a bottle of POWERADE, a bag of COMBOS, and a can of FRISKIES Classic Pâté for the cat. Cats are so suave, aren’t they? We eat COMBOS and they have pâté. I had skipped dinner so I would have time to go to the gym. I want to be healthy, you know.
Back in the car I tore open the bag and downed a fistful of COMBOS and had a swig of POWERADE. Having gotten my initial fix, I took a moment to glance at the nutritional information on the food label. Ingredients are listed in order of predominance, so the first ingredients listed are the primary ingredients in the product; the first two or three are the ones you want to look at closely, while ingredients at the bottom of the list are present in smaller amounts.
By now most consumers should be aware of what to look for and what to look out for. Experts have been telling us for years to eat whole grains. But my bag of COMBOS listed Wheat Flour as the first ingredient. That's not whole grain. Well that's to be expected. Maybe this snack food wasn't the best choice to get my daily fiber. So what was the second ingredient? It said Palm Kernel, Palm Oil and/or Hydrogenated Palm Oil.
Hmm, it may or may not have Trans Fat, yet this is the second ingredient. Isn't Hydrogenated Oil supposed to be really bad for you? Doesn’t it supposedly contribute to coronary heart disease and other health problems? And why won't they just tell me if it's in there or not? The third and fourth ingredients are Maltodextrin and Food Starch-Modified. I don’t know what Food Starch-Modified is, but Maltodextrin is supposedly a natural product. It is believed to be more easily metabolized than other kinds of carbohydrates, making it popular with athletes and bodybuilders who want quick energy. It is used as a filler and thickening agent, making it a popular ingredient for dieters, because it makes you feel full and therefore you don't eat so much. It may also be good for diabetics, who may benefit from Maltodextrin being processed easily by the body, assisting in the regulation of metabolic functions. But that's where the positive info ends; the warning is that in small amounts Maltodextrin is perhaps harmless, maybe even healthy, but about long-term consumption we just don't know for sure. This information was not too bad. I was feeling better.
Moving down the list I saw a host of food dyes, including Yellow 5 Lake, Yellow 6 Lake, Red 40 Lake, and Blue 1 Lake. To me, consuming food dyes is like playing Russian roulette: we think some may be benign, we suspect others might be carcinogenic, and for many we simply don't know much at all about what they might do. I drove home. Curious now, I booted my computer and logged onto the COMBOS website. This is when I really got concerned, perhaps even a little frightened. Upon entering the site I was greeted with this message:
Find your inner self. Hint: It’s not at the dinner table. Congratulations on your first step towards the Combivore lifestyle, where hearty snacks are always the right choice. Remember being a Combivore isn’t about trendy eating or fad foods, it’s a way of life.
I’m not sure what that means. The way it comes across to me is that the company that makes COMBOS knows my inner self, and it ought not to be eating healthy, well-balanced meals with my family at the dinner table. My inner self is a brute, a creature whose main diet isn't meat, nor fruits and vegetables. I'm a Combivore, to whom snacks are all that matters. I'm not to pay attention to the latest fads; health fads? Okay, I will admit to you that what was left in the COMBOS bag went straight into the garbage. But what about the POWERADE? That has to be healthy, right, with all those electrolytes and all? So here we go. The first ingredient is water. I guess that makes sense. Here’s what was next: High Fructose Corn Syrup, citric acid, and salt. Further down the list are food dyes, namely Blue 1, which is really an intense blue, not like those pale-looking colors you see in GATORADE.
I did a quick search on High Fructose Corn Syrup (HFCS) and found that the American Medical Association (AMA) insists that it is no better and no worse than other commonly used sweeteners. Again I was feeling better. Then I went and checked what Andrew Weil says, because I respect his opinion, and he is definitely against it. I also checked out Dr. Oz to see what he had to say, and he agrees with Weil. The lowdown is that HFCS is a relatively recent invention, and consumption of HFCS in the United States increased by more than 1,000 percent between 1970 and 1990. HFCS may promote weight gain because it behaves in the body more like fat than like glucose. According to Weil, there is some evidence to suggest that fructose might disturb the normal function of the liver and, unlike glucose, doesn't seem to trigger the process by which our bodies tell us that we are full. Oz further clarifies this by saying that High Fructose Corn Syrup is not recognized by our brains as real food, so we never feel satiated and we keep eating more and more. The result is that our blood sugar level keeps rising, abnormal amounts of insulin are needed to metabolize it, and then we crash and are hungry again. Not recognized by our brains as food!
Oh great! I just ate half a bag of COMBOS with Maltodextrin which gave me the feeling of being full. Then I drank POWERADE, which leaves me feeling perpetually hungry? But what really worries me are those insidious food dyes. They don't draw much attention. I'm no expert, but they really concern me. We really don't know what they are or what they are doing for us or to us. Natural and artificial flavors, Yellow 5, Yellow 6, Red 40, Blue 1, etc. I just don't like the sound of them.
Here's a bit of evolutionary rumination. We evolved to prefer certain nutrients in certain forms. Young primates normally avoid bitter-tasting food because many toxic plants contain alkaloids, which have a bitter flavor, while sweetness in natural foods is usually an indication of ripe, health-giving fruits and vegetables. Over time adult primates, through trial and error, become savvy consumers, knowing which bitter plants are good to eat and which are not. Primitive peoples are often just as discerning about which alkaloids are good or useful and which are bad and dangerous. Even in modern society, drinking coffee, tea, and beer, and eating spicy foods and bland vegetables, are acquired tastes.
But many food manufacturers, it seems, are colluding to keep us perpetually naive, bombarding us with mega amounts of sweeteners and easily digestible carbohydrates. Why sit at the dinner table eating healthy food, which takes time to digest, when you can get what you want quick and cheap at the convenience store? Unfortunately these perpetually available sweets and carbs are also loaded with other man-made substances, which we know very little about.
Rats cannot vomit. That’s a weird fact, but true. When a rat eats something it has to digest it; it cannot throw it up. A rat encountering an unfamiliar flavor for the first time will nibble, then walk away. If it doesn’t get sick after a number of hours, the rat might return and finish its meal. The rat does this to test whether what it is eating is poisonous. Manufacturers of rat poison have to make their products appealing to rats, yet not so toxic that the rat won’t come back for seconds, nor so toxic that it gets them on the first nibble.
In humans, smell and taste might have evolved to give us the ability to distinguish good food from poison. If it is acidic, the food might be spoiled; if it is bitter, it could be potentially toxic. Carbohydrates and other simple sugars provide quick energy for primates on the go. Associating sweetness with energy may be behind our present addiction to processed food. Food that was once hard to find is now overly abundant, and the rule of nature is that too much of a good thing can be harmful, even dangerous. The very definition of pollution is too high a concentration of anything. Our supermarkets are cesspools of too much of what we crave. Abundant sources of easily digestible carbs are difficult to find in nature; salt is equally scarce. Food manufacturers have caught on to this and create processed food with the right combination of the goods we want: salt, sugar, fat, etc. The food doesn’t even have to taste good; if the right combination is there, people will buy it and consume it.
And that can of FRISKIES? Just for the shock value I would love to tell you that it beats human food hands down, but I can't. The ingredients in that can of FRISKIES Classic Pâté are Meat By-Products, artificial and natural flavors, and the omnipresent food dyes are there too. Nobody really knows what Meat By-Products are except the manufacturer. Conjuring up images of what exactly Meat By-Products are sounds too much like a horror flick to me, so I'll leave it at that.
We are a society that is caught in the middle of a battle between exploitative marketing and a raging health-kick movement. I am constantly being reminded of the dangers of cancer, cardiovascular disease, stroke, and diabetes. I see the word “cancer” mentioned dozens of times per day on television, in newspapers, magazines, on the Internet—every form of media. Even my Facebook friends are constantly posting warnings and reminders of these maladies and asking me to post them too. Eventually one of these killers is going to get me, but preferably later than sooner. The fear and threat of cancer, heart attack, stroke, and diabetes are fed to me so many times a week that I just can't get them out of my head. I really think we need a break from it, as it seems that's all we think about these days! We've really become quite a paranoid culture.
Yet a quick trip to the supermarket reveals that there are still a lot of companies out there that have resisted changing the quality of the ingredients in their products. Other companies are bent on fooling us, making us think that their products are healthier when they really are not. Read the labels, please, and then don't worry so much about the dangers. I already know the dangers. If a company is making crap, why don't we just stop buying it? And if we're not sure what an ingredient is, then let us take a lesson from the rats: wait until the verdict is in and the scientific investigations are conclusive, and meanwhile choose something else on the way to the cash register.
February 28, 2011
My Little Chickadee
Article and photos by Wayne Ferrier
She says she doesn't really go for millionaires; she rather prefers surfer dudes, SUV guys, et cetera, not Mr. Private Plane, even though she is known as the millionaire matchmaker. She has a hot new DVD out titled “How To Get Married In A Year,” but she is as yet unmarried herself. She has been called the Simon Cowell of dating, known for making quick, straightforward comments. I confess I had no idea who Patti Stanger was when I first saw her as a guest on the Nate Berkus show a couple of weeks ago. I don't really watch much TV, and I only had The Nate Show on as background noise when I heard Patti giving her dating advice to some of the people in the audience. Jen, a pretty blond, was invited to come up on stage, have a seat, and ask Patti for guidance.
Jen had been on a date and it was really, really awkward. Patti leaned forward, interested. Jen revealed that they had met for the first time in a restaurant and it was going fine until they started talking about their hobbies. Jen's date said that his hobby was—of all things—birdwatching. Patti's face turned sour, registering first dismay, then unabashed disgust. Nate saw Patti's intense reaction to Jen's revelation and burst into hysterical, almost embarrassed, laughter. While the audience roared, Nate's face turned a shade redder and stayed that way through the entire interaction. But this was the exact response Jen was looking for. She squealed, “Yes, exactly! Patti, that's why I need help, because I just can't look at him and say next! So I let him go on and on about this story about birdwatching.”
Patti's advice: “Every girl needs a hundred dollars in a different compartment of her purse called stash cash, you always have to have it, whether you're in a city or a suburb, it doesn't matter, to get out of Dodge. So you get up and you say to him, I think you're a really great guy, I think you're awesome, you're just not my type, I don't want to waste your time, I don't want to hold you up, I think I'm going to get going. But if I know somebody I'll send them your way. There's no wasting time, you're too hot and single to be wasting time on a birdwatcher!”
Jen replied, “Oh, that's good!” In Nate's defense, he didn't respond much; he wrapped it up and went on to the next question. I was a little taken aback. Are birdwatchers really that revolting? Maybe this explains why I haven't had a decent date in a while. I'm not really a birdwatcher per se. I don't own a pair of binoculars, and I don't travel to all the best birding sites around North America just to see birds. However, when I find myself in those sites I often approach a birder and ask them a couple of questions, as very often their knowledge of the local ecosystem may be rather extensive. I do admit to having a couple of feeders hanging near my house. And I try to learn as much about birds as I can. I listen to their songs and attempt to learn who is who and what they're saying. By Patti Stanger's standards, I don't have a lot going for me. I'm too old, even though I am around Patti Stanger's age. I probably won't become a millionaire anytime soon, unless I get lucky playing the lottery. I haven't been surfing in years. And I'm probably worse in habits than your common everyday variety birdwatcher. I'm into all kinds of nature, not just birds. This may explain my recent bombs on the dating scene. I do just about everything wrong. I once showed up on a date driving a Plymouth Voyager instead of an SUV. I know, I know! And once I elected to go to the Nature Preserve rather than to hike to the waterfall that the girl really wanted to see. Asked what I did over the weekend by another girl, I admitted that I had wasted a perfectly good Sunday studying aquatic invertebrates in a stream in the woods. She never called back. Now you know what kind of a guy I am—one of those who goes on and on about nature.
So here I go again. Relatively tiny, chickadees are unafraid of humans. Even wild-caught birds rapidly adjust to captivity. There are more than forty species of them, chickadees and titmice (genus Parus), which are found in most habitats in the Northern Hemisphere. And this is why I'm primarily interested in them. They coexist well with humans. They even thrive living among us. In this way I am different from those environmentalists who worry endlessly over every fragile species and neglect, or even hate, the more robust ones. Chickadees have all those qualities that make them robust survivors. They are small, adaptable, and can live close to or away from people. They tolerate a wide variety of habitats across a very large geographical range. Most northerly chickadee populations don't migrate, even in the absence of feeders. Yet they have been more successful than many types of neotropical birds, which are now facing numerous challenges in both their summer and winter sites, and a reduction in numbers.
Staying north all winter, chickadees need to keep warm, and on particularly cold nights most, if not all, of their fat reserves can be depleted by morning. So they must get up early and replenish. It may seem, to those of us who feed them, that all they care about are sunflower seeds. But this is not quite true. In the summer they are mostly insectivorous and include more vegetable matter only during the winter months. Their vegetarian diet might include bayberries, blackberries, blueberries, poison ivy berries, goldenrod, ragweed, sumac, wild cherries, the fruits of tulip trees, and the seeds of coniferous trees. Preferred animal foods are caterpillars, even pests like gypsy moths and tent caterpillars, which many other birds won't go for. They also like animal fat (that's why they readily eat the suet you put out), and in the forests they will feed on carrion as large as a deer carcass, pecking through the skin to get at the subcutaneous fat. They have even been seen feeding on dead skunks.
Natural-born profiteers, they prefer quality food over quantity, and will choose sites that offer the best kind of food, rather than any old food, first. Then they like a regular source. If you are feeding them, keep your feeders stocked with quality seed and suet or they may leave and go to a site that's more dependable. They usually feed at lower heights on cold and windy days and at greater heights on calmer days; so if you want more birds around, place your feeders at strategically different heights. Dominant birds prefer to feed at the safest sites, which means away from predators, like the neighbor's cat.
Many Parids store food. They often prepare the food before it is cached: insects are usually beheaded, and perhaps some other parts are removed as well. Sunflower seeds and suet are also stored. Chickadees can store hundreds, if not thousands, of items per day. They don't stock it all in just one place, however; each item is stored in a different location, yet these little birds seem to remember where it all is! Later they may retrieve the items and move them to yet another, safer location, farther away from the food source. They usually start by moving the best-quality food first, from the primary cache site to secondary or even tertiary cache sites. (This kind of memory is impossible for humans, as most of us can't remember anything more complicated than a telephone number. I have trouble even finding my car keys.) Storage sites can be just about anywhere: cracks and crevices, under pieces of bark, inside curled leaves and in needle clusters of conifers, wedged in the edges of broken tree branches, buried in the ground or beneath snow.
This system of multiple cache sites may help each bird keep its stash away from other birds who might come across it. Birds don't seem to be able to spy on other birds to find their stashes, though. It seems the experience of physically moving food items around is what fixes their locations in memory. Some of the most interesting work on avian memory being conducted today is on Parids.
During the non-breeding season Parids often travel in mixed-species flocks, which consist of insectivorous birds of different species that move together while foraging. These mixed-species flocks are distinctly different from simple feeding aggregations, which are groups of several species of birds at areas of locally high food availability. A mixed-species foraging flock typically has a nuclear species that seems to be central to its formation and movement. Species that trail them are called attendant species. Attendants usually join the foraging flock only when the flock enters their territory. In the North Temperate Zone, mixed species foraging flocks are often led by chickadees and tits, and are joined by kinglets, nuthatches, treecreepers, warblers, and woodpeckers.
There is some evidence of social learning among species—they seem to learn from each other, and to some extent they partition out duties, with individuals specializing. And even in mixed flocks each bird might have a contributory duty. For example, woodpeckers benefit from being in the flock, where they are protected by the always-alert Parids, who give warning when something is amiss, while the woodpeckers prove useful because they can dig for insects underneath the rotting wood of dead trees, exposing the goods for the rest of the flock.
During the non-breeding season winter flocks have a relatively stable membership from September to March. Flock sizes are usually larger in the northern ranges than in the south. Regular supplemental food, such as feeders, can increase flock size and several flocks may share a feeder. Though there may be some inter-flock territorial disputes, arguments are more common earlier in the season and become more relaxed as winter progresses. A flock's range is usually distinct and may be as large as forty acres. Flocks are based on some kind of hierarchical system and therefore some kind of individual recognition is going on here. Chickadees can clearly tell the rank of a bird some distance away. Gotta love those Parids. And thanks for listening.
So Patti, that date we had lined up, I don't think it's going to work out. You can save your stash cash. I think you're a really great girl, in fact you're awesome. It's just you're not my type.
Smith, Susan M. (1991) The Black-Capped Chickadee: Behavioral Ecology and Natural History. Cornell University Press.
January 31, 2011
The Secret Life of Cancer
by Jenny White
I’m a faithful reader of the New York Times Science Section, cover to cover, because I want to know about things, not be caught flatfooted. Somehow it seems necessary for survival to know about quarks and bosons, the social structure of ants, scientific explanations of the smile, and the sexual life of grapes. I had a fling with books explaining how to endure being stranded in snow (make an igloo) and identify edible weeds in the park. What does this say about me? I never kept any extra food in the house beyond what was fresh in the fridge until after 9/11 when I laid in some canned beets and tomato sauce and a gallon jug of water. The tomato sauce exploded and the water leaked, so clearly I am batting zero as a survivalist. Perhaps knowing things about the world lets me feel that nothing can surprise me, jump out of the dark corners beyond my peripheral vision. Illness is like that. Two months ago I saw spots and flashes in my right eye and was told I had a partially detached retina. Why? No reason. Out of the blue. Once I was allowed to read again after the repair, I read a lot about retinas. But what do we really learn about how illnesses and the body work from reading popular science? Recently, I had a long conversation with a prominent scientist at Harvard, the molecular biologist Michael R. Freeman, who explained to me what cancer was. It wasn’t anything I expected, even after years of reading science stories. It was as if he had opened a door into an alternate universe. Below is a transcript of part of our conversation.
Jenny White: Tell me what we should know about cancer?
Michael Freeman: Cancer is an uncontrolled proliferation of cells. A tumor actually is a swelling or a cyst, something that isn’t necessarily life-threatening, but a malignancy is something that has the potential to grow and spread in the body, and it’s the spreading in the body as well as the growth that is lethal. We’re still trying to understand fundamental processes that are part of cancer. A recently recognized process involved in cancer, for instance, is autophagy, which means “self-eating.” This is a normal way for cells to conserve energy and nutrients, and it’s a process that can be used by cancer cells to progress to malignant states. Tumor cells generally are in a very stressful environment, so there’s a Darwinian pressure to select for variants that can overcome various stresses. So if you’re a tumor cell and your descendants have the ability to take in nutrients from this process of autophagy, then you have a selective advantage over other cells that might be killed in the stressful environment.
JW: So basically the Pac-Man cells survive because they eat the cells surrounding them.
MF: They actually eat themselves.
JW: Are there any other cool concepts that are out there? Autophagy, self-eating Pac-Man cells. What else is going on?
MF: There’s another concept that was very new when I was a postdoctoral fellow, but is now very much understood to be a fundamental process in tumor biology, which is apoptosis, or programmed cell death. This is a program that cells initiate that causes them to die. It’s basically cell-suicide. There are signaling molecules that can initiate the suicide program that’s built into the cell. This is a normal process that takes place during development. The fingers on your hand were created in part through an apoptotic mechanism, where the webbing between the digits was removed by cells killing themselves. In development, in the formation of the body plan, there’s growth as well as loss of structure. It even happens during normal life as an adult. It’s like what a sculptor does, right? A sculptor creates form by removing things.
JW: How does that fit with cancer?
MF: There are tumor cells appearing in your body every day. Your immune system will recognize these cells as aberrations and they’ll be killed. It’s a complicated biochemical process, but basically the cells initiate a suicide program, though sometimes cells arise that are resistant to those apoptotic signals. And this turns out to be a very important reason that you have malignant progression -- you have cells that resist the signals that tell them to die.
JW: And why is that? They just don’t like to be told what to do?
MF: They resist these signals because they have various biochemical pathways inside them that are either activated or disabled. Oncogenes are genes that can cause tumors. But mechanistically, what an oncogene might be doing is to elicit, activate or allow certain biochemical pathways that result in a cell that can resist apoptotic signals. You can have a biochemical pathway where A protein signals to B protein, which signals to C protein, and the ability of A to signal to B is shut down. You have an inhibitor that’s inhibiting the A-to-B signal. Sometimes, to initiate an apoptotic signal, you need a cell-surface receptor that needs to be positioned in a certain way on the cell surface, and it can either not be there or it can be internalized. Genes can be shut down, genes can be activated. There are a lot of different ways to cause this.
JW: It’s amazing that we move around as fully functioning human beings at all if all of these minute things can go wrong all the time.
MF: After being a biologist for many years, I still find it incredible that any organism lives decades when all of the intricate biochemistry that happens has to continue to happen almost flawlessly. You get ill and your body can repair itself. It’s amazing.
JW: What about other cool things? We’ve got Pac-Man autophagy, we’ve got cell-suicide death wishes. What else is going on?
MF: The tumor genome is massively disrupted. Over the course of a tumor’s life, you have chromosome loss, you have gene duplication, you have gene loss, you have DNA rearrangements. The popular culture analogy I like is the Borg, from Star Trek. “You will be assimilated. Resistance is futile.” That’s not a perfect analogy but it shows how things can be reorganized to become virulent.
JW: Except that the Borg use creatures that then become part of them. They don’t kill the creatures.
MF: They don’t kill a creature initially, but you can think of a cancer – including the cells that are disseminated and the secondary tumors that are formed in parts of the body -- as an organism. So the cancer is an independent organism inside one’s body that, of course, is dependent on the host living; it doesn’t have a way to replicate beyond the host. It kills the host and then it dies. But in many ways it’s like an independent organism.
JW: But what is the point of this organism inhabiting you if it doesn’t help it to replicate itself? Cancer doesn’t spread from person to person, right? Isn’t it a basic biological premise that creatures evolve in order to seed their own kind?
MF: But it does inside an organism. It’s like a virus in the sense that it’ll replicate inside the organism. Viruses can obviously move between hosts, and cancer cells cannot. But within the universe of the host, it’s very much a Darwinian process. You have selection, you have progeny, you have replication, you have death, you have new variants arising all the time. The new variants are being selected. The difference is that the entire universe of the cancer is the patient, and when the patient dies, the universe ends.
JW: So why would you say it’s one organism? Aren’t you saying it’s an organism with offspring that it sends out to different parts of the body?
MF: Human beings have all these symbiotic bacteria living in their gut and elsewhere on their body, so if you look at yourself as an organism, there’s about ten times more bacterial cells that are part of your body than your actual human cells. So what does that mean? Does that mean you should be defined primarily as a bacterial colony?
JW: That’s gross! Boy, that certainly changes my vision of the human body. It’s almost like we’re a small universe in which other small organisms grow, just like we grow on the earth, destroying it as we go along, using up its carbon and its air and wood.
MF: The bacteria will be fine. No matter what we do to the earth, the bacteria will be fine. They’ve been around much longer than us; they’ve diversified tremendously.
JW: How old is cancer?
MF: I don’t know for sure, but I would say that cancer is ancient and arose early, in association with multicellularity. Jellyfish are multicellular. They’re about 600 million years old; it’s a very ancient lineage. And it would be an interesting question – I don’t know the answer – whether they get cancer. If they do, that would be evidence that cancer is at least 600 million years old.
JW: What’s the difference between bacteria and a cancer cell?
MF: The cancer cells that arise in your body are genomically very similar to you, so they’re human. The cell is deranged in some ways, but it can be unambiguously identified as human. A bacterium is a much simpler organism; it doesn’t have a nucleus; it doesn’t have chromosomes arranged the way ours are. So it’s a very different type of creature.
JW: But why do you call cancer a creature? It could be like a skin growth or a mole or something that just has grown out of control.
MF: When you look at cancer cells under a microscope, they’re a colony of creatures. They’re clearly independent from their source. If you’re going to call a single-cell organism like a protozoan a creature, then I think it’s perfectly reasonable to call a colony of cancer cells or even a single cancer cell a creature. I mean, it can crawl around, it can eat, it can respond to its environment. It’s respiring, it’s consuming food; it’s replicating. It’s very much alive.
JW: OK, now all of this is giving me the creeps. It’s much more comforting to think of cancer as something that’s not human, that’s just an invader that you might be able to kick out. So the treatments people are using to try to kill the cells individually are really not dealing with the problem.
MF: You can use the Borg analogy. The Borg takes on new abilities over time because it assimilates civilizations. But when you shoot at the Borg, you know, using some advanced photonic device, you kill the Borg. And then the second Borg comes at you and you kill that Borg. And the third Borg comes at you and you kill that one. But by the time the fourth Borg comes at you, the organism has already adapted.
JW: So you’re telling me that not only are there these creatures in their own universe inside the body, but they’re actually able to learn, take on new abilities.
MF: It’s very much like any other Darwinian evolution in that you produce variants and some of the variants are going to resist your attempts to kill them. I saw some data in 2010 from a colleague who had done whole genome scans of between ten and fifteen metastatic tumors taken from one person. You can think of these tumors as all part of an organism, using this analogy we’re talking about. But in reality, when you looked at the genomes of these tumors, they were radically different. They were far more different from one another than you and I are, for example. So tumor cells have the ability to alter their genome in all sorts of ways that don’t normally occur. The genome is normally very stable. There are some significant changes that happen with sexual reproduction, but for the most part your genome that’s being replicated throughout your entire life is pretty stable. My genome isn’t that different from yours. But the genomes of these individual tumor cells – at least in this particular case – were radically different.
JW: So the tumor cells were different from the cancer patient, but also different from each other. They’re individuals!
JW: Is there any good news?
MF: Well, we know a lot about tumor biology now. I started graduate school in the early 1980s and there’s no comparison between now and then. We have a vast reservoir of knowledge now. We have a much greater ability to identify promising drugs than we did ten years ago. The goal I think of most cancer researchers is to get to the point where cancer becomes a chronic disease and it’s managed with medical therapy. I think that’s what people are shooting for.
JW: So when will all this new knowledge turn into new treatments?
MF: That part of it is very disappointing. The rate-limiting step is the means by which drugs that look promising in the laboratory can be tested in humans. This is very expensive. To move one drug through phase 1, 2 and 3 clinical trials can cost upwards of a billion dollars. The only way that can be paid for is through companies, and companies can decide to proceed with that investment or not. There are many situations where you have promising drugs and they’re not ever moved into clinical situations and tested in real patients because the cost is too high. Pharmaceutical companies have to make strategic decisions based on the bottom line, and a lot of that is divorced from science.
JW: What’s the most exciting thing to come out of your lab in the last year?
MF: Two things. One is that we discovered a gene that controls a process whereby a cell acquires the ability to move rapidly through tissue spaces by deforming its membrane. These are referred to as amoeboid features. The gene we found that regulates this process is one of the first ever identified. We think the amoeboid properties are highly relevant to the way in which cells metastasize. This is potentially a signaling network that controls metastasis.
JW: So the amoeboid form allows cancer cells to basically hail a cab and get around the body more quickly than the usual way of cancer cells creeping through tissue, and you’re taking away the car keys. That’s great. What’s the other thing?
MF: We discovered a new type of tumor-derived particle that these amoeboid cells can spit out. These particles, which are not cells, have biological activity, so they can communicate with other cells. They can circulate through the blood and potentially modify and signal cells very distantly from the primary tumor that produced the particle. And they’re large. The significance of the largeness is that you can find them more easily in circulation, and we’ve also shown that you can actually see them in tissue specimens. Their presence predicts aggressive disease. This is a new type of particle that hasn’t been described until now. Since it seems to promote tumor spread, it might serve as an indicator of aggressiveness clinically, which might improve the ability to target the tumor with specific drugs.
JW: So this particle has the ability to kick-start other cells into turning cancerous.
MF: Exactly. Another thing my lab has worked on for a number of years is the relationship between cholesterol and aggressive cancer. Our findings indicate that high cholesterol is actually tumor-promoting in the case of prostate cancer. The implication is that if you take cholesterol-lowering drugs, you might be able to inhibit cancer in some people.
JW: Do you have some words for people reading this who might now be rather depressed?
MF: I think twenty years from now, as long as we don’t pull the plug on our magnificent research efforts here in the United States, what we know now will probably seem very primitive. So it’s best to be humble.
November 22, 2010
Floods and Plagues: New Lessons From the Old Testament
The late spring/early summer of 2010 was much wetter than normal in West Central Illinois. The sewer backed up into my basement while I was out of town. I returned home to an unmistakable smell and dismissed it as a "freak event" while I cleaned it up. A couple of weeks later, I was home during a particularly Biblical downpour. The sewer began to back up again and, despite my best efforts to staunch the flow with a plunger, sewage poured out of my basement toilet with a ferocity that was reminiscent of the elevator scene in Stanley Kubrick's "The Shining" except in sepia tone. When I called the city to remind them that I paid for sewage to be taken away from my house, not delivered to it, I was told that May and June of 2010 were unusually wet and that the city's old-school combined system could not handle it (newer systems have separate pipes for sewage and storm run-off). The voice on the phone told me that we had received 24" of rain in May and June. I checked the weather for 2010: in May we received 11.90" and in June 11.78". I checked the climate records: the long-term average for May was 4.27" and the previous record for the month, 11.29", was recorded in 1908. The long-term average for June was 4.26" and the previous monthly record of 13.97" had been set in 1902. In other words, in two consecutive months we had nearly equaled or exceeded all-time records, which were set over a century ago! This gave me something to think about as I squeegeed, shop-vacced, and Cloroxed my basement for the second time in as many weeks: How does a culture or civilization respond when all of its assumptions about the world (and the resulting necessary embodiment in infrastructure) no longer apply?
The instant flood and the prospect of illness presented by the excrement got me thinking about two classic tales in the Old Testament: the Noahic Flood from Genesis and the Ten Plagues of Egypt from Exodus. As a biologist, I get some grief for being a scientist, on the assumption that science and religion are incompatible. On the one hand, science is not known for supporting supernatural explanations of any kind. On the other hand, naturalistic accounts could explain some phenomena that appeared to be supernatural to people of the Old Testament.
I was brought up by a completely lapsed Southern Baptist, thoroughly agnostic father and Bahá'í mother (who was herself the product of a non-practicing Jewish father and non-practicing Catholic mother). Not surprisingly, I decided at a pretty young age that everything in the Abrahamic tradition could be read metaphorically rather than strictly literally, so I was amazed when I began to realize there was a cottage industry of scientists who tried to explain things in the Bible using modern methods and methodologies. If for no other reason than that I could tell people that science supported some of the things in the Bible (and that therefore they were not completely opposed to each other), I began to save some articles and make some notes.
In the late 1990s, a pair of geologists published a book that explained the Noahic flood as the flooding of land around the Black Sea, which occurred when the Mediterranean, rising from the melting of glacial ice sheets, spilled over the Bosporus; they offered some compelling evidence to support their ideas. At about the same time, a pair of epidemiologists (Marr and Malloy, 1996) arrived at a plausible epidemiological explanation of the ten plagues of Egypt. I would like to explore both of these hypotheses a bit and put together my own synthesis.
As the Pleistocene ended about 12,000 years ago, the great ice sheets that covered much of the northern continents retreated and their run-off made the ocean rise hundreds of feet. That this happened globally probably explains why flood myths tend to be universal (Wilson, 2001). Some of these past floods were truly epic. Nearly 12,000 years ago, much of the Columbia River Gorge may have been carved out in about a week, when a glacial ice dam failed, allowing the 2,000-foot-deep Glacial Lake Missoula to empty at a flow rate of 9.46 cubic miles per hour, which is about 50 to 60 times the flow of the Amazon River (Gould 1980, Glacial Lake Missoula)! The scale of this event was so far beyond ordinary experience that it was initially dismissed as being incompatible with the Uniformitarianism and Gradualism that are the bedrock (pun intended!) assumptions of modern Geology.
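For readers who like to check such figures, here is a minimal Python sketch of the unit conversion. The average Amazon discharge of roughly 200,000 cubic meters per second is my own assumed round figure, used only for comparison, not a number from Gould:

    # Sanity check on the Missoula flood figure quoted above.
    # The Amazon discharge is an assumed round value (~200,000 m^3/s), for illustration only.
    CUBIC_MILE_IN_M3 = 4.168e9                           # cubic meters per cubic mile
    missoula_m3_per_s = 9.46 * CUBIC_MILE_IN_M3 / 3600   # 9.46 cubic miles per hour -> m^3/s
    amazon_m3_per_s = 2.0e5                              # assumed average Amazon discharge
    print(f"Missoula outflow: {missoula_m3_per_s:,.0f} m^3/s")
    print(f"Roughly {missoula_m3_per_s / amazon_m3_per_s:.0f} times the Amazon")
    # Prints about 11,000,000 m^3/s and a ratio near 55 -- consistent with the 50-60x in the text.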
The Noahic flood may be more familiar to many people. Two geologists (Ryan and Pitman, 1998) argued that it resulted from the rising waters of the Mediterranean Sea overflowing the Bosporus and filling the freshwater Black Sea with saltwater. By some estimates, a day's flow over the giant falls at the Bosporus would have equalled up to a year's flow over Niagara! As the water rose at about a foot a day, the flooding of the low-lying area around the Black Sea led to a diaspora that spread agriculture, along with tales of a great flood, all over Eurasia.
We tend to present science as a monolithic enterprise in which only one version of the scientific method is practiced. Double-blind experiments with treatments, controls, and replication are the "Gold Standard," and anything else is regarded as lesser or even suspect. Unfortunately, the real world is rarely so accommodating, and it is just not ethical to infect half of a population with something nasty while the other half gets a placebo. Epidemiologists have assembled their own set of tools for practicing science within these constraints.
Focusing on the sequence and timing of events, and the specificity of symptoms and their causes, Marr and Malloy (1996) present the following argument: The Egyptians at that time were a river and agricultural people. A freshwater "red tide" caused by the aptly-named dinoflagellate Pfiesteria piscimorte (Plague 1: Blood) killed fish (a major source of dietary protein) and forced frogs onto the land (Plague 2: Frogs), where they died, and thus were no longer around to control insect populations. Instead, their carcasses provided plenty of food for insect larvae that transformed into the adults of Plague 3: Lice, and Plague 4: Flies. Marr and Malloy believe these insects were Culicoides biting midges ("no-see-ums") and Stomoxys stable flies, both efficient vectors for orbivirus infections that resulted in the death of livestock (Plague 5), and bacterial infections that caused Boils (Plague 6), both of which further reduced dietary protein and left fewer animals and people to practice agriculture with. Hail (Plague 7) killed people, animals, and lodged grain, further reducing food stocks. Solitary locusts, responding to crowding and stress, morphed into migratory swarms and devoured much of what grain remained (Plague 8: Locusts). Sandstorms, likely khamsin (hot Saharan winds) or sobaa (severe, multiday-long storms), caused the Darkness of Plague 9, and covered the wet grain with warm dust and locust feces, which promoted mold growth and mycotoxin production that led to the Death of the Firstborn: Plague 10 (or, using a different translation, the death of first fruits or shoots). Whether this final plague involved killing children or new crops is not so important as the idea that this powerful civilization suddenly had no future. The assumptions behind their relationships to the water and land no longer held.
A common theme that ties together The Great Flood and The Ten Plagues is Global Climate Change. As the earth began warming near the end of the Pleistocene, ice sheets melted and the sea level rose, inundating coastlines around the world and some inland areas like those around the Black Sea. Another major prediction of global warming is that extreme weather will become more severe and more frequent. A look at global temperature reconstructions shows a slight bump between 2000 BCE and 1000 BCE. The Ten Plagues are thought by many scholars to have occurred between 1500 and 1200 BCE. Warming could have triggered the red tide that then precipitated the next five plagues. Warming could also have increased the frequency of severe local weather like hailstorms and sandstorms (Plagues 7 & 9), which initiated the plagues of Locusts and the Death of the Firstborn. Starvation and disease amplify each other's effects, with the result that 1 + 1 = 5.
The end result of the Ten Plagues was that the Israelite slaves were freed by the pharaoh and subsequently fled Egypt. Slavery is not something that we tend to associate with our lives today, but in 21st-century North America we each rely upon 100 to 200 "energy slaves" in the form of fossil fuels for our daily activities. Like it or not, whether we voted for Palin or Obama, we are all pharaohs or plantation owners in that we rely on energy that is not from our own bodies for heating, cooking, manufacturing, and transport. Certainly, depending on black gold is preferable to exploiting black bodies, but is it in our best interests to be so inextricably entwined with fossil fuels? Peak oil and global warming suggest no.
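Where does a figure like 100 to 200 energy slaves come from? Here is a rough sketch of the arithmetic; the per-capita energy use and the work output of a single laborer are my own assumed round numbers, chosen only to illustrate how such an estimate is made:

    # Rough "energy slave" estimate using assumed, illustrative figures.
    per_capita_power_w = 10_000   # assumed: ~10 kW of continuous primary energy use per North American
    human_output_w = 75           # assumed: useful work of one laborer, ~75 W averaged over a day
    energy_slaves = per_capita_power_w / human_output_w
    print(f"Approximate energy slaves per person: {energy_slaves:.0f}")
    # ~130 with these assumptions; different assumptions about a laborer's output
    # give estimates anywhere in the 100-200 range quoted above.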
Is oil our slave or are we its slave? That America has spent trillions of dollars over the last decades supporting a military that ensures safe passage of oil through the Persian Gulf suggests the latter. Just as the Egyptians depended on the flow of the Nile to water crops and slake the thirst of animals and human laborers, we depend on an ever increasing river of oil arriving at our shores from all over the planet to supply energy and chemical feedstocks for our civilization.
Of course, the real threat may be the carbon dioxide that is released by burning fossil fuels. Just as Abraham Lincoln and Stephen A. Douglas debated the future of human slavery 152 years ago, a few blocks from where I am writing this, we need to seriously address "energy slavery" and its consequences today. Unfortunately, I don't yet see either the political will or the insight among any of our leaders.
A new exhibit about human origins at the Smithsonian Institution in Washington, D.C. declares "Humans Evolved in Response to a Changing World," and seems by extension to imply that "we've done it before, we'll do it again," while never discussing the role fossil fuels play in our current situation, or the role mass mortality plays in making natural selection work. Responding to climate change may be one of the factors that drove human evolution, but our domination of the planet has arisen during a period of relative climatic stability that we are in the process of pushing ourselves out of. Even if the average climate stays the same, the extremes will become even more extreme.
Not surprisingly, David H. Koch, billionaire oil tycoon and climate change denier, underwrote the exhibit just as he underwrote the Mercatus Center at George Mason University, The Cato Institute, Americans for Prosperity, and Tea Partiers, among others (Mayer, 2010). It's his money and he can do what he wants with it, but it seems to me that one individual having so much influence undermines the concept of one man, one vote that underpins our democracy. One argument made by deniers is similar to that made by tobacco companies: we can't do a proper experiment, therefore we can never really be sure about the causal links between X and Y (substitute tobacco and lung cancer, or fossil fuel consumption and global climate change). Against that kind of money and argument, all I can do is point to the geological and historical records, and the epidemiology of Marr and Malloy, which suggest that we have already participated in some global warming experiments with severe results. The great flood chronicles the global rise in sea level that accompanied the end of the last Ice Age, from which refugees fled en masse. Exodus may in fact be describing the first epidemics and epizootics that would be expected when increasing population size mixes with a bit of climatic warming.
I am using science to attempt to confirm and explicate a literal reading of parts of The Old Testament, but one that implicates climate change as a causative agent for floods and plagues. Followers of the Abrahamic tradition could agree with the details of a literal reading but conclude that the causative agent was an angry God for the Great Flood, and one who later came down on the side of the enslaved Israelites by ensuring their emancipation and safe passage to freedom. Recognizing God's omnipotence affirms our smallness. With nearly seven billion people, some of whom have huge ecological footprints, that smallness is questionable today. Positing an all-powerful God may also have the effect of relieving us of any individual or collective responsibility for our actions. Many of the same people who believe in an all-powerful God also deny climate change, yet over the last century and a half, we have in fact achieved God-like power with our ability to change the earth's climate through our activities.
With great power comes great responsibility. Unfortunately, we have not fully owned up to this responsibility. At the time of Exodus, an estimated 2.5 million people lived along the Nile and its environs. Today a large fraction of the nearly 7 billion people on earth live on or near coastlines or rivers. The rest will also be increasingly susceptible to the floods and plagues that will be realized in a warming world. Recent events in Haiti and Pakistan may well be a preview of coming attractions. The effects on other species will likely be catastrophic as well. Minimizing the causes and effects of these changes will likely be the central challenge to humanity in the 21st century.
As for my basement, the city informed me that a check valve installed on the line between the house and the sewer main would prevent future sewer back-ups. It will not be cheap, but it may be the first serious external cost of global change that I have to pay for directly. I hope it will be the last, but don't think it will be.
Stephen Jay Gould. 1980. The Great Scablands Debate. In The Panda's Thumb: More Essays in Natural History. Norton, New York.
John S. Marr and Curtis D. Malloy. 1996. Epidemiologic Analysis of the Ten Plagues of Egypt. Caduceus 12(1): 7-24.
Jane Mayer. 2010. Covert Operations: The Billionaires Who Are Waging a War Against Obama. The New Yorker, August 30.
William Ryan and Walter Pitman. 1998. Noah’s Flood: The New Scientific Ideas About the Event that Changed History. Simon and Schuster, New York.
Ian Wilson. 2001. Before the Flood: The Biblical Flood as a Real Event and How It Changed the Course of Civilization. St. Martin's Press, New York.
November 15, 2010
My memory is not the greatest, but someone once asked me the question “Why did you and your wife decide to have children?” What I can remember is that I thought it was a strange question at the time; I was somewhat taken aback and didn't quite know how to answer it. I assumed that having children was basically just what all of us did, if given a choice and if we were capable[1]. This is the answer I gave, and I wasn't very happy with the explanation at the time. Since then I've had time to think it over, but it wasn't until recently, when reading an article about “the technological singularity”, that I was able to formulate a much better answer.
The singularity will essentially be a point in mankind's existence where everything prior to that time was known and more or less followed Moore's Law[2], and everything beyond that time will be unknown, because we cannot know what super-intelligent beings will do.[3]
In my opinion, the inevitable outcome of the technological singularity will be the creation of more complex artificial beings by their predecessors. In other words, we will create intelligent artificial beings who will in turn create more advanced artificial beings, and if we were to extrapolate that process, there would be no end to the creation of beings so advanced they would resemble nothing we can possibly imagine. (I am in no way receiving any form of remuneration for recommending “The Age Of Spiritual Machines” by Ray Kurzweil, but that book should be required reading for all kindergarteners. Ok, maybe second grade. If you haven't read it, go get it.)[8]
I came to this conclusion based on the need for life as we know it today to reproduce. It would stand to reason that life, whether artificial or real and tangible[4], has a need to create more life. I will take that one step further and state that life has a need to create more advanced life.
Any creature on this planet teaches its young offspring to recognize the threats of predators so that they may survive. It also provides them with food and water until they are ready to learn how to find these for themselves. These tasks advance the lineage, creating stronger, smarter beings and teaching them the skills necessary to survive; future generations depend on their survival.
We can think of it as evolution on a micro versus a macro level. I wouldn't say that the evolution of the human brain obeys Moore's Law, but maybe there is a carbon-based life-form equivalent.
With that in mind, I suppose a more appropriate answer to the original question, “Why did you and your wife decide to have children?”, is that we wanted to create a better world. By teaching our children what we know, we are adding to their knowledge. By showing them known dangers and teaching them to avoid our mistakes, we are creating more advanced beings. We always want our children to be better off than we were. This doesn't have to be a strictly financial matter. We want our children to be smarter, funnier, happier, more helpful and more caring than we ever were. We want them to have an artistic sense that creates stunning visions and a musical ability that makes us cry with joy when we hear their fingers gliding with each note or their soft voices as they melt our hearts. We want to create scientists, musicians, actors, doctors, nurses, care givers, lawyers, mathematicians, social workers and farmers[5] with good intentions and strong wills. Honesty, Integrity, Lovingness, Intelligence, Reasoning, Forgivingness, Love Thy Neighbor, Love Thy Country. This is what I hope for all of our children.
I think I have done a reasonable job of explaining why my wife and I decided to have our two adorable children, but for those of you with a large family (you know who you are, and I apologize ahead of time), you need to come up with a great explanation or a visit to the psychiatrist may be in order.[6]
I find it very appropriate to tell you, the reader, that as I am typing this I am witnessing a brawl between my four-year-old son Matteo and two-year-old daughter Emma[7]. Actually, it involves Matteo running through our kitchen and living room with a Tootsie Pop, knowing full well that Emma cannot eat one because she is too little (after I told him to keep it “hush hush”). I have the urge to grab that lollipop and hurl it out the back window of our lovely home. But with enough red wine medicine to calm my inner Doctor Frankenstein, I sit quietly, watching them, laughing and smiling at the beautiful creations I am blessed to have had a part in making.
[1] I understand there are some who choose not to have children, and this article is not meant to offend those people. Please excuse a semi-senile 35-year-old man.
[2] Moore's Law predicts the growth rate of computing power. See the Wikipedia article: http://en.wikipedia.org/wiki/Moore%27s_Law
[3] See the Wikipedia article: http://en.wikipedia.org/wiki/Technological_singularity
[4] I hate to use the comparison of Artificial versus Real, because who is to say that we as humans are not some artificial carbon-based experiment created by some unknown being, but I hope you forgive me for the sake of advancing the story. I prefer to use the terms Software versus Hardware.
[5] I tried to run through the whole spectrum of professions, but I hope you get the idea.
[6] I hope I don't need to point out sarcasm at this stage of the game. Please, no angry hate mail from Jon and Kate. If you need to check out a very busy household, go to the following link: http://www.duggarfamily.com/aboutus.html
[7] Matteo will be five years old in April and Emma will be two years old in December.
[8] Photo is of Ray Kurzweil, from Wikipedia: http://en.wikipedia.org/wiki/File:Raymond_Kurzweil_Fantastic_Voyage.jpg (Creative Commons Attribution Generic License applies)
October 25, 2010
Statistics - Destroyer of Superstitious Pretension
In Critical Mass: How One Thing Leads to Another, Philip Ball articulates something rather profound: statistics destroys superstition. The idea, once expressed, is simple, but that does not diminish its profundity. Incidents in small numbers sometimes become ‘miraculous’ only because they appear unique, within a context that fuels such thinking. Ball’s own example is Uri Geller: in the 1970s, the self-proclaimed psychic stated he would stop the watches of several viewers. He, perhaps, twisted his face and furrowed his brow, and all over America watches stopped. America, no doubt, turned into an exclamation mark of incredulity. What takes the incident out of the sphere of the miraculous, however, is the consideration of statistics: with so many millions of people watching, what was the likelihood of at least some people’s watches stopping anyway? What about all those watches that did not stop?
Our psychological make-up seeks a chain in disparate events. Our mind is a bridge-builder across chasms of unrelated incidents; a credulity stone-hopper, crouching at each juncture awaiting the next link in a chain of causality. To paraphrase David Hume, we tend to see armies in the clouds, faces in trees, ghosts in shadows, and god in pizza-slices.
Many incidents that people refer to as miraculous, supernatural, and so on, become trivial when placed within their proper context. Consider the implications of this: Nicholas Leblanc, a French chemist, committed suicide in 1806; Ludwig Boltzmann, the physicist who explained the ‘arrow of time’ and gave us the Boltzmann Constant, committed suicide in 1906; his successor, Paul Ehrenfest, also committed suicide, in 1933; the American chemist Wallace Hume Carothers, credited with inventing Nylon, killed himself in 1937. This seems to ‘imply’ a strong link between suicide and science. Of course, as Ball himself indicates, we must look at the contexts: we must ask what the suicide rate was in general among Americans, Europeans, males, and every other relevant demographic.
Ball shows that in the 19th- to 20th-century Austria of Boltzmann and Ehrenfest, suicides were quite common: ‘[Suicide in Austria] claimed the lives of three of [the philosopher] Wittgenstein’s brothers, [the composer] Gustav Mahler’s brother, and in 1889, the Crown Prince Rudolf of Austria.’ Seen in the ‘light of the relevant demographic statistics’, says Ball, Ludwig Boltzmann’s death does not indicate something special about suicide and science. Statistics made this incident banal by removing it from isolation; statistics returned these strange facts about the Austrian scientists and their suicides to a context that bridged the chasm where the miraculous or spectacular are birthed. Statistics helps us show that the echoes in this Chasm of Credulity harmonise with a larger context, and helps us weed out isolated incidents before they grow into the poisoned fruit of superstitious proclamations. Science seeks ways to bridge, if not narrow, this Chasm of Credulity.
Whether the incidents are psychic telephone calls or astrology charts, nearly all can be minimised, and thus emptied of their pretensions. Bloated anecdotes of precognitive abilities are drained when we think of their corollary: how many more times have you thought of someone and the telephone hasn’t rung? What are the chances of several hundred people’s watches stopping in a crowd of a few million? With the millions of combinations of baked dough, tree bark, and mountain cliffs, perhaps it’s more likely for us not to see a face somewhere in these various phenomena. Statistics can aid us here, bringing us back down to earth instead of letting us drift among the clouds of make-believe.
To make sense of this, consider the ‘birthday problem’: what are the chances that, in a small group of people, any two share a birthday? Let us assume a group of 30 people and 365 days in a year. For a match, two people must share one of those 365 days. We first work out the total number of possible combinations of birthdays across the whole group: that would be 365 x 365 x 365 … once for each person. That means 365^30, which is a massive number. This is the denominator. We can now count the combinations in which no birthdays match, working our way backwards to figure out the probability.
Person #1 states his birthday. Person #2 then has 364 days to choose from, Person #3 has 363, and so on (remember, in this count the birthdays are not allowed to match). A useful image is Person #1 drawing a red cross on a yearly calendar, Person #2 doing the same in one of the remaining spaces, and so on, until thirty people have done it. That is working your way down. So, in working out the number of ways in which no one shares a birthday, we have to say 365 x 364 x 363 … and so on until we have done it 30 times. Thus, we write it as follows:
365!/(365 - N)!

‘N’ equals the number of participants and ‘!’ indicates a factorial; the quotient works its way down as we indicated above (for N = 30, that is 365 x 364 x … x 336). This is the numerator for our example.
Now, we simply combine our figures.
We are left with: [365!/335!]/365^30
According to the calculations, we should get 0.2937. Remember, this is the chance of no two people sharing a birthday. So, the chance of at least two people sharing a birthday is the complement (1 - 0.2937): about 70%, in a group of 30.
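For readers who would rather not multiply thirty fractions by hand, here is a minimal Python sketch (my own illustration, not from Ball’s book) that computes the exact probability and checks it with a quick simulation:

    import random

    def p_shared_birthday(n, days=365):
        # Probability that at least two of n people share a birthday:
        # 1 - (365/365) * (364/365) * ... * ((365 - n + 1)/365)
        p_all_distinct = 1.0
        for i in range(n):
            p_all_distinct *= (days - i) / days
        return 1.0 - p_all_distinct

    def simulate(n, trials=100_000, days=365):
        # Monte Carlo check: draw n random birthdays and count how often any collide.
        hits = sum(
            1 for _ in range(trials)
            if len(set(random.randrange(days) for _ in range(n))) < n
        )
        return hits / trials

    print(p_shared_birthday(30))   # ~0.706
    print(simulate(30))            # ~0.70-0.71, varies slightly from run to run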
Using careful calculation we arrive at a counter-intuitive conclusion: in a group of 30 people, the chance that at least two share a birthday is not just above 20 or 50 percent, it is above 70 percent. On face value, few of us would guess the chances were that high. This shows there is actually nothing remarkable or special or spooky about two people sharing a birthday, considering that cold calculation indicates the likelihood is better than a coin-toss.
How does this reflect in superstition? Using the horizon this little but wonderful example provides, we can eclipse all manner of abysmal superstitious exclamations: What were the chances that we would meet again? What was the likelihood that I should win the lottery/win at Blackjack after I wore my lucky-jacket, prayed to my god, etc.? What were the chances of recovery from my cancer, after I went to a homeopath, a crystal-healer, a witch-doctor? All these are important questions, but are asked in a rhetorical flourish meant to indicate that the chances ‘were slim’ or ‘highly unlikely’, thus it must be the magic-man that heals, or your hidden psychic connection that provoked meeting your friend.
Consider the danger of ignoring proper calculations in medicine. People often tell us they go to a homeopath after going to a doctor; the doctor who is merely a puppet to ‘big-pharma’, who treats ‘me like a machine’ and so on. The medicines ‘Western’ doctors supply ‘do not work’, so people turn to something more catchy, comforting and casual: the homeopath, the angel-healer, the witch-doctor. Strangely, one thing doctors can learn from these hucksters is the attention given to patients: the care, the pampering and the dignity conveyed. These all appear to play a part, though some, like the great Barbara Ehrenreich, destroyer of all positive thinking, remain sceptical of how much attitude really affects health. If for no other reason than to keep patients, doctors could learn from these practitioners (they may be ‘practitioners’ but they are not medical ‘practitioners’). However, in the most important engagements of medicine there is no time for pampering, or it is simply inappropriate in an environment where, for example, the most important thing is to immunise a child.
Back to the patient: Firstly, what were the chances of you getting cured of your ailment anyway? Secondly, are we talking about a cold or a cancer? Is it absolutely impossible for cancers to suddenly go into remission without medical intervention? Of course not; oncologists will relate many stories where this has happened suddenly. The irony, of course, is that people imagine medical treatment as a coin-toss: you flip a coin once, the chances of getting tails are fifty-fifty. If you flip it again, the chances of getting tails remain fifty-fifty. The chances are ‘reset’ each time (this is different to asking how many tails I can get in a row, for example). But medical treatment does not ‘reset’ (similar to Ian Hacking’s Inverse Gambler’s Fallacy). Medical effects carry over.
People forget that medicine takes time to have an effect. When the effect happens to coincide with you drinking glorified water or smelling pretty aromas, many will point to homeopathy or aromatherapy as the curing agent. But you might as well point to closing your car door or scratching your chin, since these might also have coincided with your body’s defences responding to the aid you had taken weeks or months ago. This false attribution to alternative remedies gives them undeserved recognition and detracts from what actually cured you: even if it was not the medicine, we can safely say it was just your body! Medicine, though incredibly advanced, is still swathed in mystery, but that does not mean we should resort to made-up answers or whatever is convenient.
All these factors become apparent when we put an incident into its proper context, asking for calculations and chances. Statistics is also wonderful because numbers do not discriminate, though obviously people may use them to do so.
The only thing remarkable about the strange world of ‘alternative medicine’ is the extent to which we allow ourselves to be duped, paying billions of our currencies into industries that consistently prove the power of the placebo. We are watching the pretensions of assertions squander our money. These fraudsters are using the Chasm of Credulity, the gap of isolated incidents, where the echoes of events removed from their context reside, leading to the fruition of bad thinking and anecdotal justifications. It is the same chasm across which people take leaps of faith and jump to conclusions.
The main reason scientists do not automatically trust anecdotal evidence is that we need to put it into a context, test it, prod it, poke it. Anecdotes have been, and can be, the first stirrings of something magnificent. But if the scientific eye turns toward a phenomenon and it shrivels up and dies under scrutiny, it probably was not worth pursuing anyway. Someone’s clouds of hot air dissipate when cold reason enters the room.
For example: simply saying I felt better after being ‘touched’ by a magic man tells us nothing. Even if millions of people testify to the abilities of holy men and women, as they do in India with certain gurus, we need to obtain a context: the likelihood of the same results occurring naturally (for example, did he really cure someone, or was the patient’s disease likely to disappear anyway? What are the chances the storm clouds had been gathering for days and were not summoned by a rain-dance?). Anecdotes are by definition after the fact, often not repeatable, and, most important, often divorced from their context. Remember: what makes an event miraculous or supernatural is, more often than not, ignorance about the statistics of its occurrence within a specific context, as we saw with science and suicides, and with random people sharing a birthday.
To give you a further idea of this, consider a seemingly incredible find: Ben Goldacre relates a story from England in which ‘drinking the Queen’s Royal Deeside spring water improved arthritis symptoms in two-thirds of patients.’ It sounds remarkable until we put it into context, as Goldacre does: ‘It was a study of 34 patients over three months and there was no control group.’ To truly engage us, it would need many more patients, over a longer time, and a control group: that is, a group that serves as a foil to the original, with characteristics similar to the experimental group’s, but given a placebo. To create a context for this remarkable find, we must offer a control to see whether it was truly the Queen’s Royal Deeside spring water or something else (if the control group gets similar results, it does not mean the control was the cure, but that, more likely, it was neither the experimental cure nor the control). Goldacre quips: ‘It’s hard to imagine an experiment where it would have been easier to come up with a convincing placebo [for a control group]. Water.’ Remember the birthday example: it sounds remarkable until we actually use statistics. Similarly, things become remarkable when we are unaware of the likelihood of, for example, arthritis improving anyway due to the body’s own resistance.
Michael Shermer, in Scientific American, wrote: ‘thinking anecdotally comes naturally, whereas thinking scientifically does not.’ This is because thinking scientifically is, most often, counter-intuitive to our ape minds: we are not computers or calculators. Would anyone guess that there was a better than 50% chance that two people, in a random group of 30, share a birthday? Would anyone automatically think that tiny things called bacteria and germs and viruses can cause untold misery and death, sometimes able to destroy entire civilisations?
No wonder that, for the latter, we invoked gods, since it seemed there was no other explanation. The irony is that gods and germs both ‘explain’ the death of crops; the question is which explanation has been more useful, which has helped with preventative measures, and so on.
We could say (1) that sacrificing a virgin and letting her blood drain into the soil satisfied the gods, resulting in our crops being restored, or (2) we could point out that specific bacteria are infecting our plants and that getting rid of these leads to restored crops. We face enormous problems if we use the first, considering, for example, that not all virgins seem to pacify the gods. At the least, for simply practical, testable reasons – not to mention that crops have been restored despite no sacrifices over the years – the latter is more helpful, and indeed more people realise as much for this simple, pragmatic reason. Yet we can’t escape the fact that both explain the same phenomenon. To explain is not to justify, or even to reasonably justify; an explanation is simply a story we tell to narrate our target events. Gods or bacteria, both purport to account for the same thing. Statistics, and indeed science, can help disconnect the two by showing that, whilst both are explanations, only one survives objective testing, so that even outsiders can ‘cure the appetites of the gods’.
I am reminded of Wittgenstein’s pertinent question: ‘Why did people think the sun went around the Earth?’ A reply given was: ‘Well, it just looks that way!’ Wittgenstein looked up at the sky and said: ‘But what does it look like when the Earth revolves around the Sun?’ Both heliocentrism and geocentrism arise from the same platform: looking up at the ‘movement’ of the Sun. We now know which is true (it’s heliocentrism in case you’re wondering).
Today, our societies face even worse submission before the altar of intuition, upon which bleeds all the evidence to the contrary.
The recent horror of anti-vaccination foolishness is a case in point, one that an awareness of statistics could have blunted. Shermer relates that there were a number of ‘parents who noticed that shortly after having their children vaccinated autistic symptoms began to appear.’ It was the beginning of the furore that would claim children’s lives, all because people believed anecdotal dogma above scientific reasoning. This was then compounded by the fraudulent blathering of Andrew Wakefield. Indeed, Wakefield is an excellent case study of the power of statistics to arm us against charlatans like him.
Wakefield published an article in the prestigious Lancet journal in 1998. It was more of a speculative piece, one that did not warrant the media’s salacious transformation of it. In it, Wakefield reported 12 cases relating to his topic, the first stirrings of the supposed link between autism and vaccination. Depending on what is being claimed, 12 cases are either remarkable or statistically negligible: 12 people sprouting wings or extra limbs from touching a wall would warrant attention. Goldacre says: ‘For things as common as MMR and autism, finding 12 people with both is entirely unspectacular.’ This plugs the case back into context, stripping it of anything remarkable. Johann Hari agrees that the pool of test subjects was too small: ‘It was based on a tiny pool of infants, most of whom were in the study because their parents believed in the link [between vaccines and autism] and wanted to sue for compensation.’
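Goldacre’s point can be made concrete with a back-of-envelope sketch. The figures below are my own illustrative assumptions (a round birth cohort, near-universal MMR coverage, an autism prevalence of roughly one percent), not numbers from Goldacre, Hari or the Lancet paper:

    # Back-of-envelope estimate using assumed, illustrative figures only.
    births_per_year = 700_000        # assumed: rough size of one UK birth cohort
    mmr_coverage = 0.90              # assumed: fraction of children receiving MMR
    autism_prevalence = 0.01         # assumed: roughly 1 in 100 children

    # Children in one cohort expected, by chance alone, to have both
    # received MMR and later been diagnosed with autism.
    expected_overlap = births_per_year * mmr_coverage * autism_prevalence
    print(f"Expected children with both MMR and an autism diagnosis: {expected_overlap:,.0f}")
    # With these assumptions the answer is in the thousands, so a case series
    # of 12 children with both says nothing, by itself, about a causal link.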
Wakefield then went on a media rampage, publishing wherever he could to poke and prod at the medical procedures involved (specifically saying that the vaccinations should be separated by perhaps a year). Hari and Goldacre correctly condemn the media as the main culprit in this saga of salaciousness, this epic of idiocy: giving an equal platform to health professionals and grieving parents, as if both had an equal basis for scientific judgment. Here’s a clue: tears aren’t evidence. No doubt there is nothing worse than to lose a child; but when your own anger and hatred will lead to the death and suffering of other children, you deserve no compassion. I am looking at you, Jenny McCarthy.
Parents were given a tangible culprit in the form of the ‘Western medical establishment’ to blame for their impaired child instead of facing the facts of an indifferent universe, with no cosmic balance or care for us. Using no scientific facts, except sometimes Wakefield’s now completely discredited authority, mothers could invoke their own intuition to decide whether ‘stabbing their child three times’ was a good thing; they were encouraged to consult something as unfounded as psychics: their gut-feelings.
If we want a possible half-definition of science, perhaps it is this: whatever is counter-intuitive, perhaps most upsetting to our axiomatic assumptions, that thoroughly and clearly and elegantly explains the phenomena we are encountering. It’s not a perfect definition, but then, it’s not meant to be. Consulting your gut-feelings is precisely how not to do science in this sense: we are not so ‘made’ that consultation with our internal organs will lead to a proper explanation of the world; indeed, we will get the same results by consulting the innards of other animals, like cows or chickens. The point is that, time and time again, science has shown the world to be other than we expected it to be. (Of course, there is the opposite, too, but we are not relating that for now.) Mothers encouraged to consult innards, whether their own or a bovine’s, were being encouraged to act against evidence-based medicine: a long and hard-fought history of combating disease, and arguably our greatest achievement as a species, one that continues to save millions of lives.
Here is how statistics killed Wakefield’s reputation. His Lancet findings, drawn from a tiny, biased pool of patients, were overshadowed by a later investigation of 1.8 million randomly chosen children in Finland. It found nothing untoward when the children received the MMR vaccine. Hari tells us:
Even more startlingly, it was found that when MMR was suspended in Japan due to production problems, autism rates held steady - but 90 extra children died of measles. This evidence was waved away by much of the press as difficult and indigestible; they preferred to focus instead on brain-dead trivia… (italics added)
The statistics tell us, then, that there is no meaningful relation between MMR and autism. The sheer size of these investigations completely undermines Wakefield’s biased nonsense. Of course, these weren’t the only tests, but their scale indicates why, from them alone, we can at the least be highly suspicious of Mr Wakefield and, at best, completely dismissive of him.
Wakefield was, however, not the main problem. It was the media’s coverage, their dismissal of important statistics it took me merely seconds to find. If you want the actual culprits, all you need to do is investigate. My point is this: merely by putting Wakefield’s findings into a proper context, we can see whether he is worth taking seriously or is biased and mistaken, if not lying. As with the assertion that drinking the Queen’s spring water heals arthritis, we can increase the sample size, look wider and farther, investigate other explanations, or ask whether there is an explanation worth pursuing at all. The MMR-autism link was not worth pursuing, and we would have been better off had it never been pursued: even one child dying from not being immunised is one child too many. Yet, as one powerful website has indicated, we can (at the time of writing) attribute 612 preventable deaths in the US to the furore and madness that was targeted at vaccines, and alongside that figure stands another: 66,515 preventable illnesses.
If we need any more reasons, I can provide them. A problem close to home, in both the metaphorical and the literal sense, came about through the public poli(dio)cy of Thabo Mbeki and his ‘denial’ of a link between HIV and AIDS. There is much speculation about whether he really believed this, but he very fervently treated it as a colonial problem instead of a medical one; so much so that the distribution of anti-retrovirals was curtailed because Mbeki denied ‘Western’ science’s diagnosis of HIV/AIDS. Here is the great Raymond Tallis, quoted in full, from his brilliant Hippocratic Oaths:
Of the 70,000 children born annually to HIV-positive mothers in South Africa, about half could have been protected from becoming HIV-positive themselves, and suffering a painful, protracted death, with a single dose of a cheap anti-retroviral drug. Mbeki did what he could to stop this happening. Many of the 800,000 non-infant deaths a year from Aids could also be prevented by making antiretroviral drugs available, but Mbeki’s ideological views did not permit it. According to a recent study (suppressed by the South African Government, which maintains that anti-HIV drugs are toxic and will primarily benefit pharmaceutical companies) immediate provision of such drugs could save up to 1.7 million people by 2010. As one of his former supporters, the Anglican Archbishop of Cape Town, the Most Rev Njongonkulu Ndungane, has said, Mbeki's Aids policies are as serious a crime as apartheid — and have already killed many more people.
Mbeki was and indeed is aware of the statistics, which highlights another problem: statistics can be ignored. But then, so can the preventable deaths of infants who die as a result of your bigoted delusions. Ignorance, like a flood, does not discriminate in what it sweeps away.
Empowering ourselves with numbers might seem strange, until we recall how statistics can destroy the pretensions of charlatans and of supposedly miraculous happenings. Indifferent in itself, statistics displays information anyone is welcome to assess. You would be hard-pressed to defend Wakefield’s tiny Lancet study, with twelve children, over the thorough Finnish one, with 1.8 million. However, numbers are not the end: control groups, double-blind mechanisms and the sensitivity to scientific reasoning that comes with studying statistics are also necessary. In many instances of outrage, like the anti-vaccination uproar or Mbeki’s idiocy, we can reasonably assume that the assertions have no backing with regard to control groups, alternative hypotheses, and so on. It is invariably the ape-man bursting out of the lab coat to pound his chest, beating out the rhythm of his own bias and delusion.
December 14, 2009
Look Who's Talking: The Turing Test's 3,000 Year History - And My Proposed Modification
In his famous experiment, Alan Turing pictured somebody talking with another person and a computer, both of which are out of sight. If they're unable to tell the computer from the human being, the machine has passed the "Turing Test." But here's a question for a human or a machine to answer: Why did Turing pick speech as his proof?
The Test is usually described as a way to determine whether a computer has achieved consciousness, but Turing's original framing was more subtle. "I believe (the question of whether machines can think) to be too meaningless to deserve discussion," he wrote. "Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
Now, that's interesting: Not only did Turing choose good conversation as a valid substitute for proof of machine "thought," but he then added an implied proof - based on what people say. If people say machines "think," then they do think. If people say they're conscious, then they are conscious.
Why such an emphasis on speech - the machine's, and our own? The idea that language, words, and names are a measurement of consciousness goes back at least 3,000 years, to the Tower of Babel story from the Book of Genesis. "And the whole earth was of one language, and of one speech," it says, "and they said ... let us build us a city and a tower ... and let us make us a name." You know what happens next: "And the Lord said, Behold, the people is one ... now nothing will be restrained from them, which they have imagined to do." The great tower, that literal Hive Mind with its worldwide common language (HTML?), came crashing down. The lesson? Language and knowledge equal personhood, but too much equals Godhood.
People could create artificial life in the ancient texts, too - but their creations couldn't speak. In the Talmud, Rabbah makes an artificial man that looks just like the real thing, but a shrewd scholar - one Zera, whom I picture as looking like Peter Falk in Columbo - administers a Turing Test and the creature flunks: "Zera spoke to him, but received no answer. Thereupon he said unto him: 'Thou art a creature of the magicians. Return to thy dust.'"

Flash forward to the 1600s and Descartes, who wrote in Discourse on the Method: "If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others."
I don't know if Descartes read the Talmud, but he claimed to be religious and even wrote an ontological argument for the existence of God (if not a very convincing one). There's no question he read Genesis, as well as many other works, poems, and stories derived from these ancient texts and legends.
Did Turing read Descartes? We don't know - but we can be pretty sure he saw another work: Boris Karloff's Frankenstein. The monster, who was eloquent in Mary Shelley's book, was mute in the movie. Whether or not the filmmakers were echoing these ancient stories, they'd undoubtedly seen the 1920 German film The Golem, based on a folktale derived from the Talmud passage about the wordless "man" made of dust. The Golem story spread in the shtetls of Eastern Europe during the 18th and early 19th centuries, around the same time the Frankenstein story was written. They may both have stemmed from the same fear - that humanity's industrial advances were bringing us to a new Babel even as new medical discoveries invaded God's turf.
I'm not a big fan of the Turing Test (which is analyzed in detail here). I'm sympathetic to the Chinese Room argument that you can replicate speech without creating the sentience behind it. I lean toward the idea that most speech is just an output for the human species, the way honey is for bees or webs are for spiders. My first mother-in-law could weave something that looked like a spiderweb, if you asked her nicely, but that didn't make her an arachnid. So if we build an AI - or meet an alien, for that matter - that can speak like a human being, I still won't be completely convinced it has consciousness like ours.
Which gets us to singing. Its main evolutionary purpose seems to be attraction - either sexual, or as a way of establishing trust. Daniel Levitin suggests that singing might have been used to convey honesty when a stranger approached a new community, because the emotion conveyed is more difficult to fake. Maybe that's why Bob Dylan's more popular than Michael Bolton: it's easier to lie with words than with music, and the successful transmission of emotion is more important to us than the sweetness of the voice.
So I hereby propose a modification to Turing's test: Instead of asking our entity to speak, let's ask it to sing. If it can make us cry with a sad song, we'll say that it's conscious. And if it can get us aroused - with, say, a new version of "Sexual Healing" - well, then let's just say our experiment could take an unexpected turn.
It's true that all of the arguments against the Turing Test could also be used against this one, so it doesn't really advance the debate very far. But what the hell: At least we might hear a decent song for a change, instead of all the crap they've been playing lately.
March 02, 2009
A Scientist Goes to an Ashram for a Personal Retreat – Part 2
Part 1 of "A Scientist Goes to an Ashram for a Personal Retreat" can be found here.
(Note: I do not use the real names of people, nor do I identify the specific Ashram. I changed a few details. The purpose is to protect the privacy of the individuals. Readers who are familiar with this Ashram will probably recognize it.)
I Make Contact
My first few days at the Ashram were filled with a good deal of uncertainty. Where do I sit in the dining hall? Will I violate some standard of etiquette among people pursuing a serious religious practice? What if I say hello to someone who is spending time in silence? I know I'm going to get a stern look if I upset someone's spiritual practice. My predilection is to do nothing, say nothing, and hope I do not trip over my own feet with a monastic faux pas.
The first evening I walked up to the building that housed the dining hall to make sure I was there at the start of the dinner period. The building is like a visitor center, with a small shop selling books, CDs, DVDs, gifts, and items of religious significance. It also houses the media center. I looked in through the door to the dining area and into a large common area. It's very much like a multipurpose room in a small high school: auditorium, lecture stage, gym, and dining. There was a decent-sized commercial kitchen off to one side. Tables were set up for a buffet service. Tables and chairs were arranged around the auditorium. There was a soundproof control room in a corner opposite the stage, which was part of the media center. I could see access to a patio for eating outside. It was January, though, so we stayed inside. I walked over to the food and toured around the two buffet tables. I was alone and didn't know if I should begin eating or not. I returned to the hall outside the dining area. There were a few people there, but no one seemed to be organizing themselves for dinner. I went back into the dining room and saw a lone gentleman filling up a plate. I started doing the same. Then it happened. I made my first breach of monastic etiquette. The gentleman politely told me I had to wait for the gong to be sounded, enter with the others, and wait again for a communal prayer to begin the mealtime. He had to be elsewhere and was taking a plate of food so he could make his other appointment.
OK, that wasn't too embarrassing. After a few more minutes a dozen or two people gathered. An aproned cook opened the door and sounded a small hand-held gong. We filed in and stood together around the food. Someone started a Sanskrit prayer that was sung by everyone. The feeling they projected was communal, happy, and relaxed; they enjoyed their prayer as a prelude to eating. I was feeling more comfortable. With the end of the singing, the group recited a prayer in English, the words displayed on a large framed poster on the wall. Eventually, I learned to follow and recite the prayer, along with a shout of “Ji!” in response to another incantation. It was like an affirmation, an “Amen” if you will, that ended the prayers and gave everyone permission to “dig in.” I was pleasantly surprised at the variety and presentation of the vegan food. In addition to recognizable salad items like greens, tomatoes, broccoli, cauliflower, and carrots, there were all sorts of Middle Eastern and Indian dishes. Of course there was lots of tofu cooked this way and that. It all looked very good and tasted great, as well.
Not knowing where to sit, I went to a table farther out, facing back toward the food and the other diners, and started to enjoy my dinner. I was recognized for what I was: a brand-new visitor who didn't know up from down. A woman monastic, Swami Learananda, came over and invited me to sit with her and several others. I met a couple of monastics and visitors like myself. The visitors tended to be friends of the Ashram who come periodically for the spiritual practice and experience. A few were newbies like myself who were referred by others. Swami Learananda said I looked familiar and that we had met here before. I told her she looked familiar, and that I had met her more than fifteen years before when visiting Giri and Yukteswar. “Of course,” she said. Learananda was wonderful to talk to and made me feel comfortable, relaxed, and very much at ease. She was raving about the homemade bread and organic homemade jam, so I had to try it. It was wonderful. For a few moments I considered applying for lifelong study as a Swami-in-training just for daily access to that homemade bread and jam. Although I enjoyed every bit of the plentiful food, I was afraid I would be very hungry between meals. At home I'm frequently hungry between meals and tend to nosh a lot. Never, not once, did I feel hungry between meals at the Ashram.
I went back to my room and studied the schedule for the week. There was guided meditation at 5:30 AM and 'open' meditation at 6:30 AM. Guided meditation didn't appeal to me for two reasons: it was too early in the morning, and I was sure I would be a distraction to the 'guide' and the more serious participants. I didn't want to make another monastic faux pas. Besides, a whole hour of meditation might be more than I could handle. So, 6:30 it was. I didn't have any trouble getting to sleep. I was exhausted from the trip. My room was large, with a small kitchenette, and the bed was very, very comfortable. For the entire week I never made it to any early morning meditation. I must have been in need of a lot of sleep because I slept late every single morning. That was OK, though, because one of my reasons for coming here was to get some rest and quiet. I wasn't hassled by anyone to adhere to a schedule. I was treated like the guest I was, and I could set my own schedule. Usually, a visitor will ask for a spiritual guide and be assigned to one of the senior staff. I didn't ask. Instead I chose to follow my nose and spend time with different members of the Ashram.
On the second day I went to the visitors office to announce my presence and pick up information about activities, schedules, a few rules, and a map of the property. The head of the visitors office is an 80-year-old woman monastic, Swami Mataji. Mataji is not her Sanskrit name. It is an honorific, a term of respect and affection reserved for older women. As the senior woman monastic, everyone calls her Mataji. I recognized Mataji from my earlier visit, and she recognized me. I can understand that I would remember some of them, but that they would recognize me is incredible. Mataji is a former Catholic nun who found her spiritual center at this Ashram many years ago. She was smiling, energetic, and nice to be around when I first met her. More than 15 years later, nothing has changed. Mataji was also a reader during the silent period of lunch each day. The first 45 minutes of lunch was the only compulsory time of silence for visitors at the Ashram. She would read from varied texts and from the writings of the founder. This is not unlike practices at Christian monasteries, to this day. She had a wonderful voice that I could have listened to all day.
Two Lessons in Humility
On my second afternoon, I ran into Amarananda outside the media center where she worked. I thanked her again for the drive from the Amtrak station. Then I told her that I had 3 CDs of Tibetan ritual chants on my laptop. I had the .mp3 files and would like to give them to her. I told her I knew she would love them. We made arrangements for me to transfer the files to her home computer after the workday. Later, she drove me to her house, less than a mile away, and let me at her computer. I build and maintain my own computers and my own network. For friends and family I am the de facto 24/7 in-home IT service technician. For the record, Linux rules and Windows (especially Vista) sucks. As usual, nothing went right the first time, and I had to return to my room to download other software, drivers, codecs, etc. I think I finished the next day and managed to debug and fix a couple of problems with her computer. Some people fly remote control helicopters. I play with computers. She loved the Tibetan Buddhist ritual chants, just as I knew she would. Amarananda was very appreciative, I was happy, and we settled down to some tea and resumed our conversation about Buddhism, the Ashram, and a host of other related topics.
At one point she asked me if I believed in reincarnation. I said no, but that I am always open to listening to the views of other people and traditions. I won't go into the details, but it was a very central belief for her, and gave a great deal of meaning to her life. This is true for all Buddhists. Then I thought I would be very clever and ask her a question that would demonstrate how much I understood about Buddhism. I asked, “How does belief in reincarnation inform you as to what constitutes a good life? What does reincarnation tell you about why you should lead a good life?” Of course, I knew the correct answer. Scientists are oriented to finding the correct answer. Within the span of about 20 seconds, the time she took to give me an answer, I realized I didn't know diddly squat about Buddhism, and I wasn't that clever after all. My correct answer about being good in this life so you don't come back in the next life as a 3-toed sloth was totally irrelevant. I decided I had to drop all pretenses, do a lot of listening, and take the opportunity to learn as much as I could.
That evening I sent an email to a few friends and told them about my trip, some of the people I met, the food, uploading my Buddhist chants for Amarananda, and my Karma Yoga. Karma Yoga is the notion of doing good as a consistent practice, so as to help you 'earn' successive steps in the cycles of death and rebirth as you strive for ultimate enlightenment. My Karma Yoga was volunteering for kitchen clean-up after dinner that night. The last time I did kitchen duty was in a monastery when I was much younger. I hated every minute of it. The worst job was scrubbing pots and pans. At the Ashram, however, I enjoyed every minute of it. Once I finished inside the kitchen, I cleaned all the tables and chairs. The food service manager came out to tell me the tables and chairs had never been this clean. In my email to my friends I was trying to be clever again, telling them that on the great Karma scorecard in the universe, I had added three points to my plus column. The next morning I got near-identical responses from two friends. They went something like this: “Norm, we know you better than you know yourself. Stop adding up points on your score and LET GO!” They were right. Two lessons in humility in two days was sufficient. I decided to let go. This is not a place for a quantitative scientist who is trying to impress people with cleverness and wit. I thanked my friends for telling me what I needed to know. I thanked Amarananda, as well.
The next day I met Swami Raneananda, another woman monastic, outside the dining hall. We struck up a conversation and she said she recognized me from before. I didn't remember her. This is really uncanny. Raneananda has an incredible sense of humor. We hit it off the first day. We would tell each other real life humorous stories. It got to where we agreed to take turns. I couldn't get another story out of her until I told her one of my own. Here's one of Raneananda's stories. Some time ago, she met another woman visiting the Ashram who was an avid skier. The other woman was an active practicing Catholic who went to Montreal to see the Polish Cardinal Karol Wojtyla, later to be Pope John Paul II, at a conference. The woman approached the Cardinal, while everyone else was standing back and reserved. She introduced herself and said that she and the Cardinal had something in common. “What's that?” he asked. She said that both of them loved skiing. She asked, “How many Polish Cardinals like to ski?” He replied, “Half. The other one doesn't ski.”
My Visits to the Shrines
One afternoon Raneananda and I were talking about the religious meaning of the concept of Baptism. For her, the meaning was as simple as it was clear: a complete cleansing of the past, and a new beginning to try to lead a good life. This was quite different from some traditions, like Christianity, in which it is a one-time event to remove an otherwise indelible scar and initiate someone into a faith community. For her, the idea of Baptism could be a recurring ritual. Leading a good life may have many setbacks. Putting everything about a hurtful, unkind, and damaging life behind you with a single ritual of cleansing, and a commitment to start anew, is a very powerful idea. In my view, it does not require a belief in a personal God, nor an expectation of reward in an afterlife. For many people and traditions it does require these. Raneananda loaned me a copy of a Hindu commentary on the teachings of Jesus, as written in the Christian testament. There was a discourse on the baptism of Jesus by John the Baptist. After arriving home, I bought the entire two-volume commentary. For believer or nonbeliever, for anyone who is interested in comparative religions, or in the study of religion as a natural phenomenon, it's a fascinating one-of-a-kind source. Raneananda's comments sounded a lot like the experiences of people who have made multiple attempts at sobriety through Alcoholics Anonymous.
That afternoon Raneananda had trash pickup duty for the monastic residence buildings. Then she had to staff the gift shop at the Shrine, about a mile away. I offered to help her with garbage pickup and disposal. So we hopped into the Jeep Cherokee; I helped bag the garbage and throw it on top of the roof rack, and she drove a short way to the dumpsters. I accepted an offer to drive with her to the Shrines. Her duties included opening and closing the gift shop, the founder's shrine and burial vault, and the main gates and entrance to the main temple. I could also ride back with her when she closed, on a road that is all uphill for a mile and a half. The main temple is on a plain that borders a river. The rear of the temple looks out over the river. The temple faces a steep hill. The temple has a long entrance promenade and is set back from the main gate. It looks a little like the approach to the Taj Mahal, on a smaller scale. Outside the front gate and promenade entrance are the gift shop and an exhibit/education hall. Looking up from the temple front, the founder's shrine is about one third of the way up the steep hill. At the very top of the hill, and not accessible from the temple, is a shrine to Siva. His dance of the spheres keeps our universe working the way it ought to be working. In the Western tradition it was the clockwork music of the spheres.
Raneananda dropped me off at the founder's shrine. It was about the size of a deep three-car garage. There was a floor-level area in front of an altar-like structure. As in many Christian churches, there were a few steps up to the altar level. On the altar was a full-size wax statue of the founder, adorned with silks and sashes, and flowers. On the altar level was a gentleman, probably in his fifties, in deep meditation, sitting cross-legged with his spine bent forward. My initial reaction was disappointment at looking at a realistic image of the founder. Personally, I don't like the idea of, or the hint of, any kind of idolatry. My view is that words, and teachings, and meanings, and the spirit of the message should be the focus of meditation and contemplation. I didn't want to go up to the altar, so I took a couple of cushions and sat on the floor. After a short time my back was killing me. I got up and sat on one of the chairs against the wall and window opposite the altar. Thank God I never went to the early morning meditations, because I never would have made it all the way through. After a short while I found myself in a very relaxed, deep, meditative state. I was present, I was conscious, I was not asleep, but I was in a state of … (there is no description for that state). This was not a 'religious experience', nor was it transcendent. I was in NOW. I was in PEACE. I stayed in this state for at least half an hour or more.
Eventually, I got up, returned the cushions, put on my shoes, and walked down to the main temple. I waved to Raneananda as I walked through the main gate and proceeded down the tree-lined promenade. I entered on the bottom level of the temple. The bottom level is a 360-degree exhibit of the major religions of the world. The exhibits are behind glass, against the outer wall of this large circular space. There are ten major, named religions with their own informative exhibits: Judaism, Christianity, Islam, Hinduism, Sikhism, Native American, Shinto, Buddhism, African, and Taoism. There are two other exhibits: one for all other named and unnamed religions, and one for all secular religions, which include science, humanism, capitalism, atheism, etc. The place has a definite ecumenical and inclusive spirit. The upper level is a place of meditation, with subdued lighting and a silence that is deafening. When I left the shrine I thought I would demonstrate some sign of respect. I don't like the idea of bowing to anyone. If I were to meet the Queen of England I would not bow or kneel or whatever. Not a small part of that is being a U.S. citizen: we don't bow or kneel before anyone, especially a throne we cast off. Then I thought, what would I do if I were a guest in someone's home and it was time to leave? What might be an appropriate way to show respect to your host? I know. I would shake hands. So I decided to shake hands. My form of respectful handshaking was putting my two hands together, holding them close to my chest, fingers pointed up, and giving a bow. I did this when traveling in Thailand. Raneananda closed up the Shrines and the main gates, we got in the Jeep, and she drove us back to the main complex and to dinner.
What Am I Doing Here, and What Does This All Mean?
How does all of this validate the views of someone who does not believe in a personal God, but who has a strong sense of being one with the universe, and possibly of losing the sense of self in the experience? The transcendent and the numinous can be accessible to the most materialistic of scientists, without positing the supernatural. At the same time, there is no reason to mistrust the same experiences in believers simply because they posit a supernatural source. The question is not “Does God exist?” That question is irrelevant. The question is whether believers and nonbelievers can rejoice in the same experiences and not denigrate the other's explanation of the origins of very powerful human responses.
Bringing all of this into a coherent, complete narrative will take one final part to the story.
January 12, 2009
Understanding Arthur Alexander
Nothing kills the enjoyment of music for some people faster than trying to analyze it. But I’m obsessed with solving the mystery of Arthur Alexander. His body of work is small. His songs are musically and lyrically simple, even simplistic. Almost nobody but the most dedicated music lovers remember his name today. Yet he was the only songwriter to win pop music’s Triple Crown: His songs have been covered by the Beatles, the Rolling Stones, and Bob Dylan, arguably the three most respected songwriting acts in rock and roll history. Dusty Springfield, Ry Cooder, Roger McGuinn, and dozens of others¹ sang them too.
I’ve been wondering about these tunes for 45 years now, since I was ten years old. Maybe I’m getting closer to understanding them, but I’m not there yet. After all, his chord progressions were basic. His lyrics seem banal on paper: “Every day I have to cry some/wipe the water from my eyes some.” “Oh my name is Johnny Heartbreak …” “Me and Frank were the best of friends …” But by at least one objective measure – the artists who covered him – he was the greatest rock songwriter who ever lived. Subjectively, his best songs are impossible for me to resist as a listener and indescribably rewarding to sing.
So who the hell was this guy, and what made him so good?
He had a brush with R&B stardom as a singer, but really made his name as a songwriter in the 60’s. Yet even after the Beatles and Stones covered him he had trouble collecting royalties. He lived out the next 25 years as a bus driver, interrupted only by one small hit in the 70’s. Then he enjoyed a brief comeback in 1993² before dying suddenly.
I was first introduced to Alexander, like many of my generation, by the Beatles’ cover of "Anna." That track is a great reminder that, before he went on his odyssey from musician to activist to martyr to Apple icon, John Lennon was one of the great rock and roll singers. Alexander’s songs lean to melodrama, and Lennon milks this one for all it’s got. Alexander’s simple vocal patterns leave singers a lot of room to fill the space, and Lennon's able to pull out tricks Alexander hinted at in his original recording, like the Buddy Holly-ish pseudo-yodels that punctuate the bridge (“oh-oh-oh-oh …”).
That’s one of Arthur Alexander’s secrets: His lean song structures make them a pleasure to sing. And his recordings provide suggestions rather than instructions. Where other writers fill every measure with musical and lyrical acrobatics, Alexander’s are spare frames singers can hang their hearts on.
Emotionally, each song has a story arc. If you wrote songs using the Syd Field screenwriting method they’d turn out a lot like Alexander’s. They’re three-minute mini-operas full of conflict and resolution. Take “You Better Move On,” which the Rolling Stones covered in 1964: A poor boy’s talking to his wealthier rival, and he humbly admits he can never give his love the good things he wants her to have. But then he turns on his competitor … “I’ll never let her go,” he says, “I love her so.” Then the air fills with tension. “I think you better go now,” he says quietly, “I’m getting mighty mad.” Soft-spokenness can be more menacing than a raised voice, and Arthur Alexander knew that. Sound corny? Lame? Yeah, maybe. But listen to this cover by Mr. Ironic Distance himself, Randy Newman (before Newman launches into his own “It’s Money That Matters”):
There’s no distancing in Newman’s performance or Mark Knopfler's accompaniment, no sense of anything but the drama in each moment. That’s the best thing about Arthur Alexander’s songs: They’re irony-proof.
The best Arthur Alexander songs underscore their emotional shifts by staying in a pretty narrow melodic range on the verses to build tension, then going much higher on the bridge to heighten the emotion, and finally returning to the original melody in a resolved emotional state. Alexander probably picked up some of these tricks from singing country music; open-hearted C&W tunes like “I Wonder Where You Are Tonight” would have given him a feel for these techniques.
But that’s still not the whole story. What’s missing?
Manfred Clynes might have a clue, but his research is controversial. Clynes, a classical pianist turned research scientist, believes that musicians who play a composer’s music – even in their heads – reproduce a distinct biological pattern for each composer. Not for each piece - for each composer. He goes so far as to say of Rudolf Serkin, one of his test subjects: “We asked him to think Beethoven, and he would think Mozart. But we could tell by looking at the printout. So he cooperated, and we got the same shapes. That was probably the most exciting moment of my life."
Is that it? Is there a neurological “Arthur Alexander signature,” common to all of his work? Or is it something else? Alexander has his share of weak tunes, too, ones that don’t convey the same power. Where is his signature in songs like “Genie in the Jug”? (As an aside, I went to school with Manfred Clynes’ kids. I performed in San Francisco's Coffee Gallery in North Beach with his son Darius in 1971 or so - along with past and future luminaries like Wavy Gravy, Peter Case, and the notorious and flirtatious drag queen who called herself “George.”)
Daniel Levitin’s book The World in Six Songs suggests that one evolutionary role music has played is to convey emotion more accurately than speech. That could be useful, for example, in convincing a competing tribe that you’re sincere about peace. Says researcher Ian Cross: “… let’s imagine the possibility of access to a parallel system of affiliation, unity, bonding. And … one that conveys an honest signal - a window into the true emotional and motivational state of the communicator.”
Whew. That’s a lot of academic-sounding verbiage to quote about the guy who wrote “the rain falls around me/loneliness has finally found me/and I’m in the middle of it all.” But we might be on to something now: sincerity. Arthur Alexander’s songs come, open-handed and seeking peace, like an emissary from the other side. I trust their emotion. I have since I was a little boy, and I will until I die. He couldn’t structure a melody like Stevie Wonder, or write a lyric like Bob Dylan. But his songs made me trust him. They made me trust the person singing. They made me trust the song.
Forget all the analysis: They made me want to sing.
¹The Internet’s filled with claims that Elvis Presley and the Who also covered Alexander, but that’s wrong. As far as I can tell they covered songs that Alexander sang but didn’t write. You just can't trust that Internet ...
²A collection of Arthur Alexander tracks recorded around this time, Lonely Just Like Me (Halftone), is one of the best introductions to his work.
December 07, 2005
Good Sleep, Good Friends, Good Health
Seniors don't need to do everything the health magazines recommend to stay fit. A new study with older women shows that either snoozing right or maintaining a good social network is enough to reduce levels of an inflammatory compound linked to bad health.
It's well known that lifestyle characteristics such as sleep and relationships can affect health. For example, seniors who sleep badly or have few close friends and relations generally have more health problems and die younger than their peers. But what's behind the trend? Previous research indicates that an inflammatory molecule in the body called IL-6 is present at high levels in people who sleep badly. Just as high cholesterol puts one at risk for heart disease, high IL-6 increases the risk of a variety of ailments associated with age, such as heart disease, Alzheimer's, and arthritis.
December 06, 2005
Lack of "Mirror Neurons" May Help Explain Autism
More than one in 500 children have some form of autism, according to the Centers for Disease Control. All autistic children suffer from an impaired ability to communicate and relate to others, but some of them are able to interact socially to a greater degree than their peers. A recent study of a group of these so-called high-functioning autistics suggests a neurological basis for their social impairment.
Neuroscientist Mirella Dapretto of the University of California Los Angeles and her colleagues surveyed the brains of 10 autistic children and an equal number of nonautistic children as they watched and imitated 80 different faces displaying either anger, fear, happiness, sadness or no emotion. By measuring the amount of blood flowing to certain regions of the children's brains with a magnetic resonance imaging (MRI) machine, the researchers could determine what parts of the brain were being used as the subjects completed the tasks. The autistic children differed from their peers in only one respect: each showed reduced activity in the pars opercularis of the inferior frontal gyrus--a brain region located near the temple.
Better Bananas, Nicer Mosquitoes
SEATTLE - Addressing 275 of the world's most brilliant scientists, Bill Gates cracked a joke: "I've been applying my imagination to the synergies of this," he said. "We could have sorghum that cures latent tuberculosis. We could have mosquitoes that spread vitamin A. And most important, we could have bananas that never need to be kept cold." They laughed. Perhaps that was to be expected when the world's richest man, who had just promised them $450 million, was delivering a punchline. But it was also germane, because they were gathered to celebrate some of the oddest-sounding projects in the history of science.
December 03, 2005
Bees Recognize Human Faces
Think all bees look alike? Well, we don't all look alike to them, according to a new study showing that honeybees, which have 0.01% of the neurons that humans do, can recognize and remember individual human faces. For humans, identifying faces is critical to functioning in everyday life. But can animals also tell one face from another? Knowing honeybees' unusual propensity for distinguishing between different flowers, visual scientist Adrian Dyer of Cambridge University in Cambridge, England, wondered whether that talent stretched to other contexts. So he and his colleagues pinned photographs of four different people's faces onto a board. By rewarding the bees with a sucrose solution, the team repeatedly coaxed the insects to buzz up to a target face, sometimes varying its location.
Even when the reward was taken away, the bees continued to approach the target face accurately up to 90% of the time, the team reports in the 2 December Journal of Experimental Biology. And in the bees' brains, the memories stuck: The insects could pick out the target face even two days after being trained.
Can science survive George Bush?
From The London Times:
SCIENTISTS ARE, by and large, left-wing creatures. They opposed the Bomb. They generally oppose the destruction of habitats, which aligns them with the green movement. They have, broadly, chosen not to look at whether we are born geniuses or dunces, hippies or murderers; the spectre of genetic determinism conflicts with the cherished liberal notion that we, with the help of parents and society, shape our talents, opportunities and destinies. They believe that scientific research should be conducted for the sake of truth and the benefit of society, rather than to line the pockets of shareholders; this makes them enemies of big business. They tend to believe in evolution, which puts them at odds with the pious. They aspire above all else to objectivity, impartiality and accuracy, and they respect the power of science to overturn old orthodoxies.
Now consider this: public policy on such topics as climate change and stem-cell research requires a scientific input. In America, public policy is moulded by a conservative, industry-friendly, Christian-sympathising Republican Government. The result, Chris Mooney documents in The Republican War on Science, has been an almighty intellectual clash between scientists and politicians. Despite the sometimes crudely partisan line, he weaves a pretty convincing tapestry.
December 01, 2005
Mental illness link to art and sex
From The Guardian:
From Lord Byron to Dylan Thomas and beyond, the famous philanderers of the art world may have had a touch of mental illness to thank for their behaviour, psychologists report today. A survey comparing mental health and the number of sexual partners among the general population, artists and schizophrenics found that artists are more likely to share key behavioural traits with schizophrenics, and that they have on average twice as many sexual partners as the rest of the population.
Schizophrenia is so debilitating that those with the condition are often socially isolated, have trouble maintaining relationships and so reproduce at a much lower rate than the general population. But cases of schizophrenia remain high, at around 1% of the population. "On the face of it, Darwinism would suggest that the genes predisposing to schizophrenia would eventually disappear from the gene pool," said Dr Nettle.
Einstein-a-thon on the Web
The World Year of Physics goes into its final month with a Big Bang — a 12-hour marathon Webcast on Thursday that hops from Geneva to Egypt, from Jerusalem to Venice, from London to the South Pole. From 6 a.m. to 6 p.m. ET, physicists and educators will hold forth on time travel and neutrinos, the legacy of Albert Einstein's theories and the puzzles yet to be solved. And along the way, even MSNBC.com will come in for a little of relativity's reflected glory. Our interactive presentation on "Putting Einstein to the Test" is one of the winners in the Pirelli Relativity Challenge for the best multimedia presentations explaining special relativity. The contest, which is presenting its awards at the Telecom Future Center in Venice on Thursday, drew about 250 entries from 40 countries.
Cell & Membrane Biology
Sealed membrane systems are a defining feature of cellular life. They provide a barrier between the cell and its external environment and, in eukaryotes, divide the interior of the cell into functionally distinct compartments. Membrane proteins comprise around a third of gene products in most organisms and research is being revolutionised by the structural analysis of increasingly complex macromolecular systems.
A flavour of the current excitement in cell and membrane biology can be obtained in the research articles and reviews presented in this Nature web focus.
November 30, 2005
Ganging Up on the Girls
It seems that 9-year-old boys aren't the only male creatures who will join together to torment their female counterparts. When male lizards largely outnumber females, they direct their aggressiveness toward mating partners, population biologists report. Such belligerence, they say, could put lizard populations at risk of extinction.
Lizards were separated into two populations, each with about 70 members. In one population the adults were three-quarters male, and in the other they were three-quarters female. Lizards were allowed to emigrate to another population with the same sex-ratio bias. The mortality and emigration rates of male lizards were unaffected by sex ratio imbalances, the team reports online this week in Proceedings of the National Academy of Sciences. But females were 2 to 3 times more likely to die or be wounded by males when their environment was male-dominated than when it was female-dominated. The team concluded that rather than fighting off male competitors, the too-numerous male lizards forced the females into mating.
November 29, 2005
Everyone’s eyes are wired differently
The first images ever made of retinas in living people reveal surprising variation from one person to the next. Yet somehow our perceptions don't vary as might be expected. As they took pictures of the thousands of cells responsible for detecting color in the deepest layer of the eye, scientists found that our eyes are wired differently. Yet we all — with the exception of the colorblind — identify colors similarly.
The results suggest that the brain plays an even more significant role than thought in deciding what we see.
Does Stress Cause Cancer?
Christina Koenig found out she had breast cancer on a Friday afternoon. She was just 39 years old. On Monday, she thought she knew why the cancer had struck. "I went in and talked to a team of medical professionals who ultimately performed a lumpectomy, and I said, 'How long has this been there?' They said, 'Five to ten years.' And immediately, my mind jumped to: 'Well, I did go through a divorce. I did have stress.' " Ms. Koenig, who lives in Chicago, was divorced four years before her cancer was diagnosed. Was it just a coincidence, she wondered? Now, four years later, she still wonders. So do many other women who get breast cancer. Ms. Koenig now works for Y-ME National Breast Cancer Organization, which gets 40,000 calls a year on its hot line. Over and over, she says, women ask, Did stress cause their cancer by weakening their immune system and allowing a tumor to grow? "It's a widespread belief," Ms. Koenig said.
And it is not restricted to women with breast cancer.
November 27, 2005
Waking up to how we sleep and dream
We spend about a third of our lives asleep. What really goes on during this time? The answer: more than anyone ever dreamed. This research is based on well-established findings that the brain doesn't stop working when we sleep. During as much as 20 percent of our sleeping time, we exhibit rapid bursts of eye movements, and our brains are almost as active as when we are awake. Called REM (rapid eye movement) sleep, these are periods of vivid dreaming. During the rest of our sleep, even though consciousness is greatly diminished, our brain cells remain surprisingly active.
"Studies show that hallucinatory mental content is lowest during active waking and highest during REM sleep," says Allan Hobson, a professor of psychiatry at Harvard Medical School. "The incidence of thinking is highest during quiet waking and lowest during REM sleep. The implication of these findings is that the sleeping brain can either generate its own perceptions or it can think about them. It cannot do both at the same time. Therefore, dreaming is as hallucinatory and thoughtless (delusional) as so-called mental illness."
Think of that next time you try to make sense out of your dreams.
November 26, 2005
The evolution of venom
Carl Zimmer writes about venom and the origin of snakes:
"Back in February I discovered the remarkable work of Australian biologist Bryan Grieg Fry , who has been tracing the evolution of venom. As I wrote in the New York Times, he searched the genomes of snakes for venom genes. He discovered that even non-venomous snakes produce venom. By drawing an evolutionary tree of the venom genes, Fry showed that the common ancestor of living snakes had several kinds of venom, which had evolved through accidental "borrowing" of proteins produced in other parts of the body. Later, these genes duplicated to create a sophisticated cocktail of venoms--a cocktail that varied from one lineage of snakes to another."
November 24, 2005
Supernovae Back Einstein's "Blunder"
When Albert Einstein was working on his equations for the theory of general relativity, he threw in a cosmological constant to bring the universe into harmonious equilibrium. But subsequent observations by Edwin Hubble proved that the universe was not static. Rather, galaxies were flying apart at varying speeds. Einstein abandoned the concept, calling it the biggest blunder of his life's work. Observations in the 1990s, however, proved that the universe was not only flying apart, it was doing so faster and faster. This seemed to point to a dark energy filling space that actually repelled ordinary matter with its gravity, in contrast to all other known stuff, including dark matter. A number of theories have been developed to explain what this dark energy might be, including Einstein's long-discarded cosmological constant.