Monday, September 15, 2014
A Rank River Ran Through It
It says something about a city, I suppose, when there is heated debate over who first labeled it a dirty place. The phrase “dear dirty Dublin”, used as a badge of defiant honor in Ireland’s capital to this day, is often erroneously attributed to James Joyce. Joyce used the term in Dubliners (1914), a series of linked short stories about that city and its denizens. But the phrase goes back at least to the early nineteenth century and the literary circle surrounding the Irish novelist Sydney Owenson (Lady Morgan), who remains best known for her novel The Wild Irish Girl (1806), which extols the virtues of wild Irish landscapes and the wild, though naturally dignified, princess who lived there. Compared to the fresh wilderness of the Irish West, Dublin would have seemed dirty indeed.
The city into which I was born more than a century later was still a rough and tumble place. It was also heavily polluted. This was Dublin of the 1970s.
My earliest memories of the city center come from trips I took to my father’s office in Marlborough St, just north of the River Liffey, which bisects the city. My father would take an eccentric route into the city, the “back ways” as he would call them, which, though not getting us to the destination as promptly as he advertised, had the benefit of bringing us on a short tour of the city and its more unkempt quarters.
My father’s cars themselves were masterpieces of dereliction. Purchased when they were already in an advanced stage of decay, he would nurse them aggressively till their often fairly prompt demise. One car that he was especially proud of, a Volkswagen Type III fastback, which had its engine to the rear, developed transmission problems and its clutch failed. His repair consisted of a cord dangling over his shoulder and crossing the back seat into the engine. A tug at a precisely timed moment would shift the gears. A shoe, attached to the end of the cord and resting on my father’s shoulder, aided the convenient operation of this system. That car, like most of the others in those less regulated times, was also a marvel of pollution generation, farting out clouds of blue-black exhaust which added to the billowy haze of leaded fumes issuing from the other disastrously maintained vehicles, all shuddering in and out of the city’s congested center at the beginning and end of each work day.
A route into the city that I especially liked took us west of the city center, and as we approached Christ Church Cathedral I would open the window to smell the roasting of the barley which emanated from the Guinness brewery in the Liberties region of the city, down by the Liffey. Very promptly I would wind up the window again as we crossed over the bridge, since the reek of that river was legendarily bad.
The Irish playwright Brendan Behan wrote in his memoir Confessions of an Irish Rebel (1965), “Somebody once said that ‘Joyce has made of this river the Ganges of the literary world,’ but sometimes the smell of the Ganges of the literary world is not all that literary.”
Historically, the River Liffey received raw sewage from the city and though a medical report from the 1880s concluded that the Liffey was not “directly injurious to the health of the inhabitants” — in the opinion of these doctors crowded living and alcohol consumption were the main culprits — the report concluded nonetheless that the Liffey’s condition “is prejudicial to the interest of the city and the port of Dublin.” It was time to clear up the mess.
The smell of the Liffey, like that of other polluted waterways, came not just from the ingredients that spilled into it, but also from algae that bloomed upon the excess nutrients that both accompany the solid waste and seep into the water from the larger landscape. The death and sulfurous decay of those plants contribute to those noisome aromas.
Despite the installation of a sewage system for the city in 1906 and its expansion in the 1940s and 1950s, the smell of the river remained ripe, as Brendan Behan attested. Even in the late 1970s the smell of the river persisted and was remarked upon in popular culture. The song “Summer in Dublin” by the band Bagatelle contains the lines, “I remember that summer in Dublin/And the Liffey it stank like hell.” It was a big hit in the summer of 1978.
So why did the smell persist? Part of the problem with the tenacity of the Liffey’s pollution, and its associated odors, is that the river is a tidal one. It ebbs and flows into polluted Dublin Bay into which raw sewage continued to be dumped long after the creation and expansion of municipal sewage treatment plants. The rancid smells of the River Liffey remained powerful as I was motored over it with my father in the 1970s.
On other occasions, this time with my mother, I would get to observe the streets of Dublin city at a leisurely pedestrian pace. She would take one of her six kids into the city on her Saturday morning shopping rounds and would walk the selected child into the ground. The footpaths of the city were strewn with litter — sweet wrappers, newspapers, paper bags, plastic bags, discarded fast-food, random scraps of paper, cigarette butts — dog feces dappled the curbs, vomit pooled in doorways, the narrow streets were car-congested, and at evening-time, snug on the smoke-belching bus trundling home, I’d watch the sun sinking, gloriously crimson, hazily defined, leaving behind the bituminously smoky atmosphere of Dublin for another day.
It seemed like there was no end in sight to Dublin’s pollution problem, but clearly the situation could not be left to go on forever. And even if a nineteenth century medical commission was not convinced that Dublin’s environmental pollution, from the river at least, posed a grievous problem, the ubiquitous squalor of the city was nonetheless not conducive to the good health of its citizens. The stench of the river, the garbage in the streets, and the smog of the city had to be remediated. As one Reuters report from the autumn of 1988 put it: “A thick pall of smoke from thousands of coal fires has become trapped over Dublin in freezing, wind-free weather, leaving a million coughing Dubliners to face streets at midday so gloomy it looks as if night had already fallen.” The links between high levels of smog and increased death rates concerned the medical community, and a spokesperson from a major Dublin hospital reported that "Even patients without respiratory complaints have been complaining about throat irritation and coughing." (Toronto Star).
So change eventually came, some of it, admittedly, compelled by European legislation, a reasonable price for Ireland’s economic union with Europe. Acting on the Air Pollution Act, 1987, the capital city was declared a smokeless zone in 1990. It became illegal to sell or distribute bituminous coal, the smokiest kind, in all parts of Dublin city and its suburbs. By the early 1990s the city had lost the aroma of soot and the Dublin sunset lost some of its luster, but, in compensation, its air quality dramatically improved. Smoke levels in Dublin city dropped from 192 micrograms per cubic meter of air in December 1989 to a mere 48 micrograms the following December.
The River Liffey is generally less aromatic these days, though it is still very much a polluted urban river. Massive improvements, including the building of a new treatment plant near the harbor about ten years ago, have reduced raw sewage both in the river and in Dublin Bay. That being said, the levels of faecal coliform (that is, E. coli associated with human waste) remain "disturbingly excessive" in some stretches of the River Liffey. There are also heavy odors emanating from the new plant, an expensive problem that will need to be resolved.
I glanced down at the river this past summer while I was visiting home and saw that garbage still bobs up and down in the tidal waters, or clings to the algae at its bricked-up banks, before being inexorably tugged out to sea.
Follow me on Twitter @DublinSoil for 140 character updates on my columns. Links to previous 3QD columns here.
Builders and Blocks - Engineering Blood Vessels with Stem Cells
by Jalees Rehman
Back in 2001, when we first began studying how regenerative cells (stem cells or more mature progenitor cells) enhance blood vessel growth, our group as well as many of our colleagues focused on one specific type of blood vessel: arteries. Arteries are responsible for supplying oxygen to all organs and tissues of the body and arteries are more likely to develop gradual plaque build-up (atherosclerosis) than veins or networks of smaller blood vessels (capillaries). Once the amount of plaque in an artery reaches a critical threshold, the oxygenation of the supplied tissues and organs becomes compromised. In addition to this build-up of plaque and gradual decline of organ function, arterial plaques can rupture and cause severe sudden damage such as a heart attack. The conventional approach to treating arterial blockages in the heart was to either perform an open-heart bypass surgery in which blocked arteries were manually bypassed or to place a tube-like "stent" in the blocked artery to restore the oxygen supply. The hope was that injections of regenerative cells would ultimately replace the invasive procedures because the stem cells would convert into blood vessel cells, form healthy new arteries and naturally bypass the blockages in the existing arteries.
As is often the case in biomedical research, this initial approach turned out to be fraught with difficulties. The early animal studies were quite promising and the injected cells appeared to stimulate the growth of blood vessels, but the first clinical trials were less successful. It was very difficult to retain the injected cells in the desired arteries or tissues, and even harder to track the fate of the cells. Which stem cells should be injected? Where should they be injected? How many? Can one obtain enough stem cells from an individual patient so that one could use his or her own cells for the cell therapy? How does one guide the injected cells to the correct location, and then guide the cells to form functional blood vessel structures? Would the stem cells of a patient with chronic diseases such as diabetes or high blood pressure be suitable for therapies, or would such a patient have to rely on stem cells from healthier individuals and thus risk the complication of immune rejection?
The complexity of blood-vessel generation became increasingly apparent, both when studying the biology of stem cells as well as when designing and conducting clinical trials. A large clinical study published in 2013 studied the impact of bone marrow cell injections in heart attack patients and concluded that these injections did not result in any sustained benefit for heart function. Other studies using injections of patients' own stem cells into their hearts had led to mild improvements in heart function, but none of these clinical studies came close to fulfilling the expectations of cardiovascular patients, physicians and researchers. The upside to these failed expectations was that it forced the researchers in the field of cardiovascular regeneration to rethink their goals and approaches.
One major shift in my own field of interest – the generation of new blood vessels – was to reevaluate the validity of relying on injections of cells. How likely was it that millions of injected cells could organize themselves into functional blood vessels? Injections of cells were convenient for patients because they would not require the surgical implantation of blood vessels, but was this attempt to achieve a convenient therapy undermining its success? An increasing number of laboratories began studying the engineering of blood vessels in the lab by investigating the molecular cues which regulate the assembly of blood vessel networks, identifying molecular scaffolds which would retain stem cells and blood vessel cells and combining various regenerative cell types to build functional blood vessels. This second wave of regenerative vascular medicine is engineering blood vessels which will have to be surgically implanted into patients. This means that it will be much harder to get approval to conduct such invasive implantations in patients than the straightforward injections which were conducted in the first wave of studies, but most of us who have now moved towards a blood vessel engineering approach feel that there is a greater likelihood of long-term success even if it may take a decade or longer till we obtain our first definitive clinical results.
The second conceptual shift which has occurred in this field is the realization that blood vessel engineering is not only important for treating patients with blockages in their arteries. In fact, blood vessel engineering is critical for all forms of tissue and organ engineering. In the US, more than 120,000 people are awaiting an organ transplant but only a quarter of them will receive an organ in any given year. The number of people in need of a transplant will continue to grow but the supply of organs is limited and many patients will unfortunately die while waiting for an organ which they desperately need. The advances in stem cell biology have made it possible to envision creating organs or organoids (functional smaller parts of an organ) which could help alleviate the need for organs. One thing that most organs and tissues need is a network of tiny blood vessels that permeate the whole tissue: small capillary networks. For example, a liver built out of liver cells could never function without a network of tiny blood vessels which supply the liver cells with metabolites and oxygen. From an organ engineering point of view, microvessel engineering is just as important as the building of functional arteries.
In one of our recent projects, we engineered functional human blood vessels by combining bone marrow derived stem cells with endothelial cells (the cells which coat the inside of all blood vessels). It turns out that stem cells do not become endothelial cells but instead release a molecular signal – the protein SLIT3 – which instructs the endothelial cells to assemble into networks. Using a high resolution microscope, we watched this process in real-time over a course of 72 hours in the laboratory and could observe how the endothelial cells began lining up into tube-like structures in the presence of the bone marrow stem cells. The human endothelial cells were like building blocks, while the human bone marrow stem cells were the builders "overseeing" the construction. When we implanted the assembled blood vessel structures into mice, we could see that they were fully functional, allowing mouse blood to travel through them without leaking or causing any other major problems (see image, taken from reference 3).
I am sure that SLIT3 is just one of many molecular cues released by the stem cells to assemble functional networks and there are many additional mechanisms which still need to be discovered. We still need to learn much more about which "builders" and which "building blocks" are best suited for each type of blood vessel that we want to construct. The fact that human fat tissue can serve as an important resource for obtaining adult stem cells ("builders") is quite encouraging, but we still know very little about the overall longevity of the engineered vessels, the best way to implant them into patients, and the key molecular and biomechanical mechanisms which will be required to engineer organs with functional blood vessels. It will be quite some time until the first fully engineered organs will be implanted in humans, but the dizzying rate of progress suggests that we can be quite optimistic.
References and links:
1. An overview article in "The Scientist" which describes the importance of blood vessel engineering for organ engineering (open access – can be read free of charge):
J Rehman "Building Flesh and Blood", The Scientist (2014), 28(5):48-53
2. An unusual and abundant source of adult stem cells which promote the formation of blood vessels: Fat tissue obtained from individuals who undergo a liposuction! (open access – can be read free of charge)
J Rehman "The Power of Fat" Aeon Magazine (2014)
3. The study which describes how adult stem cells release a protein (SLIT3) which organizes blood vessel cells into functional networks (open access – can be read free of charge):
J.D. Paul et al., "SLIT3-ROBO4 activation promotes vascular network formation in human engineered tissue and angiogenesis in vivo" J Mol Cell Cardiol (2013), 64:124-31.
Monday, August 18, 2014
The Psychology of Procrastination: How We Create Categories of the Future
by Jalees Rehman
"Do not put your work off till tomorrow and the day after; for a sluggish worker does not fill his barn, nor one who puts off his work: industry makes work go well, but a man who puts off work is always at hand-grips with ruin." Hesiod in "The Works and Days"
Paying bills, filling out forms, completing class assignments or submitting grant proposals – we all have the tendency to procrastinate. We may engage in trivial activities such as watching TV shows, playing video games or chatting for an hour and risk missing important deadlines by putting off tasks that are essential for our financial and professional security. Not all humans are equally prone to procrastination, and a recent study suggests that this may in part be due to the fact that the tendency to procrastinate has a genetic underpinning. Yet even an individual with a given genetic make-up can exhibit a significant variability in the extent of procrastination. A person may sometimes delay initiating and completing tasks, whereas at other times that same person will immediately tackle the same type of tasks even under the same constraints of time and resources.
A fully rational approach to task completion would involve creating a priority list of tasks based on a composite score of task importance and the remaining time until the deadline. The most important task with the most proximate deadline would have to be tackled first, and the lowest priority task with the furthest deadline last. This sounds great in theory, but it is quite difficult to implement. A substantial amount of research has been conducted to understand how our moods, distractibility and impulsivity can undermine the best laid plans for timely task initiation and completion. The recent research article "The Categorization of Time and Its Impact on Task Initiation" by the researchers Yanping Tu (University of Chicago) and Dilip Soman (University of Toronto) investigates a rather different and novel angle in the psychology of procrastination: our perception of the future.
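The fully rational scheduler described above can be sketched in a few lines of Python. The scoring formula here (importance divided by days remaining) is my own illustrative assumption, not something specified in the research; any composite score that rises with importance and falls with time to deadline would do:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    importance: float  # subjective importance; higher means more important
    days_left: float   # time remaining until the deadline, in days

def priority(task: Task) -> float:
    # Composite score: important tasks with near deadlines rank highest.
    # The 0.5-day floor avoids division by zero for same-day deadlines.
    return task.importance / max(task.days_left, 0.5)

def rational_ordering(tasks: list[Task]) -> list[Task]:
    # Highest-priority task first, lowest last.
    return sorted(tasks, key=priority, reverse=True)

tasks = [
    Task("Pay bills", importance=8, days_left=2),
    Task("Grant proposal", importance=10, days_left=30),
    Task("Class assignment", importance=6, days_left=5),
]
for t in rational_ordering(tasks):
    print(t.name)
```

As the article notes, the hard part is not computing such an ordering but actually following it; our moods and our categorization of future dates interfere.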
Tu and Soman hypothesized that one reason for why we procrastinate is that we do not envision time as a linear, continuous entity but instead categorize future deadlines into two categories, the imminent future and the distant future. A spatial analogy to this hypothesized construct is how we categorize distances. A city located at a 400 kilometer distance may be considered as being spatially closer to us if it is located within the same state than another city which may be physically closer (e.g. only 300 kilometers away) but located in a different state. The categories "in my state" and "outside of my state" therefore interfere with the perception of the actual physical distance.
In an experiment to test their time category hypothesis, the researchers investigated the initiation of tasks by farmers in a rural community in India as part of a larger project aimed at helping farmers develop financial literacy and skills. The participants (n=295 male farmers) attended a financial literacy lecture. The farmers learned that they would receive a special financial incentive if they opened a bank account, completed the required paperwork and accumulated at least 5,000 rupees in the account within the next 6 months. The farmers were also told they could open an account with zero deposit and complete the paperwork immediately while a bank representative was present at the end of the lecture. Alternatively, they could open the bank account at any point in time later by going to the closest branch of the bank. These lectures were held in June 2010 as well as in July 2010. In both cases, the six-month deadline was explicitly stated as being in December 2010 (for the June lectures) and in January 2011 (for the July lectures). The researchers surmised that even though the farmers were given the same six-month period to open the account and save the money, the December 2010 deadline would be perceived as the imminent future or an extension of the present because it fell in the same calendar year (2010) as the lecture, whereas the January 2011 deadline would be perceived as a far-off date in the distant future because it would fall in the next calendar year.
The results of this experiment were quite astounding: 32% of the farmers with the December 2010 deadline immediately opened the bank account whereas only 8% of the farmers with the January 2011 deadline followed suit. The contrast was even starker when it came to actually completing the whole task and saving the required money. 28% of the farmers with the December 2010 deadlines succeeded whereas only 4% of the farmers with the January 2011 deadline were successful. Even though both groups were given the same timeframe to complete the task (exactly six months), the same-year group had a four- to seven-fold higher success rate!
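The fold differences implied by these percentages can be checked with simple arithmetic (the figures are those reported for the study as described here):

```python
# Reported outcomes for the two deadline groups, in percent.
opened = {"Dec 2010": 32, "Jan 2011": 8}      # opened the account immediately
completed = {"Dec 2010": 28, "Jan 2011": 4}   # saved the required 5,000 rupees

open_ratio = opened["Dec 2010"] / opened["Jan 2011"]
complete_ratio = completed["Dec 2010"] / completed["Jan 2011"]
print(open_ratio, complete_ratio)  # 4.0 7.0
```

So the same-year group was four times as likely to open the account immediately and seven times as likely to complete the entire savings task.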
To test whether their idea of time categorization into the "like-the-present" future and the distant future could be generalized, the researchers conducted additional studies with students at the University of Toronto and the University of Chicago. These experiments yielded similar results, but also revealed that the distinction between "like-the-present" and the distant future is not only tied to the end of the calendar year but can also occur at the end of the month. Participants who were asked in April to complete a task with a deadline on April 30th indicated a far greater willingness to initiate the task than those with a deadline of May 1st, presumably because the April group thought of the deadline being an extension of the present (the month of April).
One of the most interesting experiments in their set of studies was the investigation of whether one could tweak the temporal perception of a deadline by providing visual cues which link the future date to the present. Tu and Soman conducted the study on March 9, 2011 (a Wednesday) and told participants that the study was about judging actions. The text provided to the participants read,
"Any action can be described in many ways; however the appropriateness of these descriptions may largely depend on the occasion on which the action occurs. In today's study, we are interested in your judgment of the appropriateness of descriptions of several actions. Please pick the one that you think is most appropriate in the occasion that is given to you in this study."
The researchers then showed the participants a calendar of March 2011 and told them that all the given actions would occur on March 13, 2011 (a Sunday). But the participants were divided into two groups, half of whom received a calendar in which the whole week was highlighted in one color, thus emphasizing that the Sunday deadline belonged to the same week ("like-the-present" group). The control group received a standard calendar in which the weekends were colored differently from working days. The participants were provided with a list of 25 tasks and given two options for how they would describe each task. The two options reflected either a hands-on implementation approach or a more abstract approach. For example, for the task of "Caring for houseplants", they could choose between the hands-on option "Watering plants" or the more abstract option "Making the room look nice". Participants who saw the calendar in which the whole week (including Sunday) was depicted in the same color were significantly more likely to choose implementation options, suggesting that the visual cue was prepping their minds to think in terms of already implementing the tasks.
The work by Tu and Soman makes a strong case for the idea that we think of the future in categories and that this has a major impact on whether we procrastinate or instead take charge and expediently initiate and complete tasks. However, the work does have some limitations, such as the fact that the researchers did not investigate whether the initial categorization is modified over time and whether specific reminders can help change the categorization. For example, if the farmers with the January 2011 deadline were approached again at the beginning of January 2011, would they then re-evaluate the "remote future" deadline and now consider it a "like-the-present" deadline that needs to be addressed immediately? Another limitation of the research article is that it does not explicitly describe the ethical review of the studies, such as whether the farmers in India knew that their data was being used for a behavioral research study and whether they provided informed consent.
This research provides fascinating insights into the science of procrastination and raises a number of important questions about how one should set deadlines. If the deadline is too far in the future, there is a much greater likelihood of thinking of it as a remote entity which may end up being ignored. If we want to ensure that tasks are initiated and completed in a timely manner, it may be important to emphasize the proximity of the deadline using visual cues (colors of calendars) or explicitly emphasizing the "like-the-present" nature such as stating "the deadline is in 30 days" instead of just mentioning a deadline date. The researchers did not study the impact of a countdown clock, but perhaps a countdown may be one way to help individuals build a cognitive bridge between the present and a looming deadline. Hopefully, government agencies, universities, corporations and other institutions which heavily rely on deadlines will pay attention to this research and re-evaluate how to convey deadlines in a manner which will reduce procrastination.
Yanping Tu and Dilip Soman (2014) "The Categorization of Time and Its Impact on Task Initiation" Journal of Consumer Research (published online on August 13, 2014 ahead of print).
Monday, August 11, 2014
How to say "No" to your doctor: improving your health by decreasing your health care
by Carol A. Westbrook
Has your doctor ever said to you, "You have too many doctors and are taking too many pills. It's time to cut back on both"? No? Well I have. Maybe it's time you brought it up with your doctors, too.
Do you really need a dozen pills a day to keep you alive, feeling well, and happy? Can you even afford them? Is it possible that the combination of meds that you are taking is making you feel worse, not better? Are you using up all of your sick leave and vacation time to attend multiple doctors' visits? Are you paying way too much out of pocket for office visits and pharmacy co-pays, in spite of the fact that you have very good insurance? If this applies to you, then read on.
I am not referring to those of you with serious or chronic medical conditions, such as cancer, diabetes, and heart disease, who really do need those life-saving medicines and frequent clinic visits. I am referring here to the average healthy adult, who has no major medical problems, yet is taking perhaps twice as many prescription drugs and seeing multiple doctors 3 - 4 times as often as he would have done ten or fifteen years ago. Is he any healthier for it?
There is no doubt that modern medical care has made a tremendous impact on keeping us healthy and alive. The average life expectancy has increased dramatically over the last half century, from about 67 years in 1950 to almost 78 years today, and those who live to age 65 can expect to have, on average, almost 18 additional years to live! Some of this is due to lifestyle changes but most of the gain is due to advances in medical care, especially in two areas: cardiac disease and infectious diseases, especially in the treatment of AIDS. Cancer survival is just starting to make an impact as well. But how much additional longevity can we expect to gain by piling even more medical care on healthy individuals?
Too much health care can lower rather than improve your quality of life, and possibly even shorten it. For example, women who are given estrogens to relieve menopause symptoms have a significant risk of breast cancer. Blood pressure medicines can lead to unrecognized fatigue and depression; the same can be seen with sleeping pills, muscle relaxants, and anti-anxiety meds. Unnecessary X-rays or scans can lead to unneeded biopsies, which might result in serious complications. Even yearly PSA screening for prostate cancer can harm more men than it helps. Testosterone supplements can result in dangerously high blood counts. And of course, the money you spend on medications can be substantial, and the extra time you spend going to an office visit cuts into your leisure time and your income--directly impacting your quality of life.
How do you, the patient, break this cycle? First, you have to understand its cause. I'm sure you won't be surprised by my answer, which is "money." The "medical-industrial complex" operates on a fee-for-service business concept, and the way to increase profits is to increase services.
In the not-too-distant past, a person would have one General Practitioner (GP) or Primary Care Physician (PCP) who oversaw his health care. The GP would triage emergencies, treat chronic conditions such as hypertension, anemia or diabetes, diagnose new conditions that need intervention, and, when needed, refer the patient to a specialist for a visit or two. Extremely efficient for the patient, and somewhat time-consuming for the physician who, of course, would be reimbursed for his time. But today, private insurance and CMS (the Centers for Medicare and Medicaid Services), the federal oversight agency, set limits on what can be charged for clinic visits by a GP vs. a specialist, set costs for procedures, limit the allowable length of a clinic visit, and determine what diagnoses will be covered and what won't. From an economic perspective, this payment system incentivizes multiple short doctor visits to specialists rather than one-stop shopping with a GP. The resultant fragmentation of health care leads to more treatment, more medication, and poor coordination of care (see "The Bystander Effect in Medical Care: Why do I have so many doctors not taking care of me?" May 20, 2013).
The paradigm has shifted from "one patient, one doctor, many diagnoses" to "one patient, many diagnoses, and a doctor for each diagnosis." And with each new doctor comes a new set of medications, and many more return office visits, of which many are done by mid-level providers, that is, nurse practitioners or physician assistants. Mid-level providers tend to perpetuate the status quo; they can speed a patient quickly through a routine clinic visit, but may not have the medical expertise to diagnose new problems, further increasing referrals to specialists. The latest innovation in health care, the electronic medical record, further perpetuates medical inertia by including no-brainer "check boxes" for return clinic visits, automatic prescription renewals, and referrals to other specialists in the system.
How can you, the patient, ensure that you are getting only the amount of health care you need? It's not a good idea to stop medications on your own, and it can be intimidating to ask your doctor for advice on how to get by with less of his or her care! But if you are serious about cutting back on health care, start with the following steps:
1. Be familiar with each medicine you are taking--its name, what it does, and what condition it is treating.
2. For each medication, do you still have the condition for which it was prescribed? If not, would the condition return if the medication were stopped? (Examples are hypertension, thyroid disease and diabetes). Was it prescribed for a short course of treatment that is completed, but no one bothered to discontinue the prescription? For example, if you were put on arthritis medication for a bad knee, and you subsequently had a knee replacement, the pain med should have been stopped.
3. Are you taking multiple medications for a single condition when perhaps one might suffice? Sometimes all that is needed are dose adjustments. For example, getting the correct dose of a blood pressure medication might require many re-checks and frequent dose changes, and it is easier for a provider to merely add a second or third pill.
4. Are some of your medications expensive, or have high co-pays? For each class of drug (e.g. antibiotics, sleeping pills, acid-reducers, cholesterol medication) your insurance company has a preferred choice. See if your doctor can switch to that one instead. You might need to ask your pharmacist, or call the insurance company directly, to get their list, and then ask the prescribing doctor if it's appropriate and, if so, to change the prescription (and cancel the other one).
5. How many doctors do you see regularly? In particular, how many specialists are you seeing and how often? Find out the purpose of any return visits they schedule, and whether some of this can be done by phone or electronic messaging. Or better yet, can the follow up be done by your PCP? Or has the problem been resolved and you are a victim of the "return to clinic" check box? You may have to make an extra visit to the specialist to get this information and end the relationship.
Once you get this information, here are some steps you can then take:
1. Discontinue as many medications as you can, or switch to acceptable, cheaper alternatives, with your doctor's assistance.
2. Review your personal list of prescribed medications, and compare it to the one in the medical record at your doctor's office. Remove all medications from the list that you are not actively taking, or that have already been discontinued, and make sure this is reflected in the medical record. And by all means, confirm that discontinued medications are not on auto-renewal at your pharmacy.
3. Cut down the number of doctor's visits, once you have determined which specialists you need to see, and which ones don't add anything to your health care.
4. Prioritize and simplify your ongoing medical care. Mid-level practitioners are great for maintenance of existing chronic conditions, but when a condition changes, or there is a new problem, insist on seeing the doctor instead. (Most of my inappropriate referrals come from mid-levels who are trying to solve a problem they don't have the training to solve.)
5. Ask your PCP to interpret and prioritize your visits to specialists, and for the specialist to discuss and coordinate your care with your PCP. If your PCP is not accessible or interested, consider finding another one.
6. Make use of electronic messaging, email, or phone calls when possible, to replace clinic visits.
7. Adopt lifestyle changes suggested by your doctor that might help you avoid taking additional medication, such as weight loss, exercise, smoking cessation, diet modification. If you go through with this, ask for feedback from your doctor, who should be willing to re-evaluate your meds and your health--after all, he suggested it.
Now let's turn the tables and see how difficult this can be for the doctor. When I see someone who is stuck in the web of medical inertia, I may say, "You have too many doctors and are taking too many pills. It's time to cut back on both." I am often met with resistance. Surprisingly, many people prefer to continue on the way they are. They don't want to hear that they don't need all these medications, or that their symptoms are due to depression or anxiety. They would rather take a pill than stop smoking, or lose weight.
For the rest, I do my best to help. I'm reluctant to stop medications started by another doctor; however, I can offer to help review medications and diagnoses. I can contact the doctor and see if the medication is necessary. I'll help to find cheaper alternatives when I can. As a rule, I don't renew medications that I didn't originally prescribe. For patients whose condition I am managing, I'll try to do a lot of my follow up by email or messaging, taking advantage of the electronic record. Every little bit helps.
Cutting back on medical care is a slow process on an individual level, and we physicians are just as frustrated as you are with the excesses in the system. The situation is not going to be improved by more insurance, but by reform of the entire system--which is unlikely to happen in my lifetime unless patients get involved and start demanding a change.
When I brought up this topic with friends, I was amazed to find how many had stories to tell about their personal experience with excessive health care. Do you, too, want to make a change? Please feel free to share your stories here. Maybe we can start to make a difference.
The opinions expressed here are my own, and do not reflect those of my employer, Geisinger Health Systems.
Monday, June 30, 2014
The Road to Bad Science Is Paved with Obedience and Secrecy
by Jalees Rehman
We often laud the intellectual diversity of a scientific research group because we hope that the multitude of opinions can help point out flaws and improve the quality of research long before it is finalized and written up as a manuscript. The recent events surrounding the research in one of the world's most famous stem cell research laboratories at Harvard show us the disastrous effects of suppressing diverse and dissenting opinions.
The infamous "Orlic paper" was a landmark research article published in the prestigious scientific journal Nature in 2001, which showed that stem cells contained in the bone marrow could be converted into functional heart cells. After a heart attack, injections of bone marrow cells reversed much of the heart attack damage by creating new heart cells and restoring heart function. It was called the "Orlic paper" because the first author of the paper was Donald Orlic, but the lead investigator of the study was Piero Anversa, a professor and highly respected scientist at New York Medical College.
Anversa had established himself as one of the world's leading experts on the survival and death of heart muscle cells in the 1980s and 1990s, but with the start of the new millennium, Anversa shifted his laboratory's focus towards the emerging field of stem cell biology and its role in cardiovascular regeneration. The Orlic paper was just one of several highly influential stem cell papers to come out of Anversa's lab at the onset of the new millennium. A 2002 Anversa paper in the New England Journal of Medicine – the world's most highly cited academic journal – investigated the hearts of human organ transplant recipients. This study showed that up to 10% of the cells in the transplanted heart were derived from the recipient's own body. The only conceivable explanation was that after a patient received another person's heart, the recipient's own cells began maintaining the health of the transplanted organ. The Orlic paper had shown the regenerative power of bone marrow cells in mouse hearts, but this new paper now offered the more tantalizing suggestion that even human hearts could be regenerated by circulating stem cells in their blood stream.
A 2003 publication in Cell by the Anversa group described another ground-breaking discovery, identifying a reservoir of stem cells contained within the heart itself. This latest coup de force found that the newly uncovered heart stem cell population resembled the bone marrow stem cells because both groups of cells bore the same stem cell protein called c-kit and both were able to make new heart muscle cells. According to Anversa, c-kit cells extracted from a heart could be re-injected back into a heart after a heart attack and regenerate more than half of the damaged heart!
These Anversa papers revolutionized cardiovascular research. Prior to 2001, most cardiovascular researchers believed that the cell turnover in the adult mammalian heart was minimal because soon after birth, heart cells stopped dividing. Some organs or tissues such as the skin contained stem cells which could divide and continuously give rise to new cells as needed. When skin is scraped during a fall from a bike, it only takes a few days for new skin cells to coat the area of injury and heal the wound. Unfortunately, the heart was not one of those self-regenerating organs. The number of heart cells was thought to be more or less fixed in adults. If heart cells were damaged by a heart attack, then the affected area was replaced by rigid scar tissue, not new heart muscle cells. If the area of damage was large, then the heart's pump function was severely compromised and patients developed the chronic and ultimately fatal disease known as "heart failure".
Anversa's work challenged this dogma by putting forward a bold new theory: the adult heart was highly regenerative, its regeneration was driven by c-kit stem cells, which could be isolated and used to treat injured hearts. All one had to do was harness the regenerative potential of c-kit cells in the bone marrow and the heart, and millions of patients all over the world suffering from heart failure might be cured. Not only did Anversa publish a slew of supportive papers in highly prestigious scientific journals to challenge the dogma of the quiescent heart, he also happened to publish them at a unique time in history which maximized their impact.
In the year 2001, there were few innovative treatments available to treat patients with heart failure. The standard approach was to use medications that would delay the progression of heart failure. But even the best medications could not prevent the gradual decline of heart function. Organ transplants were a cure, but transplantable hearts were rare and only a small fraction of heart failure patients would be fortunate enough to receive a new heart. Hopes for a definitive heart failure cure were buoyed when researchers isolated human embryonic stem cells in 1998. This discovery paved the way for using highly pliable embryonic stem cells to create new heart muscle cells, which might one day be used to restore the heart's pump function without resorting to a heart transplant.
The dreams of using embryonic stem cells to regenerate human hearts were soon squashed when the Bush administration banned the generation of new human embryonic stem cells in 2001, citing ethical concerns. These federal regulations and the lobbying of religious and political groups against human embryonic stem cells were a major blow to research on cardiovascular regeneration. Amidst this looming hiatus in cardiovascular regeneration, Anversa's papers appeared and showed that one could steer clear of the ethical controversies surrounding embryonic stem cells by using an adult patient's own stem cells. The Anversa group re-energized the field of cardiovascular stem cell research and cleared the path for the first human stem cell treatments in heart disease.
Instead of having to wait for the US government to reverse its restrictive policy on human embryonic stem cells, one could now initiate clinical trials with adult stem cells, treating heart attack patients with their own cells and without having to worry about an ethical quagmire. Heart failure might soon become a disease of the past. The excitement at all major national and international cardiovascular conferences was palpable whenever the Anversa group, their collaborators or other scientists working on bone marrow and cardiac stem cells presented their dizzyingly successful results. Anversa received numerous accolades for his discoveries and research grants from the NIH (National Institutes of Health) to further develop his research program. He was so successful that some researchers believed Anversa might receive the Nobel Prize for his iconoclastic work which had redefined the regenerative potential of the heart. Many of the world's top universities were vying to recruit Anversa and his group, and he decided to relocate his research group to Harvard Medical School and Brigham and Women's Hospital in 2008.
There were naysayers and skeptics who had resisted the adult stem cell euphoria. Some researchers had spent decades studying the heart and found little to no evidence for regeneration in the adult heart. They were having difficulties reconciling their own results with those of the Anversa group. A number of practicing cardiologists who treated heart failure patients were also skeptical because they did not see the near-miraculous regenerative power of the heart in their patients. One Anversa paper went as far as suggesting that the whole heart would completely regenerate itself roughly every 8-9 years, a claim that was at odds with the clinical experience of practicing cardiologists. Other researchers pointed out serious flaws in the Anversa papers. For example, the 2002 paper on stem cells in human heart transplant patients claimed that the hearts were coated with the recipient's regenerative cells, including cells which contained the stem cell marker Sca-1. Within days of the paper's publication, many researchers were puzzled by this finding because Sca-1 was a marker of mouse and rat cells – not human cells! If Anversa's group was finding rat or mouse proteins in human hearts, it was most likely due to an artifact. And if they had mistakenly found rodent cells in human hearts, so these critics surmised, perhaps other aspects of Anversa's research were similarly flawed or riddled with artifacts.
At national and international meetings, one could observe heated debates between members of the Anversa camp and their critics. The critics then decided to change their tactics. Instead of just debating Anversa and commenting about errors in the Anversa papers, they invested substantial funds and efforts to replicate Anversa's findings. One of the most important and rigorous attempts to assess the validity of the Orlic paper was published in 2004, by the research teams of Chuck Murry and Loren Field. Murry and Field found no evidence of bone marrow cells converting into heart muscle cells. This was a major scientific blow to the burgeoning adult stem cell movement, but even this paper could not deter the bone marrow cell champions.
Despite the fact that the refutation of the Orlic paper was published in 2004, the Orlic paper continues to carry the dubious distinction of being one of the most cited papers in the history of stem cell research. At first, Anversa and his colleagues would shrug off their critics' findings or publish refutations of refutations – but over time, an increasing number of research groups all over the world began to realize that many of the central tenets of Anversa's work could not be replicated and the number of critics and skeptics increased. As the signs of irreplicability and other concerns about Anversa's work mounted, Harvard and Brigham and Women's Hospital were forced to initiate an internal investigation which resulted in the retraction of one Anversa paper and an expression of concern about another major paper. Finally, a research group published a paper in May 2014 using mice in which c-kit cells were genetically labeled so that one could track their fate and found that c-kit cells have a minimal – if any – contribution to the formation of new heart cells: a fraction of a percent!
The skeptics who had doubted Anversa's claims all along may now feel vindicated, but this is not the time to gloat. Instead, the discipline of cardiovascular stem cell biology is now undergoing a process of soul-searching. How was it possible that some of the most widely read and cited papers were based on heavily flawed observations and assumptions? Why did it take more than a decade since the first refutation was published in 2004 for scientists to finally accept that the near-magical regenerative power of the heart turned out to be a pipe dream?
One reason for this lag time is pretty straightforward: It takes a tremendous amount of time to refute papers. Funding to conduct the experiments is difficult to obtain because grant funding agencies are not easily convinced to invest in studies replicating existing research. For a refutation to be accepted by the scientific community, it has to be at least as rigorous as the original, but in practice, refutations are subject to even greater scrutiny. Scientists trying to disprove another group's claim may be asked to develop even better research tools and technologies so that their results can be seen as more definitive than those of the original group. Instead of relying on antibodies to identify c-kit cells, the 2014 refutation developed a transgenic mouse in which all c-kit cells could be genetically traced to yield more definitive results - but developing new models and tools can take years.
The scientific peer review process by external researchers is a central pillar of the quality control process in modern scientific research, but one has to be cognizant of its limitations. Peer review of a scientific manuscript is routinely performed by experts for all the major academic journals which publish original scientific results. However, peer review only involves a "review", i.e. a general evaluation of major strengths and flaws, and peer reviewers do not see the original raw data nor are they provided with the resources to replicate the studies and confirm the veracity of the submitted results. Peer reviewers rely on the honor system, assuming that the scientists are submitting accurate representations of their data and that the data has been thoroughly scrutinized and critiqued by all the involved researchers before it is even submitted to a journal for publication. If peer reviewers were asked to actually wade through all the original data generated by the scientists and even perform confirmatory studies, then the peer review of every single manuscript could take years and one would have to find the money to pay for the replication or confirmation experiments conducted by peer reviewers. Publication of experiments would come to a grinding halt because thousands of manuscripts would be stuck in the purgatory of peer review. Relying on the integrity of the scientists submitting the data and their internal review processes may seem naïve, but it has always been the bedrock of scientific peer review. And it is precisely the internal review process which may have gone awry in the Anversa group.
Just like Pygmalion fell in love with Galatea, researchers fall in love with the hypotheses and theories that they have constructed. To minimize the effects of these personal biases, scientists regularly present their results to colleagues within their own groups at internal lab meetings and seminars or at external institutions and conferences long before they submit their data to a peer-reviewed journal. The preliminary presentations are intended to spark discussions, inviting the audience to challenge the veracity of the hypotheses and the data while the work is still in progress. Sometimes fellow group members are truly skeptical of the results, at other times they take on the devil's advocate role to see if they can find holes in their group's own research. The larger a group, the greater the chance that one will find colleagues within a group with dissenting views. This type of feedback is a necessary internal review process which provides valuable insights that can steer the direction of the research.
Considering the size of the Anversa group – consisting of 20, 30 or even more PhD students, postdoctoral fellows and senior scientists – it is puzzling why the discussions among the group members did not already internally challenge their hypotheses and findings, especially in light of the fact that they knew extramural scientists were having difficulties replicating the work.
Retraction Watch is one of the most widely read scientific watchdogs which tracks scientific misconduct and retractions of published scientific papers. Recently, Retraction Watch published the account of an anonymous whistleblower who had worked as a research fellow in Anversa's group and provided some unprecedented insights into the inner workings of the group, which explain why the internal review process had failed:
"I think that most scientists, perhaps with the exception of the most lucky or most dishonest, have personal experience with failure in science—experiments that are unreproducible, hypotheses that are fundamentally incorrect. Generally, we sigh, we alter hypotheses, we develop new methods, we move on. It is the data that should guide the science.
In the Anversa group, a model with much less intellectual flexibility was applied. The "Hypothesis" was that c-kit (cd117) positive cells in the heart (or bone marrow if you read their earlier studies) were cardiac progenitors that could: 1) repair a scarred heart post-myocardial infarction, and: 2) supply the cells necessary for cardiomyocyte turnover in the normal heart.
This central theme was that which supplied the lab with upwards of $50 million worth of public funding over a decade, a number which would be much higher if one considers collaborating labs that worked on related subjects.
In theory, this hypothesis would be elegant in its simplicity and amenable to testing in current model systems. In practice, all data that did not point to the "truth" of the hypothesis were considered wrong, and experiments which would definitively show if this hypothesis was incorrect were never performed (lineage tracing e.g.)."
Discarding data that might have challenged the central hypothesis appears to have been a central principle.
According to the whistleblower, Anversa's group did not just discard undesirable data, they actually punished group members who would question the group's hypotheses:
"In essence, to Dr. Anversa all investigators who questioned the hypothesis were "morons," a word he used frequently at lab meetings. For one within the group to dare question the central hypothesis, or the methods used to support it, was a quick ticket to dismissal from your position."
The group also created an environment of strict information hierarchy and secrecy which is antithetical to the spirit of science:
"The day to day operation of the lab was conducted under a severe information embargo. The lab had Piero Anversa at the head with group leaders Annarosa Leri, Jan Kajstura and Marcello Rota immediately supervising experimentation. Below that was a group of around 25 instructors, research fellows, graduate students and technicians. Information flowed one way, which was up, and conversation between working groups was generally discouraged and often forbidden.
Raw data left one's hands, went to the immediate superior (one of the three named above) and the next time it was seen would be in a manuscript or grant. What happened to that data in the intervening period is unclear.
A side effect of this information embargo was the limitation of the average worker to determine what was really going on in a research project. It would also effectively limit the ability of an average worker to make allegations regarding specific data/experiments, a requirement for a formal investigation."
This segregation of information is a powerful method to maintain an authoritarian rule and is more typical for terrorist cells or intelligence agencies than for a scientific lab, but it would definitely explain how the Anversa group was able to mass produce numerous irreproducible papers without any major dissent from within the group.
In addition to the secrecy and segregation of information, the group also created an atmosphere of fear to ensure obedience:
"Although individually-tailored stated and unstated threats were present for lab members, the plight of many of us who were international fellows was especially harrowing. Many were technically and educationally underqualified compared to what might be considered average research fellows in the United States. Many also originated in Italy where Dr. Anversa continues to wield considerable influence over biomedical research.
This combination of being undesirable to many other labs should they leave their position due to lack of experience/training, dependent upon employment for U.S. visa status, and under constant threat of career suicide in your home country should you leave, was enough to make many people play along.
Even so, I witnessed several people question the findings during their time in the lab. These people and working groups were subsequently fired or resigned. I would like to note that this lab is not unique in this type of exploitative practice, but that does not make it ethically sound and certainly does not create an environment for creative, collaborative, or honest science."
Foreign researchers are particularly dependent on their employment to maintain their visa status and the prospect of being fired from one's job can be terrifying for anyone.
This is an anonymous account of a whistleblower and as such, it is problematic. The use of anonymous sources in science journalism could open the doors for all sorts of unfounded and malicious accusations, which is why the ethics of using anonymous sources was heavily debated at the recent ScienceOnline conference. But the claims of the whistleblower are not made in a vacuum – they have to be evaluated in the context of known facts. The whistleblower's claim that the Anversa group and their collaborators received more than $50 million to study bone marrow cell and c-kit cell regeneration of the heart can be easily verified at the public NIH grant funding RePORTer website. The whistleblower's claim that many of the Anversa group's findings could not be replicated is also a verifiable fact. It may seem unfair to condemn Anversa and his group for creating an atmosphere of secrecy and obedience which undermined the scientific enterprise, caused torment among trainees and wasted millions of dollars of taxpayer money simply based on one whistleblower's account. However, if one looks at the entire picture of the amazing rise and decline of the Anversa group's foray into cardiac regeneration, then the whistleblower's description of the atmosphere of secrecy and hierarchy seems very plausible.
The investigation of Harvard into the Anversa group is not open to the public and therefore it is difficult to know whether the university is primarily investigating scientific errors or whether it is also looking into such claims of egregious scientific misconduct and abuse of scientific trainees. It is unlikely that Anversa's group is the only group that might have engaged in such forms of misconduct. Threatening dissenting junior researchers with a loss of employment or visa status may be far more common than we think. The gravity of the problem requires that the NIH – the major funding agency for biomedical research in the US – should look into the prevalence of such practices in research labs and develop safeguards to prevent the abuse of science and scientists.
Monday, May 12, 2014
When are you past your prime?
by Emrys Westacott
Recently I had a discussion with a couple of old friends–all of us middle-aged guys–about when one's powers start to decline. God only knows why this topic came up, but it seems to have become a hardy perennial of late. My friends argued that in just about all areas, physical and mental, we basically peak in our twenties, and by the time we turn forty we're clearly on the rocky road to decrepitude.
I disagreed. I concede immediately that this is true of most, perhaps all, physical abilities: speed, strength, stamina, agility, hearing, eyesight, the ability to recover from injury, and so on. The decline after forty may be slight and slow, but it's a universal phenomenon. Of course, we can become fitter through exercise and the eschewing of bad habits, but any improvement here is made possible by our being out of shape in the first place.
What about mental abilities? Again, it's pretty obvious that some of these typically decline after forty: memory, processing speed, the ability to think laterally, perhaps. Here too, the decline may be very gradual, but these capacities clearly do not seem to improve in middle age. Still, I think my friends focus too much on certain kinds of ability and generalize too readily from these across the rest of what we do with our minds. More specifically, I suspect they view the cognitive capabilities that figure prominently in and are especially associated with mathematics and science as somehow the core of thinking in general. Because of this, and because these capacities are more abstract and can be exercised before a person has acquired a great deal of experience or knowledge, certain abilities have come to be identified with sharpness as such, and one's performance at tasks involving quick mental agility or analytic problem solving is taken as a measure of one's raw intellectual horsepower.
A belief in pure ability, disentangled from experiential knowledge, underlies notions like IQ. It has had a rather inglorious history, and it has been used at times to justify a distribution of educational resources favouring those who are already advantaged. Today it continues to interest those who prefer to see any assessments or evaluations expressed quantitatively wherever possible–a preference that also reflects the current cultural hegemony of science. Yet what matters to us, really, shouldn't be abilities in the abstract–how quickly we can calculate, or how successfully we can recall information–but what we actually do with these or any other abilities we possess. Is there any reason to suppose that we make better use of what we've got before we're forty?
The prevailing view has long been that in the sciences people do their most important, original and creative work early. Einstein reportedly said that "a person who has not made his great contribution to science by the age of thirty will never do so." But he would say that, wouldn't he? After all, he worked out the theory of special relativity when he was twenty-six. But Einstein was perhaps generalizing hastily from his own case. A recent study entitled "Age and Scientific Genius," published by the National Bureau of Economic Research, casts doubt on the prevailing view. After reviewing an extensive literature on the topic, the authors conclude:
In contrast to common perceptions, most great scientific contributions are not the product of precocious youngsters but rather come disproportionately in middle age. Moreover, perceptions that some fields, such as physics, feature systematically younger contributions than others do not stand up to empirical scrutiny.
Interestingly, the average age at which scientists produce their most important work is now several years older than it was in the early twentieth century when Einstein, Bohr, Heisenberg and co. were revolutionizing physics. One possible explanation of this is that at that time, because of the great paradigm shifts that had just taken place, young scientists didn't have to spend so much time learning about earlier theories that had been superseded. Today, however, the "burden of knowledge" that has to be assumed before one can expect to make an original contribution is greater.
But my main objection to my friends' claims about cognitive decline is not that they are wrong about the abilities central to scientific thinking, even if they are unduly pessimistic. After all, honesty obliges me to note that the same study of age and scientific genius cited above also makes this observation:
one of the salient features of Nobel Prize winners and great technological innovators over the 20th century is that, while contributions at young ages have become increasingly rare, the rate of decline in innovation potential later in life remains steep.
Sobering stuff if one happens to be, as the French say, d'un certain âge. No, in my view, the strongest objection to the claim that our mental powers peak in our twenties, or even in our thirties, is that in fields like literature, musical composition, and the visual arts, so many masterpieces are produced by people who are well past forty.
Now, as a philosopher I don't usually like to dirty my hands by doing empirical research, but in this case data is undeniably relevant. It's also interesting in its own right. Let's start with the visual arts. Since I don't claim any sort of expertise here, I took a shortcut and used as my representative sample the ten works that Guardian art critic Jonathan Jones considers "the greatest works of art ever." In two cases, the Chauvet cave paintings and the Parthenon sculptures, we can't say how old the artist was. But here are the other eight works, with the age of the artist when the work was completed given in brackets.
· Leonardo da Vinci, The Foetus in the Womb (c 58-61)
· Rembrandt, Self-Portrait with Two Circles (c 59-63)
· Caravaggio, The Beheading of St John the Baptist (c 36)
· Jackson Pollock, One: Number 31 (38)
· Velázquez, Las Meninas (c 58)
· Picasso, Guernica (55)
· Michelangelo (c 44-57)
· Cézanne, Mont Sainte-Victoire (painted 1902-4) (63-65)
Only two of these works were produced by artists under forty. And if Caravaggio and Pollock didn't produce too many more masterpieces after the ones mentioned here, it wasn't necessarily due to declining powers: Caravaggio died at thirty-eight, Pollock at forty-four.
How about classical composers? Here, I didn't find a convenient list of "ten greatest compositions ever," so I simply made my own list of ten celebrated works by composers who had lived well beyond forty (which excludes the likes of Mozart, Mendelssohn, Schubert, and Chopin) and would figure high up on anyone's list of "greatest classical composers." The selection isn't random; it's made with a point to prove in mind. But I think it does that rather effectively since there is widespread agreement that the works mentioned are among the greatest produced by the composer in question. Again, the age of the composer when the work was completed is given in brackets.
· Bach, Mass in B Minor (64)
· Handel, Messiah (57)
· Haydn, The Creation (66)
· Beethoven, Ninth Symphony (54)
· Verdi, Otello (74)
· Wagner, Götterdämmerung (61)
· Tchaikovsky, Sixth Symphony (53)
· Dvorak, New World Symphony (52)
· Mahler, Das Lied von der Erde (48)
We might note in passing that several of these composers produced acclaimed masterpieces at an even later date (Verdi's Falstaff, for instance, was completed when he was seventy-nine), and in some cases, the only thing preventing them doing this was that they dropped dead not long after finishing the work mentioned. Tchaikovsky, for instance, died nine days after conducting the first performance of his sixth symphony.
Literature tells a similar story. Many writers have produced what is widely regarded as their finest work long past the age of forty. Feeding, as Wittgenstein says we shouldn't, on a diet of one-sided examples, drawn exclusively, I admit, from the Western canon, I offer the following fifteen instances to support my general point. The number in brackets is the age of the author when the work was published or finished.
· Sophocles, Oedipus at Colonus (c. 90)
· Dante, The Divine Comedy (49-53)
· Chaucer, The Canterbury Tales (55)
· Cervantes, Don Quixote Part I (57), Part II (67)
· Defoe, Robinson Crusoe (59)
· Swift, Gulliver's Travels (59)
· Eliot, Daniel Deronda (57)
· Hugo, Les Miserables (60)
· Tolstoy, Anna Karenina (49)
· Dostoyevsky, The Brothers Karamazov (59)
· Hardy, Tess of the D'Urbervilles (51)
· James, The Wings of the Dove (59)
· Wharton, The Age of Innocence (58)
· Morrison, Beloved (56)
One could extend this list pretty much indefinitely, but there is no need to given the status of the works mentioned, many of which represent their creator's most acclaimed artistic achievement. Of course, there are many literary masterpieces written by authors younger than forty, but it is remarkable how often, in such cases, the writer died young, quite possibly with their best works still to come. Jane Austen died at forty-one; Emily Brontë at thirty; Anton Chekhov at forty-four; Franz Kafka at thirty-nine. To be sure, there are some who produce their best work in their twenties or thirties and never produce much of comparable quality afterwards despite a long life. Melville published Moby-Dick when he was thirty-two; Wordsworth had written nearly all his best poetry by the time he was forty. But such cases, while not exceptional, are certainly not typical. Anyway, my point is not to deny that great art can be produced by young people; it is to argue that the many great works of art produced by people in middle age and beyond support the idea that some of our important cognitive abilities can continue to grow rather than decline during those years.
On the face of it, I would say the evidence presented here falsifies the thesis that we are cognitively declining once we're past thirty, or even forty. But how might someone who wishes to defend this claim respond? Well, they might argue that after forty all our basic cognitive functions are indeed declining, but we are good at finding ways to compensate for this, rather as a soccer player in his mid-thirties masks his lack of pace with more astute positional awareness. But then the question arises: why not count this sort of ability as an important function that improves as one ages? Or they might argue that what makes the great achievements of the mature years possible is the greater knowledge base -- both of skills (know how) and subject matter (know that) -- which long experience brings. To this one could respond in a similar manner, that making good use of one's experience is another cognitive function that often improves with age. And if that seems a little abstract, even casuistic, one could point to other, more specific abilities that it is plausible to believe can continue to develop in middle age and that help to explain mature achievements like Paradise Lost or The Brothers Karamazov: for instance, the capacity for empathy, objectivity, self-awareness, and a synthetic grasp of complex wholes -- all of them elements of what we call wisdom.
Another objection to my argument could be that the geniuses I cite are not representative of humanity in general. Perhaps one of the things that differentiates them from us ordinary mortals is precisely the fact that their cognitive decline kicks in unusually late, which enables them to put their growing wealth of experience to exceptionally good use. Against this idea, though, I would argue that the evidence against a general deterioration of all one's basic faculties could be culled just as well from people working in many fields: sports coaches, politicians, lawyers, musicians, film-makers...
Finally, anyone who thinks I've been criticizing a straw man can respond appropriately with a cheap ad hominem, pointing out that my thesis is patently self-serving, coming as it does from one who is much closer to sixty than to forty. In response, I would first remind the critic that the so-called straw men in question are good friends of mine and should not be treated so dismissively. And second, I would appeal to the authority of William James, who, in his famous essay "The Will to Believe," affirms that there are circumstances where "the desire for a certain kind of truth ... brings about that special truth's existence."
Monday, April 28, 2014
Does Literary Fiction Challenge Racial Stereotypes?
by Jalees Rehman
A book is a mirror: if a fool looks in, do not expect an apostle to look out.
Georg Christoph Lichtenberg (1742-1799)
Reading literary fiction can be highly pleasurable, but does it also make you a better person? Conventional wisdom and intuition lead us to believe that reading can indeed improve us. However, as the philosopher Emrys Westacott has recently pointed out in his essay for 3Quarksdaily, we may overestimate the capacity of literary fiction to foster moral improvement. A slew of scientific studies have taken on the task of studying the impact of literary fiction on our emotions and thoughts. Some of the recent research has centered on the question of whether literary fiction can increase empathy. In 2013, Bal and Veltkamp published a paper in the journal PLOS One showing that subjects who read excerpts from literary texts scored higher on an empathy scale than those who had read a nonfiction text. This increase in empathy was predominantly found in the participants who felt "transported" (emotionally and cognitively involved) into the literary narrative. Another 2013 study published in the journal Science by Kidd and Castano suggested that reading literary fiction texts increased the ability to understand and relate to the thoughts and emotions of other humans when compared to reading either non-fiction or popular fiction texts.
Scientific assessments of how fiction affects empathy are fraught with difficulties and critics raise many legitimate questions. Do "empathy scales" used in psychology studies truly capture the psychological phenomenon of "empathy"? How long does the effect of reading literary fiction last and does it translate into meaningful shifts in behavior? How does one select appropriate literary fiction texts and control texts, and conduct such studies in a heterogeneous group of participants who probably have very diverse literary tastes? Kidd and Castano, for example, used an excerpt of The Tiger's Wife by Téa Obreht as a literary fiction text because the book was a finalist for the National Book Award, whereas an excerpt of Gone Girl by Gillian Flynn was used as a "popular fiction" text even though it was long-listed for the prestigious Women's Prize for Fiction.
The recent study "Changing Race Boundary Perception by Reading Narrative Fiction" led by the psychology researcher Dan Johnson from Washington and Lee University took a somewhat different approach. Instead of assessing global changes in empathy, Johnson and colleagues focused on a more specific question. Could the reading of a fictional narrative change the perception of racial stereotypes?
Johnson and his colleagues chose an excerpt from the novel "Saffron Dreams" by the Pakistani-American author Shaila Abdullah. In this novel, the protagonist is Arissa, a recently widowed pregnant Muslim woman whose husband Faizan was working in the World Trade Center on September 11, 2001 and was killed when the building collapsed. The excerpt from the novel provided to the participants in Johnson's research study describes a scene in which Arissa is traveling alone late at night and is attacked by a group of male teenagers. The teenagers mock and threaten her with a knife because of her Muslim head-scarf (hijab), use racial and ethnic slurs, and make references to the 9/11 attacks. The narrative excerpt does not specifically mention the word Caucasian, but one of the attackers is identified as blond and another has a swastika tattoo. They do not believe her when she tries to explain that she was also a victim of the 9/11 attacks and instead refer to her as belonging to a "race of murderers".
The researchers used a second text in their experiment, a synopsis of the literary excerpt from Saffron Dreams. This allowed Johnson and colleagues to distinguish the effects of the literary narrative style, with its inner monologue and descriptions of emotion, from the effects of the content alone. Samples of the literary text and the synopsis used by the researchers can be found at the end of this article (scroll down) for those readers who would like to compare their own reactions to the two texts.
The researchers recruited 68 U.S. participants (mean age 36 years, roughly half were female, 81% Caucasian, reporting seven different religious affiliations but none of them were Muslim) and randomly assigned them to the full literary narrative group (33 participants) or the synopsis group (35 participants). After the participants read the texts, they were asked to complete a number of questions about the text and its impact on them. They were also presented with 18 male faces that the researchers had designed with a special software in a manner that they appeared ambiguous in terms of Caucasian or Arab characteristics. For example, the faces combined blue eyes with darker skin tones. The participants were asked to grade the faces as being:
1) Arab
2) mixed, more Arab than Caucasian
3) mixed, more Caucasian than Arab
4) Caucasian
The participants were also asked to estimate the genetic overlap between Caucasians and Arabs on a scale from 0% to 100%.
Participants in the narrative fiction group were more likely to choose one of the ambiguous options (mixed, more Arab than Caucasian or mixed, more Caucasian than Arab) and less likely to choose the categorical options (Arab or Caucasian) than those who read the synopsis. Even more interesting is the finding that the average percentage of genetic overlap between Caucasians and Arabs estimated by the synopsis group was 33%, whereas it was 57% in the narrative fiction group.
Both of these estimates are way off. The genetic overlap between any one human being and another human being on our planet is approximately 99.9%. Even much of the 0.1% variation in the human genome sequences is not due to 'racial' differences. As pointed out in a Nature Genetics article by Lynn Jorde and Stephen Wooding, approximately 90% of total genetic variation between humans would be present in a collection of individuals from any one continent (Asia, Europe or Africa). Only an additional 10% genetic variation would be found if the collection consisted of a mixture of Europeans, Asians and Africans.
It is surprising that both groups of study participants heavily underestimated the genetic overlap between Arabs and Caucasians, and that simply reading the fictional text changed their views of the human genome. This latter finding is also a red flag: it suggests that general knowledge of genetics is so fragile that views can be swayed by a nonscientific literary text.
This study is the first to systematically test the impact of reading literary fiction on an individual's assessment of race boundaries and genetic similarity. It suggests that fiction can indeed blur the perception of race boundaries and challenge our stereotypes. The text chosen by the researchers is especially well-suited to defy stereotypical views held by the readers. The protagonist's Muslim husband was killed in the 9/11 attacks and she herself is being harassed by non-Muslim thugs. This may challenge assumptions held by some readers that only non-Muslims were the victims of the 9/11 attacks.
Reading the narrative text seemed to have effects on the readers that went far beyond its content matter – the story of a Muslim woman who shows significant courage while being threatened. The faces shown to the study participants were those of men, and the question of genetic overlap between Caucasians and Arabs was a rather abstract one which had little to do with Arissa's story. Perhaps Arissa's story had a broader effect on the readers. The study did not measure the impact of the narrative on additional stereotypes or assumptions held by the readers, such as those regarding other races or sexual orientations, but this is a question that ought to be investigated.
One of the limitations of the study is that it assessed the impact of the story only at a single time-point, immediately after reading the text. Without measuring the effect a few days or weeks later, it is difficult to ascertain whether this was a lasting effect. Another limitation of this study is that it purposefully chose an anti-stereotypical text, but did not test the opposite hypothesis, that some fictional narratives may potentially foster negative stereotypes.
One of my earliest memories of an English-language novel about Muslim characters is the spy novel "The Mahdi" by the British author A.J. Quinnell (a pen name of Philip Nicholson), written in 1981. The basic plot is that (spoiler alert) US and British intelligence agencies want to manipulate and control the Muslim world by installing a 'Mahdi', the long-awaited spiritual and political leader of Muslims foretold by Muslim tradition. The ridiculous part of the plan is that the puppet leader is accepted by the Muslim world as the true incarnation of the Mahdi because of a green laser beam emanating from a satellite. The beam incinerates a sacrificial animal in front of a crowd of millions of Muslims at the Hajj pilgrimage and convinces them (and the rest of the Muslim world) that God sent this green laser beam as a sign. This novel portrayed Muslims as gullible idiots who would simply accept the divine nature of a green laser beam. One can only wonder what impact reading an excerpt from that novel would have had on the perception of race boundaries by study participants.
The study by Johnson and colleagues is an important contribution to the research of how reading can change our perceptions of race and possibly stereotypes in general. It shows that reading fiction can blur the perception of race boundaries, but it also raises a number of additional questions about how long this effect lasts, how pervasive it is and whether fiction might also have the opposite effect. Hopefully, these questions will be addressed in future research studies.
Image Credit: Saffron Woman by N.M. Rehman (generated from an attribution-free, public domain photograph)
Dan R. Johnson, Brandie L. Huffman & Danny M. Jasper (2014)
Changing Race Boundary Perception by Reading Narrative Fiction, Basic and Applied Social Psychology, 36:1, 83-90, DOI:10.1080/01973533.2013.856791
Excerpt of the literary fiction sample from "Saffron Dreams" by Shaila Abdullah
This is just an excerpt from the narrative sample used by the researchers, which was 3,108 words in length (pages 57-64 from the book):
"I got off the northbound No. 2 IRT and found out almost immediately that I was not alone. The late October evening inside the station felt unusually weighty on my senses.
I heard heavy breathing behind me. Angry, smoky, scared. I could tell there were several of them, probably four. Not pros, perhaps in their teens. They walked closer sometimes, and other times the heavy thud of spiked boots on concrete and clanking chains receded into the distance. They walked like boys wanting to be men. They fell short. Why was there no fear in my heart? Probably because there was no more room in my heart for terror. When horror comes face-to-face with you and causes a loved one's death, fear leaves your heart. In its place, merciful God places pain. Throbbing, pulsating, oozing pus, a wound that stays fresh and raw no matter how carefully you treat it. How can you be afraid when you have no one to be fearful for? The safety of your loved ones is what breeds fear in your heart. They are the weak links in your life. Unraveled from them, you are fearless. You can dangle by a thread, hang from the rooftop, bungee jump, skydive, walk a pole, hold your hand over the flame of a candle. Burnt, scalded, crashed, lost, dead, the only loss would be to your own self. Certain things you are not allowed to say or do. Defiant as I am, I say and do them anyway.
And so I traveled with a purse that I held protectively on one side. My hijab covered my head and body as the cool breeze threatened to unveil me. I laughed inwardly as I realized I was more afraid of losing the veil than of being mugged. The funny part of it is, I desperately wanted to lose my hijab when I came to America, but Faizan had stood in my way. For generations, women in his household had worn the veil, although none of them seemed particularly devout. It's just something that was done, no questions asked, no explanations needed. My argument was that we should try to assimilate into the new culture as much as possible, not stand out. Now that he was gone, losing the hijab meant losing a portion of our time together.
It had been just 41 days. My iddat, bereavement period, was over. Technically I was a free woman, not tied to anyone, but what could I do about the skeletons in my closet that wouldn't leave me alone?"
Excerpt of the Synopsis used by the researchers as a comparator:
This is the corresponding excerpt from the synopsis used by the researchers. The full-length synopsis was 491 words long:
"The scene starts with Arissa getting off the subway train. She is being followed. Most commuters have already returned home, so it is not the safest time to be traveling alone. Four people are walking behind her. Initially confused by the lack of fear in her heart, she realizes that it is the consequence of losing someone so close to her. It is ironic that she is wearing her hijab, a Muslim veil. She wanted to get rid of it when she came to America, but her husband, Faizon, insisted she keep it. Following his death, keeping the hijab was a way of keeping some of their time together. It has been 41 days since the attack, and Arissa's iddat, bereavement period, is over. She is a free woman, but cannot put aside her grave feelings of loss."
Monday, April 21, 2014
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art
by Yohan J. John
No one knows exactly how life began, but a pivotal chapter in the story was the formation of the first single-celled organism -- the common ancestor to every living thing on the planet. I like to think of the birth of life as the creation of the first boundary -- the cell membrane. That first cell membrane enclosed a drop of the primordial soup, creating a separation between inside and outside, and between life and non-life. Through this act of individuation the cell could become a controlled environment: a chemical safe zone for the sensitive molecular machinery needed to maintain integrity and facilitate replication. The game of life consists in large part of perpetuating the difference between inside and outside for as long as possible. Death, then, is the dissolution of difference. But the paradox at the heart of life is that the inside cannot survive without the outside. The cell requires raw materials -- nutrients and energy -- to sustain itself and to reproduce, and these must be sought outside the safe zone, in the wild and unpredictable outside world.
The cell membrane has a dichotomous role. It must preserve the cell’s identity as an entity distinct from everything outside it, but it must not be an impenetrable wall. It must be a gateway through which the cell can absorb raw material and eject waste, but it cannot allow the inside to become inundated by the outside. It meets this challenge by being selectively permeable, carefully overseeing the traffic between the inside and the outside. The cell membrane must also be flexible, because it serves the cell in locomotion and consumption. In a single-celled organism, the cell membrane is therefore a primitive sense organ, a transportation system and a digestive system, all rolled into one.
The birth of life was a moment of cleaving: when the first cell membrane enveloped its drop of primordial ooze, it cleaved the inside from the outside, but it also became the conduit through which the inside could cleave to the outside. Like Janus, the two-faced Roman god of beginnings and endings, of doors and passageways, the cell membrane is a sentry looking in two directions simultaneously. Given its role in cellular transaction, transition and transformation, the cell membrane’s function might even be described as a precursor to intelligence.
The connection between boundaries and intelligence may run quite deep. In multicellular organisms like humans, the skin is the boundary between inside and outside. Skin cells, as it turns out, are related to neurons. During embryonic development, cells in the ectoderm, which is the outermost layer of the embryo, gradually differentiate to become the cells of the skin and the nervous system. (Researchers have recently found ways of turning skin cells into neurons, suggesting that the line between these two kindred cells may be somewhat permeable.) The skin of a multicellular organism is much like the cell membrane of a single cell: it separates inside from outside, providing a physical boundary for the organism. But the inkling of intelligence in that first semipermeable membrane finds its full expression in the nervous system, which patrols a very different sort of boundary: the line between predictable and unpredictable, between known and unknown.
Life is an obstacle course full of things an organism needs or desires, like food and shelter, and things it would prefer to avoid, like predators or foul weather. Maximizing the good while minimizing the bad requires being able to use patterns in the environment to anticipate what is going to happen. Plants must be sensitive to the rhythmic pattern of the seasons. Animals in turn must predict the patterns of plants and other animals. The evolution of the central nervous system -- the brain and the spinal cord -- was a great leap forward in the pattern-recognition capabilities of living things. The ability to recognize and categorize the patterns in nature and use them to survive and thrive is central to intelligence. It allows living things to find (and create) islands of order and stability in a swirling sea of change and uncertainty.
But it’s dangerous to just stay put once you’ve found an island of order. Resources are limited and change is the only constant -- the boundary between the solid ground of reliable knowledge and the encircling sea of unpredictability is in a state of flux. Nature seems to always find a way of casting us out of the gardens of Eden we create or discover. A pattern-seeker must be vigilant, staying on the lookout for unforeseen dangers and new opportunities. This vigilance takes the form of exploration, and even very simple animals do it. Insect colonies have specialized scouts that search for fresh sources of food. Introduce a new object into the cage of a lab rat, and the first thing it does is investigate it thoroughly.
We tend to describe the behavior of animals in purely utilitarian terms. The exploratory behavior of rats, or birds, or bees, is just a combination of foraging for food, looking for mates, and keeping an eye out for predators. When it comes to human culture, however, utilitarianism can often seem like a bit of a stretch. Is it fear or hunger that drives people to investigate the depths of the ocean, or the far reaches of space?
We humans get bored on our islands of order, even though we need them for our survival and sanity. We also like to sail off into the unknown from time to time. What constitutes the unknown varies from person to person -- it’s not just scientists or philosophers that contend with it. Only a fraction of the world’s population has the inclination and the good fortune to experience first hand the outer limits of scientific knowledge, but a far larger number of people can contend with the boundaries of their worldviews in the domains of art and culture. The edge is where the action is -- on the beach, where the chaotic sea meets the tranquil shore. But what is it that drives us to the experiential edge in the first place? And does it have anything in common with the forces that drive living things out of their comfort zones in search of sustenance?
The difference between a desire and a drive is that a desire subsides when the goal is reached, whereas a drive is independent of the attainment of the goal -- the act of striving becomes pleasurable in itself. Living beings have a variety of desires that can be temporarily satiated, but the lust for life is a drive, not a desire. In the long run life appears to revel in the very attempt to perpetuate itself. Intelligent beings, meanwhile, seem to revel in the attempt to expand their islands of order, fighting back the lapping waves of the unknown.
We have a name for the drive towards the unknown -- it’s called curiosity. Jürgen Schmidhuber, an artificial intelligence researcher, has a theory of “computational aesthetics” that offers us a vivid mathematical analogy for curiosity. The theory can be summed up in one bold assertion: that interestingness is the “first derivative” of beauty. Readers who detect a whiff of scientific imperialism will hopefully bear with me as I unpack this idea, which need not be taken as anything more than playful speculation. I admit, colloquial and intuitive concepts like “beauty” or “interestingness” often get bent out of shape a bit when scientists examine them, but this is not necessarily a bad thing. Sometimes we need to distance ourselves from our intuitions to discern their outlines more clearly.
According to Schmidhuber’s computational theory of aesthetics, the subjective beauty of a thing is tied to the minimum number of bits required to describe it: the shorter the description, the more beautiful the thing. Since descriptions vary from person to person, beauty is in the eye of the beholder. A definition of beauty based on bits of information is not in itself particularly alluring, but it can be improved if we see it as an attempt to capture subjective simplicity or elegance. It is perhaps unsurprising that a scientist’s definition of beauty has much in common with Occam’s Razor.
However, beauty is not necessarily interesting. We also seek the shock of the new, the excitement of the unusual. So Schmidhuber goes on to define interestingness as the rate of change of beauty -- the time-derivative of the subjective description length. A derivative measures the rate of change of one thing with respect to something else. The time-derivative of distance is speed (the rate at which your distance from some point changes), and the time-derivative of speed is acceleration (the rate at which your speed changes). For something to be interesting, then, the observer’s ability to describe it must change with time. So interestingness is a dynamic quality, whereas a thing can be beautiful even if it never changes.
Some examples will help us understand what this means. Most people will agree that staring at a blank screen is quite a boring experience. A blank screen is extremely simple from an information-theoretic perspective, and so its description length will be very short. The description might be something like “Every pixel is black”. There is clearly a pattern, but it’s trivially simple. The information on a blank screen can be easily compressed. White noise sits at the other extreme. Somewhat counter-intuitively, information theory tells us that random noise is rich in information, so its description length is extremely long. Totally random information cannot be compressed. An accurate description of white noise on a screen would require specifying what is happening in each and every pixel. If a pattern is something that has structure and internal coherence, then randomness is the absence of pattern. Most people find random white noise boring too. What people find interesting lies somewhere in the middle -- between what is too easily compressed, like a blank screen, and what is totally incompressible, like white noise. We like patterns that are simple, but not too simple; complex, but not incomprehensibly so.
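One quick way to make the blank-screen/white-noise contrast concrete is to let a general-purpose compressor stand in for the observer's description scheme. The sketch below is my own illustration, not part of Schmidhuber's work: it uses Python's zlib to measure how many bytes are needed to "describe" three kinds of data.

```python
import zlib
import random

def description_length(data: bytes) -> int:
    """Approximate the description length of `data` by its zlib-compressed size."""
    return len(zlib.compress(data, 9))

n = 10_000
blank = bytes(n)                                         # "every pixel is black"
noise = bytes(random.getrandbits(8) for _ in range(n))   # white noise
pattern = b"abcd" * (n // 4)                             # a simple repeating pattern

print(description_length(blank))    # tiny: trivially compressible
print(description_length(pattern))  # also tiny: structured, hence compressible
print(description_length(noise))    # close to n: essentially incompressible
```

The blank screen and the repeating pattern compress to a few dozen bytes, while the random noise barely compresses at all, mirroring the point that randomness is information-rich but pattern-free.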
Schmidhuber’s theory is couched in the language of computer science and artificial intelligence, which is why the concept of data compression plays such a prominent role. We don’t really know if the brains of humans and animals compress experience in the same sense that a computer algorithm does. But we do know that living things use pattern-recognition to make useful predictions about their environments. We compare the patterns we’ve encountered in the past with our present experience, and try to anticipate the future. We categorize the patterns we encounter -- poisonous or edible, sweet or bitter, friend or foe -- so that if we encounter them again, we know how to react. Rather than compressibility per se, perhaps what we find interesting is the possibility of enhancing our categories so they encompass more of our experiences. Knowledge consists of having comprehensive categories for as many experiences as possible, and knowing how to respond to each category.
What might interestingness look like? Let me describe a toy system that is confronted by something unexpected, and shows a spurt of interest. Let’s say we have a system that is experiencing something beautiful. The subjective beauty “B” can change over time. In the diagram above, beauty is the blue line, and it stays boringly constant for a while, but at the halfway point it suddenly changes. Imagine a pleasant but predictable movie that suddenly becomes unpredictable in the middle. The beauty increases! The system has an expectation “E” which in our toy system is a memory of the past value of B. The red line in the diagram is the expectation. The green line represents the interest level “I”, which depends on the difference between the beauty and the expectation. When expectation and reality don’t line up, the value of E is different from B, so the system’s interest level shoots up. But eventually E gets accustomed to the new value of B, and the interest level goes back to zero. If the system had perfect expectations and could perfectly predict the change to the value of B, then there would be no increase in the interest level. A curious system is addicted to these bursts of interest, and actively seeks them out. 
As it turns out, the brain’s dopamine neurons fire in bursts of this sort when something unexpectedly good happens. Researchers call this a “reward prediction error” signal, and it is one of the reasons many people think of dopamine as the “pleasure chemical”. But this misses a subtlety -- if the pleasure is completely predictable, the dopamine cells don’t fire. This dopamine cell pattern is more of a novelty signal than a pleasure signal. (There seem to be several other things that dopamine does, so even calling it a novelty chemical is an oversimplification.) Neural network theorists often employ the dopamine burst as a “reinforcement signal” that allows a network to learn from experience and improve its ability to categorize and predict. 
As we simplify, expand and refine our categories we push forward the boundary between what we understand and what we still don’t quite have a handle on. We expand our islands of order, reclaiming land from the sea of unpredictability. Many of the categories humans obsess about have little or nothing to do with the struggle to survive. Curiosity pushes us to proliferate our aesthetic categories -- and in extreme cases it leads to the infinitesimal parcellations of genre and sub-genre that the internet so effectively reveals and encourages. (I invite the reader who does not know what I am talking about to examine the various sub-genres of heavy metal music.)
Curiosity is the drive towards interestingness, and it brings us to the boundaries of what we understand. A trip to a modern art museum should adequately establish that we don’t just find any baffling experience interesting. We seek experiences that are in the sweet spot -- not totally predictable and monotonous, but not random and formless either. During an interesting experience we don’t know exactly what is going on, but we get the feeling that meaningful resolution is but a few moments away. So a Hollywood blockbuster that is too formulaic and predictable is not very interesting, but an experimental art film with no formula at all can bore us to tears too. We like movies with a few twists -- but in order to recognize them as twists we have to have some expectation of what normally happens. A really interesting movie flirts with the boundary between what we know well enough to anticipate, and what surprises and confounds us.
So how does curiosity help us “compress” or improve our categories? Think of the concept of genre. In order to get a subjective sense of what a genre is, you need to experience many examples. Curiosity is what draws you towards this experience. Even if you go to Wikipedia or tvtropes.com and read up on the conventions of a given genre, you still need first-hand experience to understand how those conventions manifest themselves. You need to listen to several blues songs before you can be sure you know what the basic blueprint is. And the more you listen, the more musical structure you can perceive and predict. Once you understand the conventions -- once you know what to expect -- you can experience a burst of interestingness when someone subverts those conventions and confounds your expectation. A blues aficionado is well placed to appreciate the way a band like Led Zeppelin reinterprets the genre’s conventions. In the experience of such aesthetic subversion, you are once again confronted by what is strange and unpredictable, and the curiosity engine becomes fired up once more.
What drives people to police their subjective aesthetic boundaries so zealously? What makes people so concerned with questions of authenticity or originality in art and music? I think going back to the cell membrane might give us some ways to think about such questions. The cell membrane separates inside from outside, mediating interactions between the two. In maintaining a chemical difference between the inside and the outside, it preserves the identity of the cell as an entity that is distinct from the environment. Perhaps aesthetic boundaries -- and mental boundaries more generally -- are central to our notions of identity. To carve out a distinct identity is to maintain a difference between an in-group (which could be just one person) and an out-group. Just as the cell membrane defines the contours of the cell, artistic and intellectual boundaries may define the contours of a personality, or of a community. For people whose identities are wrapped up in difference, to merge with the mainstream might seem a kind of cultural death: a dissolution of the boundary that sustains individuality and identity.
Staying on the boundaries of what is familiar in order to find sweet spots of interestingness allows us to expand our experiential horizons and reaffirm our existences as distinct individuals. But this can also be quite a tiring experience. What is true for a cell is true for an individual, and perhaps even for a culture -- maintaining a boundary takes energy! Most of us aren’t critics -- we can’t spend all our time refining our categories of experience, or sustaining idiosyncratic differences of taste and opinion. Sometimes we need to return to our comfort zones and replenish our supplies. Visiting a museum, for instance, is an experience that can be simultaneously interesting and mind-numbing. (In this age of endless online novelty, I can’t be the only one who seeks out tried and tested experiences -- comfort food, old familiar songs, trashy television -- as an antidote to too much interestingness!) Perhaps merging with the mainstream from time to time is not such a bad thing.
Individualism is taken as a self-evident virtue in modern liberal societies. But given all the effort involved in maintaining the boundary between inside and outside, between the Self and the Other, the opposite movement can be an act of liberation: dissolving the Self by forgoing, for a time, the maintenance of difference. Consider those moments during a sporting event (like a Wave) or a musical gathering (like a Rave) when everyone is moving in unison. It seems as if there is a kind of ecstasy in this voluntary surrender of individuality and difference.
Aesthetic experience, then, is a twofold process. On the one hand, it leads us to curiosity and wonder, which draw us away from our islands of certainty, transforming the contours of our selves. On the other hand, it offers us dissolution and union, which pull us back from the margins, towards community and commonality. Perhaps the dance of aesthetic experience is a microcosm of the great dance of life -- a dance that began with the undulations of that first cell membrane. We sway in the direction of the unknown, and then drift back to the comfort of the known.
Notes and References
 The Genesis story of the fall from grace tells of how man and woman were cast out from the Garden of Eden. In The Power of Myth, Joseph Campbell interprets the story as follows: “Whenever one moves out of the transcendent, one comes into a field of opposites. One has eaten of the tree of knowledge, not only of good and evil, but of male and female, of right and wrong, of this and that, and of light and dark.” Campbell’s “field of opposites” is where pattern-recognition and categorization happen -- it is the field of boundaries and differences, and also of self-consciousness. And this field is no paradise, because it is constantly threatened by the unfamiliar and the unpredictable.
 Jürgen Schmidhuber summarises his theory of aesthetics in a paper entitled “Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes”.
 The diagram shows the results of a little simulation I coded up in Python. It’s a rudimentary “differentiator” that compares the present reality (B) with the recent past (E), and constantly updates its expectations (E). The burst of interest (I) happens during the transient period when reality exceeds expectation (when B > E). Many simple models of dopamine cells use a similar principle. Similar mechanisms can also be employed for edge-detection in a visual image, a crucial stage in object recognition. The system I demonstrate is quite simple -- it just expects the present to resemble the recent past. You could say that a major goal of artificial intelligence and computational neuroscience is to create systems that have refined, flexible expectations with which to anticipate reality.
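I haven't reproduced the original script here, but a minimal sketch of such a differentiator might look like the following. The learning rate `alpha` and the exact update rule are my own assumptions; the idea is just that expectation E chases reality B, and interest I spikes during the lag.

```python
def simulate(b_values, alpha=0.2):
    """Return lists of expectation E and interest I over time.

    alpha is an assumed learning rate: how quickly E catches up with B.
    """
    e = b_values[0]                   # start with expectation matching reality
    expectations, interests = [], []
    for b in b_values:
        interest = max(0.0, b - e)    # burst only while reality exceeds expectation
        e += alpha * (b - e)          # E drifts toward the current B
        expectations.append(e)
        interests.append(interest)
    return expectations, interests

# A "pleasant but predictable movie" that suddenly gets better halfway through:
beauty = [1.0] * 50 + [2.0] * 50
E, I = simulate(beauty)
# Interest is zero while B is constant, spikes at the change, then decays
# back toward zero as E habituates to the new value of B.
```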
 Perhaps the hype cycle represents a burst of curiosity at the societal level. And perhaps social media frenzies are the dopamine bursts of the internet’s hive mind?
My Genome Report Card
by Carol A. Westbrook
Fewer than 100,000 people in the entire world have had their genomes sequenced. I am now one of them. As I wrote in 3QuarksDaily in December, I went into this with some trepidation--you never know what bad news lurks in your genome! I promised to give a report of my results, and here it is.
To get my genome sequenced, I enrolled in Illumina's "Understand Your Genome" Program. Illumina is one of the few companies licensed by the FDA to perform whole genome sequencing (WGS) for medical diagnosis--other consumer products, such as Ancestry.com, National Geographic's Geno 2.0, and 23andMe, provide only a limited analysis. I sent in a blood sample in November, and in February received a detailed analysis by Illumina's genetic counselors. In March I attended the "Understand Your Genome" conference, where I received an iPad with my WGS uploaded into the "MyGenome" app, training on the use of the app, and a fascinating daylong seminar which explored the interpretation and medical uses of genome sequences. My daughter, a medical student, attended the program with me.
Viewed on the iPad, my genome sequence consists of two similar, but not identical, parallel lines of letters, one from each chromosome. There are only 4 letters, A, C, G, and T, representing the four DNA nucleotides that are aligned to make the sequence. A human sequence is about 6 billion nucleotides long, with half inherited from one parent and half from the other, plus a few new mutations that arose on their own, probably fewer than 100. Thus, from a family perspective, a person's DNA sequence is 50% identical to that of each parent, child, or sibling, 25% identical to that of each grandparent or grandchild, and so on out to distant relatives. My genome is very similar to every other person's, but it is not identical to anyone's. No one has ever had the same DNA as me, and no one ever will -- it is what makes me uniquely me.
How different am I from everyone else? My genetic analysis showed that I have 3,524,186 individual nucleotide differences from the "average" genome to which it was compared, reference genome hg19, NCBI build 37. This is about 0.05% variation, which is typical for most people. To put this in perspective, if you were to compare my DNA to that of our two most closely related primate species, bonobos and chimpanzees, the differences would be over 4%; comparing me to Neanderthal man, however, you would find only 0.3% variation. So 0.05% is small enough to make me human, but large enough to make me a unique individual.
Of the 3.5 million variants in my genome, only about 13,000 produce changes in the protein-coding sequences of genes, impacting 1,222 "conditions" (diseases or traits). The great majority of these changes were considered "benign," meaning they have been validated not to cause disease, or they were "variants of unknown significance," or VUS. A VUS has not been linked to disease, but disease has not been excluded, either; the significance of many of these VUSs will become clear as more genomes are sequenced and the database expands. We are not sure what to make of the other 3,511,186 variants that occur outside of genes--some may be significant, but most are probably silent passengers that were picked up during evolution. Again, we'll learn more as the database expands.
Of the 1,222 conditions for which I have variants, only 4 are significant. Three are genes for recessive diseases, which makes me only a carrier, since you need two copies to have a recessive disease. Two of these genes, for galactosemia and Bardet-Biedl syndrome, cause very rare, debilitating diseases of children. My own children have a 50% risk of being carriers, though it is very unlikely that their partners are carriers too, so there is little risk that their future children will have the disease. They could be tested prior to having my grandchildren. The third recessive gene is for hemochromatosis, a disease of iron overload, which is easily treated in its early, silent stages, but can cause liver cirrhosis if it is not. The hemochromatosis gene is quite common, as one in 200 people of European background are carriers. In fact, it is possible that some of my relatives may actually have the disease; fortunately for them, hemochromatosis is easily diagnosed with a blood test for ferritin, or an inexpensive DNA test.
One surprising result was that I have both recessive genes for TPMT deficiency, which, strictly speaking, is not a disease but a variation in drug metabolism. A deficiency in TPMT, or "thiopurine S-methyltransferase," makes me unable to metabolize three medications: 6-mercaptopurine, 6-thioguanine, and azathioprine. If I took one of these medications I would get deathly ill; fortunately, these drugs are only used for leukemia treatment or transplants. I will keep this in mind should I ever need them. About 0.3% of the population also has TPMT deficiency.
Now on to the diseases which develop later in life, what I call the "AARP diseases." Many participants opted out of learning whether they have one of these scary genes, but I had already decided that I wanted everything revealed. For cancer risk, I was pleased to find that I don't carry any of the known genes. I was also relieved to find that I don't carry any of the known genes for neurologic conditions, in particular the genes for Parkinson's disease, which affected my late mother when she was in her 80's. I also do not carry the genes for early-onset Alzheimer's dementia. Illumina does not analyze for late-onset Alzheimer's dementia, which is the more common form that attacks older adults, though we were given the coordinates if we wanted to check on our own. To do this I used the MyGenome app and punched in the WGS location. I found that I have one copy of APOE-4 -- increased risk -- and one copy of APOE-2-- protective. My risk, then, is neutral. Whew! Looks like I lucked out in the AARP diseases.
That, in a nutshell, is my genome report. Was it valuable? Absolutely. The value to me was not in learning what I have, but what I don't have. I was reassured that I am reasonably healthy, and likely to be so for a few more years. I don't have an increased cancer risk, and I don't have a tendency to blood clots. Except for TPMT deficiency, I don't have any drug-metabolism variants, which means my risk of unexpected side effects from medication is low. My health care costs are likely to remain lower than average and I will probably go on being healthy for a long time. These conclusions will influence both my health insurance choices and my financial planning for retirement.
You can begin to see the impact that WGS might have on your own health, as well as on your health care costs. Today there are only two medical uses for WGS that are accepted and reimbursed by insurance: the identification of unknown diseases of children, and cancer genome analysis for chemotherapy targets--and the cancer use is still not widely accepted. But there are many more ways we could improve medical care with WGS. Imagine the complications and deaths that would be avoided, and the wasted health dollars that would be saved, if your pharmacy had a list of your drug metabolism variants so they could identify--in advance--if you are likely to have serious side effects, or if a particular drug won't be effective for you. We could actually do this today! And if a person knew in advance he had a tendency to some diseases and not others, he could focus his health care dollars on screening and prevention strategies where they will have the most impact. This will be even more relevant as our knowledge base expands.
I cannot recommend WGS to everyone -- yet -- but it's in our future, especially as the price is expected to drop below the $1000 mark, less than the cost of a single CAT scan. At present, too few genomes have been sequenced and correlated with medical information to be able to interpret much of what is present in a WGS. This will change over the next few years. There are projects around the globe doing just this, such as the 100,000 Genomes Project in the UK and the Million Human Genomes Project in China. In the US, the Personal Genome Project is collecting sequences such as mine to do these studies. The potential impact of WGS technology is enormous, as it will lead to more effective, personalized treatment of disease and, more importantly, to better health.
At some time in the not-too-distant future, everyone will have his or her own WGS. I'm pleased to be an early adopter.
Monday, March 31, 2014
Sharing Our Sorrow Via Facebook
by Jalees Rehman
Geteiltes Leid ist halbes Leid ("Shared sorrow is half the sorrow") is a popular German proverb which refers to the importance of sharing bad news and troubling experiences with others. The therapeutic process of sharing takes on many different forms: we may take comfort in the fact that others have experienced similar forms of sorrow, we are often reassured by the empathy and encouragement we receive from friends, and even the mere process of narrating the details of what is troubling us can be beneficial. Finding an attentive audience that is willing to listen to our troubles is not always easy. In a highly mobile, globalized world, some of our best friends may be located thousands of kilometers away, unable to meet face-to-face. The omnipresence of social media networks may provide a solution. We are now able to stay in touch with hundreds of friends and family members, and commiserate with them. But are people as receptive to sorrow shared via Facebook as they are in face-to-face contacts?
A team of researchers headed by Dr. Andrew High at the University of Iowa recently investigated this question and published their findings in the article "Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook". The researchers created three distinct Facebook profiles of a fictitious person named Sara Thomas who had just experienced a break-up. The three profiles were identical in all respects except for how much information was conveyed about the recent (fictitious) break-up. In their article, High and colleagues use the expression "emotional bandwidth" to describe the extent of emotions conveyed in the Facebook profile.
In the low bandwidth scenario, the profile contained the following status update:
"sad and depressed:("
The medium bandwidth profile included a change in relationship status to "single" in the timeline, in addition to the low bandwidth profile update "sad and depressed:(".
Finally, the high emotional bandwidth profile not only contained the updates of the low and medium bandwidth profiles, but also included a picture of a crying woman (the other two profiles had no photo, just the standard Facebook shadow image).
The researchers then surveyed 84 undergraduate students (enrolled in communications courses, average age 20, 53% female) and presented them with screenshots of one of the three profiles.
They asked the students to imagine that the person in the profile was a member of their Facebook network. After reviewing the assigned profile, each student completed a questionnaire asking about their willingness to provide support for Sara Thomas using a 9-point scale (1 = strongly disagree; 9 = strongly agree). The survey contained questions that evaluated the willingness to provide emotional support (e.g. "Express sorrow or regret for her situation") and network support (e.g. "Connect her with people whom she may turn to for help''). In addition to being queried about their willingness to provide distinct forms of support, the students were also asked about their sense of community engendered by Facebook (e.g., "Facebook makes me feel I am a part of a community'') and their preference for online interactions over face-to-face interactions (e.g., "I prefer communicating with other people online rather than face-to-face'').
High and colleagues hypothesized that the high emotional bandwidth profiles would elicit greater support from the students. In face-to-face interactions, it is quite common for us to provide greater support to a person – friend or stranger – if we see them overtly crying, so the researchers' hypothesis was quite reasonable. To their surprise, the researchers found the opposite. The willingness to provide emotional or network support was significantly lower among students who viewed the high emotional bandwidth profile! For example, average emotional support scores were 7.8 among students who saw only Sara's "sad and depressed:(" update (low bandwidth), but the scores were only 6.5 among students who also saw the image of Sara crying and her relationship status changed to single (high bandwidth). Interestingly, students who preferred online interactions over face-to-face interactions, or those who felt that Facebook created a strong sense of community, responded positively to the high bandwidth profile.
There are some important limitations of the study. The students were asked to evaluate whether they would provide support to a fictitious person by imagining that she was part of their Facebook friends network. This is a rather artificial situation because actual supportive Facebook interactions occur among people who know each other. It is not easy to envision support for a fictitious person whose profile one sees for the first time. Furthermore, "emotional bandwidth" is a broad concept and it is difficult to draw general conclusions about "emotional bandwidth" from the limited differences between the three profiles. Increasing the sample size of the study subjects as well as creating a broader continuum of emotional bandwidth differences (e.g. including profiles which include pictures of a fictitious Sara Thomas who is not crying, using other status updates, etc.), and also considering scenarios that are not just related to break-ups (e.g. creating profiles of a fictitious grieving person who has lost a loved one) would be useful for an in-depth analysis of "emotional bandwidth".
The study by High and colleagues is an intriguing and important foray into the cyberpsychology of emotional self-disclosure and supportive communication on Facebook. This study raises important questions about how cyberbehavior differs from real world face-to-face behavior, and the even more interesting question of why these behaviors are different. Online interactions omit the dynamic gestures, nuanced intonations and other cues which play a critical role in determining our face-to-face behavior. When we share emotions via Facebook, our communication partners are often spatially and temporally displaced. This allows us to carefully "edit" what we disclose about ourselves, but it also allows our audience to edit their responses, unlike the comparatively spontaneous responses of a person sitting next to us. Facebook invites us to use the "Share" button, but we need to remember that online "sharing" is a sharing between heavily edited and crafted selves that is very different from traditional forms of "sharing".
Acknowledgments: The images from the study profiles were provided by Dr. Andrew High, copyright of the images - Dr. Andrew High.
Reference: Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook, AC High, A Oeldorf-Hirsch, S Bellur, Computers in Human Behavior (2014) 34, 79-88
Monday, March 03, 2014
Pale Terraqueous Globes
by Alexander Bastidas Fry
Imagine that the closest star beyond the Sun has a planet orbiting it about the size of Earth. Visualize what your sunset would look like on this distant planet. Perhaps there would be two stars at the center of this solar system. Your sunset would be breathtaking. You could even visualize what the Sun would look like from this planet – just another unassuming star in the sky. You don't have to merely imagine that such a planet might exist. A planet like this really does exist – of course, you'd still have to imagine the part where you are on the surface of this world. The Alpha Centauri star system, essentially a triple star system of Alpha Centauri A, Alpha Centauri B, and Proxima Centauri, has just such a planet. There is a planet in the sky waiting for us at a distance just two hundred and seventy thousand times further than the Earth is from the Sun. The surface of this planet is near 1,500 degrees, so we wouldn't want to be there, but astronomers are now finding similar planets routinely. There may be a planet just the size of Earth, at a nice temperature, quite near us, galactically speaking. We are searching.
Most planets don't seem to be much like Earth. In fact, so far we haven't found a single planet that has a temperature and size similar to Earth's, but part of the problem is that finding big giant planets like Jupiter is easy, while finding small rocky planets like Earth is hard. Still, we are on the edge of discovery. All in all, Earth-like planets likely abound. In fact, with 95% confidence there is an Earth-size planet in the habitable zone of a small star within 23 light-years of us. The habitable zone is the place where a planet would be neither too hot nor too cold – where a planet wouldn't see its oceans boiled off or frozen into desolate ice tundra. Habitable planets are common in our galaxy, and by galactic standards not very far apart. On average, Earth-like planets are only 13 light-years apart.
Just a few years ago we knew very little about the characteristics or numbers of planets beyond our solar system—the unknown extrasolar planets. Today we know that most stars host at least one planet. This is revolutionary: not only do other stars with other planets exist, they are downright common. This new information was harvested by the Kepler space telescope, which systematically surveyed 145,000 stars in the direction of the constellation Cygnus for the past four years. This careful survey allows us to statistically extrapolate the occurrence of planets for each of the hundreds of billions of other stars in the Milky Way. We have observed that, on average, each star in the Milky Way has more than one planet. There are several ways to detect or infer the presence of extrasolar planets. The most common and useful methods to date are radial velocity detection and transit detection.
The radial velocity method of detecting planets relies upon Newton's third law of motion: every force has an equal and opposite force. So as the Earth, or any planet, swings around a star, the star also swings around the system's common center of mass. If that movement is radial (parallel to our line of sight), then we can observe the precise variation over time in the wavelength of light emitted by the star (the Doppler shift) to infer the existence of a massive object, like a planet, orbiting that star. Stars moving towards or away from us at speeds as small as 1 meter per second can be detected using the radial velocity technique.
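To get a feel for how small that signal is, here is a back-of-the-envelope sketch (my own, not from the article) using the non-relativistic Doppler relation Δλ/λ = v/c:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_nm(wavelength_nm, v_radial_m_s):
    """Wavelength shift in nanometers for a given radial velocity."""
    return wavelength_nm * v_radial_m_s / C

# For green light at 500 nm and a 1 m/s stellar wobble:
shift = doppler_shift_nm(500.0, 1.0)
# shift is roughly 1.7e-6 nm -- about a millionth of a nanometer,
# which is why radial-velocity spectrographs must be extraordinarily
# stable and precisely calibrated.
```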
The transit method of planet detection looks for minor eclipses. We detect planets by watching them eclipse their host star, transiting across its face. Such eclipses are far from total and are exceptionally hard to notice. This is the method that the Kepler telescope utilizes. If a star's light dims or brightens we may take notice, but it could be a stellar flare, a binary star companion, noise in the data, or a myriad of other effects. But if the star dims by the same amount over a repeated period, then we can take this as evidence that a planet may be orbiting that star.
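The size of a transit signal can be sketched with a simple geometric estimate (my own illustration, not from the article): the fractional dimming is approximately the ratio of the planet's disk area to the star's, i.e. (R_planet / R_star)².

```python
# Approximate radii in meters
R_SUN_M = 6.957e8
R_EARTH_M = 6.371e6
R_JUPITER_M = 6.991e7

def transit_depth(r_planet_m, r_star_m):
    """Fractional dimming when the planet crosses the star's face."""
    return (r_planet_m / r_star_m) ** 2

earth_depth = transit_depth(R_EARTH_M, R_SUN_M)      # ~8.4e-5, i.e. 0.008%
jupiter_depth = transit_depth(R_JUPITER_M, R_SUN_M)  # ~1e-2, i.e. ~1%
# A Jupiter-size planet blocks roughly a hundred times more light than an
# Earth-size one, which is why the giants are so much easier to find.
```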
On February 26, 2014 the Kepler satellite science team revealed observations of 715 new planets based on data taken over the last four years. Currently the Kepler satellite is in a bit of a bind. The reaction wheels that allow it to precisely orient itself in space have failed, so these new planets come from analysis of data already on hand, and there is no fix for Kepler in sight. The next frontier in extrasolar planet detection relies upon a technique that has not been possible before: astrometry, the precise measurement of the positions of stars. Stars move when tugged upon by planets that may be orbiting them (the same principle as the radial velocity method, but here the movement is in the plane of the sky). Less than a month ago the Gaia space telescope settled into its orbit, where it will soon begin to observe and pinpoint the positions of nearly a billion stars. We don't expect that most of these stars will have a detectable planet (yes, they may have planets, but not detectably so), but we expect to find some strange worlds for sure.
When we think about other planets we might have to change our expectations somewhat. Most stars are slightly less massive and cooler than our Sun; thus, for a planet to have the same surface temperature as the Earth, it would need to orbit closer. Its year would be shorter. And, unlike in science fiction, it isn't likely that a planet would have an atmosphere comfortable for Earth-accustomed organisms. The Earth's breathable atmosphere is generated by the collective activity of trees and green algae in the ocean. Ultimately, researchers have urged us to consider that planets are so unique that habitability should be evaluated on a case-by-case basis. In fact, even the Earth's long-term habitability is in question. The Earth exists precariously close to the inner edge of the habitable zone, where it might one day be too warm for comfort. In billions of years the Sun will most certainly heat up and expand to the point that Earth will be a poor place to look for life.
Philosophically, it feels as if the existence of other Earth-like planets is monumental. Yet given the distances to these objects it is hard to fathom any tangible consequence for generations to come. François-Marie Arouet, better known by his pen name Voltaire, was a natural philosopher and one of the first people to consider with objective reason what other planets might be like. Voltaire's style was a mix of story and inquiry, as was common at the time. One particular short story he wrote alluded to the myriad planets he speculated might exist. The story, Memnon, the Philosopher of Human Wisdom, tells of Memnon, who decides to become a philosopher one day, and upon that same day loses his eye, his health, his fortune, and his reason. He passes into sleep in despair at the end of the day and is visited by a celestial spirit in a dream. The spirit says that things could be worse; in fact, the spirit states that there are a hundred thousand million worlds, and in each world there are degrees of philosophy and enjoyment, but each world has less than the next. There is a world of perfect philosophy and enjoyment somewhere, the spirit implies: "There is a world indeed where all [perfection] is possible; but, in the hundred thousand millions of worlds dispersed over the regions of space, everything goes on by degrees. There is less philosophy, and less enjoyment on the second than in the first, less in the third than in the second, and so forth till the last in the scale, where all are completely fools." Memnon is afraid that the Earth must be on the low end of the list and replies, "our little terraqueous globe here is the madhouse of those hundred thousand millions of worlds," a statement which predates and echoes Carl Sagan's sentiments on our pale blue dot.
Voltaire seems to have identified something fundamental about the existence of other planets: their presence is not enough. Most rocky planets we find lack any atmosphere we would find acceptable, and there is no reason to think that other planets have atmospheres humans could survive in; free oxygen is a non-equilibrium state, constantly consumed by geological and chemical activity at a planet's surface. Even if a planet has a suitable atmosphere, it may quickly fade unless the right conditions are maintained to replenish it. Our own oxygen-rich atmosphere was primordially generated by the collective action of cyanobacteria in the oceans some 2.4 billion years ago. This Great Oxygenation Event effectively poisoned previous incarnations of life on the planet, but gave rise to the rapidly respiring, and thinking, creatures we know today. Perhaps cyanobacteria could be seeded into the oceans of barren planets in the habitable zone, and in time the atmospheres of those planets would hold enough oxygen for us to breathe easily. Or maybe we could find a planet that already has oxygen in its atmosphere. If astronomers ever detect the spectral signature of oxygen on an exoplanet, we could optimistically infer that there are oxygen-producing plants, and maybe even creatures, on the planet. Such a detection would work much like the way we detect the spectral signatures of elements in distant stars, but because planets are so dim it may take a truly monumental telescope, perhaps one hundred meters in diameter, to achieve it. And even given an atmosphere, there remain issues of temperature, seasonal variation, geological activity, natural resources, weather, and perhaps even conflict with the natural residents of the planet. The moral implications of visiting a thriving planet invite comparisons to colonization.
We no longer have to pretend there are planets beyond Earth to visit. We live in a universe, or at least a galaxy, that has given us an embarrassment of riches in planetary diversity. But there is no guarantee that any of the planets beyond Earth are better than Earth.
Monday, February 24, 2014
Does Beer Cause Cancer?
by Carol A. Westbrook
I have been taken to task by several of my readers for promoting beer drinking. "How can you, a cancer doctor, advocate drinking beer," I was asked, "when it is KNOWN to cause cancer?" I realized that it was time to set the facts straight. Is moderate beer drinking good for your health, as I have always maintained, or does it cause cancer?
Recently there has been some discussion in the popular press about studies showing a possible link between alcohol and cancer. As a matter of fact, reports linking foods to cancer causation (or prevention) are relatively common. I generally ignore these press releases because they generate a lot of hype but are usually based on single studies that, on follow-up, turn out to have flaws or cannot be confirmed; the negative follow-up study rarely receives any publicity. Moreover, there are often other studies published at other times showing completely contradictory results; for example, that red wine both prevents and causes cancer.
Furthermore, there is a great deal of self-righteousness about certain foods, and this attitude can cloud objectivity and lead to bias in interpreting the results; often these feelings have strong political implications as well. Some politically charged dietary issues include: vegetarianism; genetically modified crops; artificial sweeteners; sugared soft drinks. Alcohol fits right into this category--remember, we are the country that adopted prohibition for 13 years. There is no doubt the United States has significant public health issues related to alcohol use, including alcohol-related auto accidents, underage drinking, and alcoholism, and the consequent problems of unemployment, cirrhosis of the liver, brain and neurologic problems, and fetal alcohol syndrome. Wouldn't it be great if the government could mandate a label on every beer can stating, "consumption of alcohol can cause cancer and should be avoided"? Wouldn't that be a wonderful "I told you so!" for the alcohol nay-sayers?
Before going further, I will acknowledge that there are alcohol-related cancers. As a specialist I am well aware that cancers of the head and neck area, the larynx (voice box) and the esophagus are frequently seen in heavy drinkers, almost always in association with cigarette smoking. Liver cancer is seen primarily in people with cirrhosis--also a result of heavy drinking. In both instances, the more alcohol that is consumed, the greater the risk of developing one of these cancers--and I have rarely seen these cancers in non-smokers or non-drinkers. But assuming that my readers are not alcoholics, the question that they are really asking is whether or not they are going to get cancer from low to moderate beer drinking.
So what, then, are the facts? Does beer cause cancer? This is a much more difficult question to answer than most people realize, and can easily be the subject of years of study for a PhD dissertation (and probably has been). Researchers will be quick to admit how difficult it is to do scientifically rigorous studies on the health effects of individual dietary components. You can't just take a group of thirty year-olds, split them into two groups, give beer to one group and make the other abstain, watch them for 20 years and see who gets more cancer. So we have to rely on population studies, estimating alcohol consumption based on purchasing statistics, self-reporting of drinking (which is often unreliable), surveys, and death certificates for cancer. Incidentally, beer is not considered separately from other alcoholic beverages in any of these studies.
For example, an interesting study by Holahan and colleagues, published in 2010 in the journal Alcoholism: Clinical and Experimental Research, followed 1,824 middle-aged men and women (ages 55–65) over 20 years and found that moderate drinkers lived longer than did both heavy drinkers and teetotalers. In particular, their data suggested that non-drinkers had a 50% higher death rate than moderate drinkers (1 - 2 drinks per day). Others have criticized this conclusion because the no-alcohol group included people who didn't drink because they were already at a higher risk of death for other reasons such as serious medical conditions, previous cancers, or they were former alcoholics on the wagon. The authors claimed that they controlled for these variables but that is almost impossible to do, and that is one of the reasons that it is difficult to get accurate data from this kind of study. So it may be hard to conclude that moderate drinking significantly increases your lifespan, but it certainly doesn't shorten it.
What about cancer? The publication that started the most recent hype about cancer and alcohol appeared in the April 2013 issue of The American Journal of Public Health, and was written by David Nelson MD, MPH and his colleagues. They combined information from others' publications with epidemiologic surveys to determine the number of cancer deaths attributable to alcohol, as well as the types of cancer that were associated. They found that about 3% of all cancer deaths in the US were related to alcohol consumption, with most of it seen in the head and neck, larynx and esophagus. There was still a slight increased risk at low alcohol use (greater than 0 but less than 1 1/2 drinks per day), which led them to conclude, "regular alcohol use at low consumption levels is also associated with increased cancer risk." I looked at their study, and couldn't argue with their conclusion, but I don't think the risk is significant enough to recommend becoming a teetotaler.
Neither does the US National Cancer Institute (NCI). Heavy drinking aside, the NCI does not recommend that people discontinue low or moderate drinking since it would have only a minimal impact on their chance of developing cancer. Some caution is indicated for specific cancers: There is a 1.5 times increased risk of breast cancer in women who drink more than 3 drinks per day compared to non-drinkers; similarly, the risk of colon cancer is 1.5 times increased in people who consume more than 3.5 drinks per day. Incidentally, 3.5 drinks per day is still well above the level that is considered "low to moderate" drinking, which is usually defined as no more than 1 drink per day for a woman, 2 per day for a man. That being said, lowering your alcohol consumption deserves some consideration if you are anxious to change your odds for these two specific cancers. Nonetheless, the risks from alcohol are still low when compared to the impact of other lifestyle factors. Addressing these factors will have a much greater impact than giving up that beer or wine with your dinner: don't smoke, lose weight if you are overweight; exercise; eat a high-fiber diet; increase your vegetable and fruit consumption, while limiting red meat; avoid processed food; follow-up on your doctor's cancer screening recommendations for colonoscopy, pap smears, mammography and prostate screening.
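It helps to remember that a relative risk only means something once it is applied to a baseline. A quick sketch of the arithmetic (the baseline figures here are illustrative assumptions, not numbers from the NCI):

```python
def absolute_lifetime_risk(baseline_risk, relative_risk):
    """Scale a baseline lifetime risk by a relative risk to get the
    absolute risk for the exposed group."""
    return baseline_risk * relative_risk

# Assumed baseline for illustration: if a woman's lifetime breast-cancer
# risk were about 12%, the 1.5x relative risk reported for drinking more
# than 3 drinks per day would raise it to about 18%:
heavy_drinker_risk = absolute_lifetime_risk(0.12, 1.5)     # ~0.18

# A small relative risk applied to the same baseline barely moves it:
light_drinker_risk = absolute_lifetime_risk(0.12, 1.05)    # ~0.126
```

The point is that a headline "1.5 times the risk" translates into a few percentage points of absolute risk, which is why lifestyle factors with larger baselines or bigger multipliers (smoking, obesity) dominate.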
Do the positive effects of drinking beer outweigh the negative effects? Moderate alcohol consumption has been reported to lower the risks of heart disease, stroke, hypertension and Type 2 diabetes; for men, it may lower the risk of kidney stones and of prostate cancer; may improve bone health; may prevent brain function decline. Alcohol consumption actually lowers the risk of kidney cancer and of lymphoma. Overall, in most studies, the positive effect was very small, but the beneficial effects of beer are only in moderate drinking, not for those who drink to excess. And of course, there are social and psychological benefits to sharing a beer with friends.
So, is beer drinking good for you? Or bad? Are you healthier if you drink, say, a beer or two per day, or are you worse off? My conclusion as a medical specialist is: it depends. On average, for the general population, drinking a little alcohol is better than abstaining completely. But on an individual basis, it depends on your current health conditions and your risk factors. Are you more likely to die of heart disease or of colon cancer? And if you want to cut down your risk of either condition you must be sure to avoid cigarettes, keep your weight down, exercise, eat a high-fiber diet that is low in red meat and processed foods, and increase your fruit and vegetable intake. The impact of alcohol consumption is likely to be small compared to these lifestyle changes.
What does the Beer Doctor do? As a cancer specialist, my lifestyle includes all of the above recommendations on exercise, weight and diet. I continue to enjoy my beer, but I keep my consumption within the low to moderate range, that is on average about 0.5 to 1 per day, and not every day. For me, the health benefits of drinking beer outweigh the negatives. To your health!
© 2014, Carol Westbrook. This article is from my forthcoming book, To Your Health! The opinions expressed here are my own, and do not reflect those of my employer, Geisinger Health Systems.
Monday, January 06, 2014
Synthetic Biology: Engineering Life To Examine It
by Jalees Rehman
Two scientific papers published in the journal Nature in the year 2000 marked the beginning of engineering biological circuits in cells. The paper "Construction of a genetic toggle switch in Escherichia coli" by Timothy Gardner, Charles Cantor and James Collins created a genetic toggle switch by introducing an artificial DNA plasmid into a bacterial cell. This DNA plasmid contained two promoters (DNA sequences which regulate the expression of genes) and two repressors (genes encoding proteins which suppress the expression of other genes) as well as a gene encoding green fluorescent protein, which served as a read-out for the system. The repressors used were sensitive either to selected chemicals or to temperature. In one of the experiments, the system was turned ON by adding the chemical IPTG (a modified sugar), and nearly all the cells became green fluorescent within five to six hours. Upon raising the temperature to activate the temperature-sensitive repressor, the cells began losing their green fluorescence within an hour and returned to the OFF state. Many labs had used chemical or temperature switches to turn on gene expression in the past, but this paper was the first to assemble multiple genes into a construct that allowed cells to be toggled back and forth between stable ON and OFF states.
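The logic of such a switch can be sketched with a simple dimensionless model in the spirit of the toggle-switch design (the equations and parameter values here are illustrative, not taken from the publication): each repressor inhibits the other's synthesis, and which one "wins" depends on the system's history.

```python
def simulate_toggle(u0, v0, alpha=10.0, beta=2.0, dt=0.01, steps=5000):
    """Euler-integrate a dimensionless two-repressor toggle model:
        du/dt = alpha / (1 + v**beta) - u
        dv/dt = alpha / (1 + u**beta) - v
    Each repressor (u, v) suppresses the synthesis of the other;
    first-order decay pulls both toward zero."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1 + v ** beta) - u
        dv = alpha / (1 + u ** beta) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# Two different starting points settle into two different stable states:
state_a = simulate_toggle(5.0, 0.0)   # ends with u high, v low ("ON")
state_b = simulate_toggle(0.0, 5.0)   # ends with u low, v high ("OFF")
```

Transient pulses of an inducer (lowering one repressor's effectiveness) correspond in this picture to pushing the state across the separatrix between the two basins, which is why the cells stay ON or OFF after the stimulus is removed.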
The same issue of Nature contained a second landmark paper describing the engineering of gene circuits. The researchers Michael Elowitz and Stanislas Leibler described the generation of an engineered gene oscillator in their article "A synthetic oscillatory network of transcriptional regulators". By introducing three repressor genes which constituted a negative feedback loop, along with a green fluorescent protein as a marker of the oscillation, the researchers created a molecular clock in bacteria with an oscillation period of roughly 150 minutes. The genes, and the proteins they encoded, were not part of any natural biological clock, and none of them would have oscillated had they been introduced into the bacteria on their own. The beauty of the design lay in the combination of three serially repressing genes; the periodicity of this engineered clock reflected the half-life of the protein encoded by each gene as well as the time it took for each protein to act on the subsequent member of the gene loop.
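A reduced, protein-only caricature of this ring of three repressors shows why an odd cycle of mutual repression oscillates. This is a toy model with illustrative parameters; the published model also tracks mRNA explicitly, and the real period depends on measured decay rates.

```python
def simulate_repressilator(alpha=10.0, n=4.0, dt=0.01, steps=20000):
    """Euler-integrate a protein-only caricature of a three-gene
    repression ring, where gene i is repressed by gene i-1:
        dp_i/dt = alpha / (1 + p_{i-1}**n) - p_i
    Returns the time series of the first protein's level."""
    p = [1.0, 0.5, 0.0]          # asymmetric start kicks off the oscillation
    trace = []
    for _ in range(steps):
        dp = [alpha / (1 + p[(i - 1) % 3] ** n) - p[i] for i in range(3)]
        p = [p[i] + dt * dp[i] for i in range(3)]
        trace.append(p[0])
    return trace

# The trace rises and falls periodically instead of settling to one level:
# each protein peaks in turn, about a third of a cycle apart.
trace = simulate_repressilator()
```

With sufficiently steep (cooperative) repression, the symmetric fixed point where all three proteins are equal becomes unstable, and the system falls into a limit cycle; with shallow repression the same circuit just settles down, which is why the choice of repressors and decay rates in the real construct mattered so much.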
Both papers described the introduction of plasmids encoding for multiple genes into bacteria but this itself was not novel. In fact, this has been a routine practice since the 1970s for many molecular biology laboratories. The panache of the work lay in the construction of functional biological modules consisting of multiple genes which interacted with each other in a controlled and predictable manner. Since the publication of these two articles, hundreds of scientific papers have been published which describe even more intricate engineered gene circuits. These newer studies take advantage of the large number of molecular tools that have become available to query the genome as well as newer DNA plasmids which encode for novel biosensors and regulators.
Synthetic biology is an area of science devoted to engineering novel biological circuits, devices, systems, genomes or even whole organisms. This rather broad description of what "synthetic biology" encompasses reflects the multidisciplinary nature of this field which integrates ideas derived from biology, engineering, chemistry and mathematical modeling as well as a vast arsenal of experimental tools developed in each of these disciplines. Specific examples of "synthetic biology" include the engineering of microbial organisms that are able to mass produce fuels or other valuable raw materials, synthesizing large chunks of DNA to replace whole chromosomes or even the complete genome in certain cells, assembling synthetic cells or introducing groups of genes into cells so that these genes can form functional circuits by interacting with each other. Synthesis in the context of synthetic biology can signify the engineering of artificial genes or biological systems that do not exist in nature (i.e. synthetic = artificial or unnatural), but synthesis can also stand for integration and composition, a meaning which is closer to the Greek origin of the word. It is this latter aspect of synthetic biology which makes it an attractive area for basic scientists who are trying to understand the complexity of biological organisms. Instead of the traditional molecular biology focus on studying just one single gene and its function, synthetic biology is engineering biological composites that consist of multiple genes and regulatory elements of each gene. This enables scientists to interrogate the interactions of these genes, their regulatory elements and the proteins encoded by the genes with each other. Synthesis serves as a path to analysis.
One goal of synthetic biologists is to create complex circuits in cells to facilitate biocomputing: building biological computers that are as powerful as, or even more powerful than, traditional computers. While engineered gene circuits and cells have some degree of memory and computing power, they are no match for the comparatively gigantic computing power of even small digital computers. Nevertheless, we have to keep in mind that the field is very young and advancing at a rapid pace.
One of the major recent advances in synthetic biology occurred in 2013 when an MIT research team led by Rahul Sarpeshkar and Timothy Lu created analog computing circuits in cells. Most synthetic biology groups that engineer gene circuits in cells to create biological computers have taken their cues from contemporary computer technology. Nearly all of the computers we use are digital computers, which process data using discrete values such as 0's and 1's. Analog data processing, on the other hand, uses a continuous range of values instead of 0's and 1's. Digital computers have supplanted analog computing in nearly all areas of life because they are easy to program, highly efficient, and able to process analog signals by converting them into digital data. Nature, on the other hand, processes data and information using both analog and digital approaches. Some biological states are indeed discrete, such as heart cells which are electrically depolarized and then repolarized at periodic intervals in order to keep the heart beating. Such discrete states of cells (polarized / depolarized) can be modeled using the ON and OFF states in the biological circuit described earlier. However, many biological processes, such as inflammation, occur on a continuous scale. Cells do not just exist in uninflamed and inflamed states; instead there is a continuum of inflammation from minimal inflammatory activation of cells to massive inflammation. Environmental signals that are critical for cell behavior, such as temperature, tension or shear stress, occur on a continuous scale, and there is little evidence to indicate that cells convert these analog signals into digital data.
Most of the attempts to create synthetic gene circuits and study information processing in cells have been based on a digital computing paradigm. Sarpeshkar and Lu instead wondered whether one could construct analog computation circuits and take advantage of the analog information processing systems that may be intrinsic to cells. The researchers created an analog synthetic gene circuit using only three proteins that regulate gene expression and the fluorescent protein mCherry as a read-out. This synthetic circuit was able to perform additions or ratiometric calculations in which the cumulative fluorescence of the mCherry was either the sum or the ratio of selected chemical input concentrations. Constructing a digital circuit with similar computational power would have required a much larger number of components.
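The flavor of this kind of analog computation can be conveyed with a toy model (entirely schematic, not the circuit from the paper): if each expression unit responds roughly logarithmically to its input over a wide range, then driving one reporter from two such units sums the log-domain signals, and a sum or difference of logarithms corresponds to a product or ratio of the inputs.

```python
import math

def expression_unit(inducer, k=1.0):
    """Toy wide-dynamic-range expression unit: its output is taken to be
    proportional to log(1 + inducer/k), a hypothetical log-linear
    response used here purely for illustration."""
    return math.log1p(inducer / k)

def analog_adder(input_a, input_b):
    """Two independent units driving the same fluorescent reporter:
    total fluorescence is the sum of the two log-domain signals,
    i.e. log(1+a) + log(1+b) = log((1+a)*(1+b))."""
    return expression_unit(input_a) + expression_unit(input_b)

def analog_ratio(input_a, input_b):
    """The difference of two log-domain signals approximates the log
    of a ratio of the inputs."""
    return expression_unit(input_a) - expression_unit(input_b)
```

A digital circuit computing the same functions would need many components to represent, add, and divide multi-bit numbers; here the chemistry does the arithmetic in a handful of parts, which is the economy the analog approach exploits.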
The design of analog gene circuits represents a major turning point in synthetic biology and will likely spark a wave of new research which combines analog and digital computing when trying to engineer biological computers. In our day-to-day lives, analog computers have become more-or-less obsolete. However, the recent call for unconventional computing research by the US Defense Advanced Research Projects Agency (DARPA) is seen by some as one indicator of a possible paradigm shift towards re-examining the value of analog computing. If other synthetic biology groups can replicate the work of Sarpeshkar and Lu and construct even more powerful analog or analog-digital hybrid circuits, then the renaissance of analog computing could be driven by biology. It is difficult to make any predictions regarding the construction of biological computing machines which rival or surpass the computing power of contemporary digital computers. What we can say is that synthetic biology is becoming one of the most exciting areas of research that will provide amazing insights into the complexity of biological systems and may provide a path to revolutionize biotechnology.
Daniel R, Rubens JR, Sarpeshkar R, & Lu TK (2013). Synthetic analog computation in living cells. Nature, 497(7451), 619-623. PMID: 23676681
Monday, December 30, 2013
My New Year's Resolution: Getting to Know my Genome Sequence
by Carol A. Westbrook
On November 12, 2013, I placed a package containing a small sample of my blood into a UPS drop box. It was a fait accompli: I was going to get my genome sequenced! I was thrilled!
No doubt you are wondering why I wanted to do this. The short answer -- because I can.
When I started my research career in the early 1980's, scientists such as myself understood how valuable the human DNA sequence would be to medical research, but it seemed an unattainable dream. Yet in 1988 the Human Genome Program was begun, proposing to obtain this sequence within 20 years. I was hooked. I was active in the Program, on advisory panels, on grant reviews, and in my own research, mapping cancer genes. Obtaining DNA sequence was painstakingly difficult, while interpreting and searching the resulting sequence was almost beyond the capability of the computers of the time. Nonetheless, in 2003, a composite DNA sequence of the human genome was completed, 5 years ahead of schedule. Shortly thereafter, two of the leading genome researchers, J. Craig Venter and James Watson, volunteered to have their own genomes sequenced in their research labs, and Steve Jobs purportedly had his sequenced for $100,000.
I never imagined that in 2013, only 10 years later, sequencing and computational technology would improve so much so that an individual's genome could be sequenced quickly and (relatively) affordably. I could have my own genome sequenced! For a genomic scientist like myself, this was the equivalent of going to the moon.
I found a company, Illumina, which offered whole genome sequencing for medical diagnosis. I wrote to Illumina, "I have had over 25 years of experience in the Human Genome Program, and at this time would like to truly explore what I contributed to, these many years. I think the time is right to do this. I am able to interpret the results based on my previous experience in this field, and am comfortable with any results that might be found. So is my family. Realistically, I am 63 years old and healthy, so my risk of discovering a dangerous genetic condition is minimal."
Illumina invited me to participate in their "Understand Your Genome Program," in which I and about 50 other "sequencees" would have our DNA sequenced and attend a daylong seminar on the interpretation and significance of our individual results. We would receive our personal sequence on an iPad at the seminar. This program is a combination of education, publicity, and "getting the message out," and the sequencing is offered at half the commercial cost--and within my budget. So I submitted my credit card info and sent in my sample on November 12, 2013, 10 years and 7 months after the completion of the first human genome was announced.
I hadn't really thought much about the implications of knowing my personal genome sequence until that morning, when I filled out the required paperwork to accompany my sample. A doctor's signature was required to order the test -- no problem, I'm an MD -- and there was an optional signature for genetic counseling -- I signed that, too, since I have clinical experience in that area. Next, my personal medical history: a checklist of common conditions that might have a genetic link (e.g. asthma, blood clots), and whether or not I was adopted. That was easy, I'm pretty healthy and I'm not adopted.
The family history took longer because my father and mother came from large families, 12 and 5 siblings, respectively, and I have 3 sibs of my own. Heart disease, high cholesterol and strokes run rampant in my dad's family. But I had never really registered that there was cancer on my mother's side, and that I, too, might carry a predisposition. And Mom did develop Parkinson's disease, and eventually non-Alzheimer's dementia. Hmm. That was something to think about. Did I want to know?
Finally, the informed consent. I signed a statement agreeing to go ahead with the test, and acknowledging that I understand the implications and/or will discuss them with my doctor. I agreed to let them keep my leftover specimen for research. I was also asked to indicate whether there were any categories of genetic diseases that I might find that I did not want to know about, such as those that can't be treated, or progressive neurologic conditions like Huntington's disease, or genes that put me at risk for cancer. I decided that I wanted everything revealed. I signed the forms and sent in the sample.
The next step was to talk to my children and siblings (2 brothers and a sister) about my pending genome sequence, reminding them that they each have a 50-50 chance of carrying any gene that I have. I offered to let them know my results, or to opt out of some or all of the genes, as I had been asked to do. Everyone was okay with this because they knew I was healthy, I was past the age for many genetic conditions, and I didn't have cancer. My son jokingly said "sure, but don't tell me if I have Huntington's disease."
Although I'm certain I don't have Huntington's disease, I might still carry a gene that puts me at risk of a disease, such as cancer or diabetes, yet never develop the disease myself. Geneticists call this "low penetrance." My children may get the gene and the disease. I might also carry a single gene for a recessive condition, such as hemochromatosis, which causes disease only if you inherit two abnormal copies. Who knows what is in the half of my parents' genomes that I didn't inherit but my siblings may have? Or in my children's father's DNA? Finally, there are X-linked genes, which women can carry and pass on without developing the disease themselves: sons who inherit such a gene develop the disease, while daughters become carriers who may in turn pass it to their own sons. Some examples are color blindness and hemophilia. Clearly there are results of my genome sequence that may impact my relatives. I decided to bring my daughter along with me to the March reveal, and to bring her iPad along.
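The arithmetic behind these family odds is plain Mendelian probability. A sketch (using the generic textbook probabilities, not any actual test result):

```python
from fractions import Fraction

half = Fraction(1, 2)

# Autosomal variant carried on one of my two chromosome copies:
# each child independently has a 1-in-2 chance of inheriting it.
p_child_has_variant = half

# Chance that at least one of, say, three children inherits it:
p_at_least_one_of_three = 1 - (1 - half) ** 3        # 7/8

# Recessive condition (e.g. hemochromatosis): disease requires two
# abnormal copies. If both parents are carriers, each child inherits
# both with probability 1/2 * 1/2:
p_affected_child = half * half                       # 1/4

# X-linked variant in a carrier mother: each child inherits it with
# probability 1/2; a son who inherits it is affected, while a daughter
# who inherits it becomes a carrier.
p_son_affected = half
p_daughter_carrier = half
```

These are per-child probabilities under simple single-gene inheritance; low penetrance and multi-gene conditions, as the text notes, make the real picture murkier.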
At this time, my DNA is going through the sequencer and the results are being uploaded to the iPad. I am curious to know what I will find. There may be some data on ethnic origins, which may be helpful in understanding my heritage, as my father's father was illegitimately conceived shortly before his mother emigrated from Poland. Was my great grandfather Polish, or will we find genes from some far-away place? Of course, it will also be fun to know what percent of my genome is Neanderthal, too. And from the health perspective, I will be screened for the "known" genetic conditions, such as those revealed by the less expensive, more limited chip-based DNA tests, such as 23andMe. This is valuable information, particularly as it might identify risk factors (cardiac, cancer, diabetes, etc.) or unexpected interactions with medications.
Learning about the known genes is useful but, in most cases, it is not going to make a major impact on a person's health. And, considering the expense, it is certainly not going to justify implementing whole genome sequencing as a standard part of our medical care -- at least not for now. But most of the current discussion on the benefits of genome sequencing has been one-dimensional, focusing on the significance of identifying these known genes and risk factors. Yet what excites me about this project is not the known genes, but the incredible potential of a person's DNA sequence having a major impact on his health and longevity in the future, in ways that we cannot even predict.
Consider, for example, common conditions that clearly have a genetic component but can't be pinned to a single gene. These include asthma, rheumatoid arthritis, lupus, cancer, hypertension, diabetes, obesity, metabolic syndrome, kidney failure, anemia, depression, schizophrenia, heart attacks, osteoarthritis and many others. In fact, conditions like these probably account for the majority of all doctors' visits (excluding infection and accidents). These are the "unknown unknowns," where combinations of genes and environmental factors come into play. Perhaps we will be able to use our genome sequences to prevent these diseases by targeting the responsible mutations, or lessen the severity of the condition, or modify outside factors that impact them.
-What if you knew that you would get diabetes if you were overweight, but you also knew that you could prevent this obesity by modifying a gene in your liver?
-What if you knew that your daughter had the potential to be a math genius? Would you help her develop her potential?
-What if your doctor could treat your hypertension with an individualized combination of drugs that had no side effects for you?
-What if you learned you have a risk of schizophrenia, but could prevent it by a treatment designed to target the DNA sequence and stop its progression?
-What if you knew what biochemical subtype of depression you had, so you could treat it with the correct drug?
Impossible dreams? Sure, but so was obtaining the complete human genome sequence in 1988. There is no question that genome research is moving so rapidly that we don't even have a vision of where it will be in 10 years. But I'm confident that the medical implications will strengthen as research continues and more complete genomes are compiled. I am pleased to be an early contributor. I will have my iPad at the ready when some of these new discoveries are made.
Monday, November 25, 2013
Through A Printer Darkly
by James McGirk
James McGirk works as a literary journalist and is a contributing analyst to an online think tank. The following is an imagined itinerary for a tourist vacation twenty years in the future.
Seven days in the PRINTERZONE
June 20, 2033-June 28, 2033
A quick suborbital hop to Iceland courtesy of Virgin Galactic and then it’s all aboard the ScholarShip, a luxurious three-mast schooner powered by that most ecologically palatable of sources: the wind.
Weather-permitting you and twenty of your fellow alumni will set sail for the Printerzone. (The North and Norwegian Seas can be temperamental: in the event of heavy weather we revert to backup biodiesel power.) Our destination has been recognized by UNESCO as a World Heritage Site: it is both a glimpse at what our future might become should government regulation of printers come to an end, and a fantasy of life free from credit and ubiquitous surveillance. Together we’ll spend a week immersed in this unique community, on board an oilrig in international waters, using three-dimensional additive printing to meet our every need.
Joining us on this adventure will be Prof. Orianna Braum, an associate professor of Maker Culture at Stanford University; Alan Reasor, a forty-year veteran of the additive printing industry; and a young man who prefers to refer to himself by displaying a small silver plastic snowflake in his palm.
ITINERARY - DAY ONE
A colorful day spent traversing the Norwegian and North Seas… sublime marine grays and blues stirred by the bracing sea breeze. Keep your eyes peeled for pods of chirping Minke whales! Many are 100 percent natural.
Breakfast and lunch will be served onboard The ScholarShip by our chef Matthias Spork. Selections include: printed cereals and pastas, catch-of-the-day and a refreshing sorbet spatter-printed by his wife, renowned pastry chef Rebecca Spork.
Prof. Braum and Mr. Reasor will debate: Has Three-Dimensional Printing failed its Promise? Reasor will argue that in most instances economies of scale and the cost of raw materials make conventional manufacturing a more cost-effective solution than 3D printing. Prof. Braum will counter, describing industries that have been radically reshaped by printing—prosthetics and dentistry, bespoke suiting and fashion, at-home robotics and auto-repair—and suggest instead that government safety regulation and restrictive intellectual property licenses have done more to stifle innovation than costs. There will be time for questions afterwards. And then a brief demonstration of piezoelectric substrates: printed materials that respond to the human touch.
Following a hearty and delicious dinner prepared by the Sporks, we invite you for a hot toddy and outdoor stargazing with our First Mate. The Arctic winds can be fierce at night, so you have the option of lighting the hearth in your cabin and viewing a very special Skype broadcast—The Pink Printer’s Naughty Apprentice—which outlines in a most whimsical and titillating way some of the more adult uses of the three-dimensional printer.
(Please note that cabins containing occupants below the age of consent in their country of residence will not receive this broadcast.)
Drop Anchor in the Printerzone
After a hot breakfast ladled out by the Sporks, join your shipmates on deck for an approach unlike anywhere else on earth: a faint glimmer on the horizon gathers in size and sprouts shapes and colors, until the magnificent muddle that is the Printerzone fills our entire field of vision. Crumpled wrapping paper on stilts, a wag once said. Squint at this glorious mass, and beneath the colorful sprays of plastic and the pieces of flotsam and jetsam the residents have creatively incorporated into their homes, you just might make out the original concrete and steel beneath.
Your daily allowance of printer substrate will be issued to you in bulk so that you may trade it for trinkets. A rope ladder will be lowered from above. One at a time you will be hoisted to the Zone. There, our guide, the man who identifies himself with the silver snowflake (henceforth referred to as [*]) shall greet us. He is an interesting specimen. Ask of him what you will. The tour begins at The Workshop, a vast, enclosed “maker space” where P’Zoners (as they call themselves) exchange goods, plans for new designs and information. Barter your substrate for unique souvenirs. Take a class in creation. Then enjoy a sandwich lunch carefully selected by the Sporks. Food may also be bartered with the natives.
After lunch you may explore the Zone at your leisure or enjoy another spirited debate between Reasor and Braum. Printerzone: Model City or Goofy Aberration? Dinner shall be served in the Workshop, which at night transforms into The Wild Rumpus. Guests in peak physical condition may want to join the carousing. (N.B. Beware of custom-printed entheogens and other libations, which, while they may be legal in the Printerzone, are not necessarily safe.)
Fresh croissants and a mug of coffee are the perfect way to begin a crisp Printerzone morning! Daring types may wish to join [*], don a protective suit produced by the city’s custom printers, and sink beneath the waves for a romp on the seafloor and a look at how the city has evolved below the waterline. Printerzone’s silver suits are said to work as well in orbit as they do submerged beneath the waves. You may examine copies of a Vogue pictorial featuring the suits.
For those who prefer a more relaxed pace in the morning, there will be a bicycle tour of the Zone’s famous hydroponic orchid nursery, its orphanage and its medical clinics (notable for, among other things, performing the first artificial face transplant). There will also be a chance to examine the city’s recycling system up close as it transforms unwanted printer output and even sewage and brine into the raw materials for printing. No stinky smells, we promise!
(All printed foods served aboard the ScholarShip are guaranteed to be free from precursor materials that were made from human waste or potential allergens.)
For lunch, if you’re ready for it, be prepared to break some taboos. Guided by [*], the Sporks, rabbis, halal butchers, vegan chefs, and a number of other experts, you will be given a unique opportunity to eat—among otherwise offensive offerings—a perfect facsimile of human flesh, pork, dolphin steak, non-toxic fugu flesh, endangered sea turtle, and even taste the world’s most potent toxins in perfect moral comfort and safety. Less adventurous offerings will also be available for the squeamish.
During lunch, Braum and Reasor will sound off on the subject of: Whether Full Employment is Possible in a post-3DP World. Braum says printing in three dimensions will kill off the middlemen who camp out in many employment categories (the warehouse managers, the marketing men…); Reasor agrees, but thinks the unfettered labor will be absorbed by innovative new industries. There will be time for questions. Coffee too.
After lunch there will be a demonstration of one of the most potent technologies to emerge from three-dimensional printing: the cheap invisibility cloak. Then you will be joined by some of the city’s most outrageous tailors, haberdashers, wig makers, and costume outfitters. Design a more colorful, eccentric version of yourself and then top off your creation with a freshly printed invisibility cloak, so that you might attend the night’s festivities in absolute comfort. You need only reveal yourself to those you want to. Buffet dinner. Brandy against the chill.
(N.B. Printerzone security forces are equipped with night-vision goggles, so rest assured that you will be safe, but don’t get any antisocial ideas. There are some rules to abide by!)
Pondering the Printerzone
On our fourth day, after a healthy, all-natural breakfast lovingly prepared by the Sporks on the ScholarShip, we delve into the Printerzone’s more pensive side. [*] will lead us on a tour of the Million Memorials, the serene necropolis where the city’s mourners print chalky likenesses of friends and family they’ve lost, and of missing objects and abstractions too. A quiet, haunting place. After a pleasing serenade by the P’Zone wailers, we picnic among the monuments, hear [*]’s own story of loss—his young bride, who slipped over the railing during a photo session and drowned in the ocean—and gaze at the spun plastic residue of a brief but happy relationship. Afterwards, stroll back to The Workshop for a chance to barter for more amusements.
The subject of the day’s lecture (delivered, of course, by Braum and Reasor) will be: Three-Dimensional Printing in the Developing World. Printing won’t be the panacea we imagine, because the developing world lacks the infrastructure to sustain it; but surely the availability of items that would otherwise be unobtainable is valuable; then again, wouldn’t printing eradicate the local cottage industries, snuffing out any printing-related development? Drink during the lecture if you like. Gaze longingly at potential mates if you wish to. This is a pleasure cruise.
After a brief question and answer session, a fittingly austere supper will be served, and [*] will introduce us to a non-profit initiative sponsored by the Printerzone: a crisis response team that will race to trouble spots and, without the needless hassle of lines of communication and supply, be able to provide surgical equipment, medicines and shelter at a fraction of the cost… cost? Yes, even this barter-driven economy is soliciting funds. Contribute what you will. The city’s orphans hand out orchids.
Snack before the Wild Rumpus. Serenade. Custom sex surrogates printed for an additional fee. (Please: no printing of lecturers, crewmembers, or fellow travelers without their express permission, and no skin prints using DNA within a 15 percent match of your own.)
At home in the Printerzone
Many travelers wake on their fifth day beside a grim memory, manifest in the form of slightly abused piezoelectric plastic. You may find it cathartic to batter your unwanted surrogate to pieces, or, if you are the showy sort, enter the surrogate into the ring for gladiatorial combat. The festivities begin with a squabble between Braum and Reasor’s creations (one wonders at the tension between them), followed by a battle royal, and a moving speech by [*] about whether or not a surrogate has a soul. Each participant will be allowed to download a copy of Do Androids Dream of Electric Sheep? for later review.
By now you’ve spent nearly a week looking up at the frills wrapped around the upper decks of the rig. Perhaps you’ve wondered what the lives of the residents are like beyond the Wild Rumpus or the Workshop floor. Today you’ll enjoy an intimate glance at their living quarters.
Some might find this disturbing. There are children here, you might say, how could one live like this? But they’re hardly cut off; well, maybe they are cut off from nature and history and dry land, but not the ’net. See the data goggles they wear? The tykes and pubers who strut about the Zone have come to see the boundary between what is virtual and what is not as a thing much more permeable than you or I ever will.
Here the Internet is inside out. People print virtual things. Shudder at the home robots with their suction cup attachments. Are they vacuum cleaners or sexual abominations or both? Much of the home décor won’t make sense unless you’re jacked into the ’net. Too prone to data dropsy to peer through a lens? Ask yourself why this trip appealed to you in the first place, but fear not—there are gentle entheogens that replicate the experience of data being blazed onto your eyeballs.
Nighttime. Rumpus again. Dance and flail until you feel yourself dissolve into the communal flesh. The Sporks have taken the day off. Truth be told they’re disgusted with three-dimensional printing and what it means for their profession. Can you blame them? Who cares, you aren’t hungry. From your perch up high, the Zone looks terraced and circular, like a medieval etching of the Inferno. The Rumpus looks like the writhing of the damned. You think you see Braum and Reasor embrace. [*] sits beside you and tells you his given name was Virgil. Has he been drugging you?
Beyond the Printerzone
Someone wakes you up by firing a pistol in the air. That’s right, there are a lot of weapons here. This is a polite society. Ugh, the sunlight streaming into your eyes is sheer agony. Your neurons are crying out. Caffeine! Dopamine! Serotonin! You wobble out on deck. The Sporks are back. Thank God the Sporks are back. They pour you a mug of coffee. They cut you a grapefruit. Crackling bacon, the smell of bread baking.
[*] won’t look you in the eye, the sweaty creep.
Above you the colorful plastic printed houses look chintzy in the light. They hoist you up. Peek below. The ScholarShip is an oasis of sanity and earthtones. Everything else is Technicolor Burp. Can you really face another day of this? The medic gives you something for your throbbing head. A party assembles. Wrapped sandwiches for lunch and shot-glasses of Astronaut Ice Cream. A hardhat. That silver protective garb you’ll have to peel off afterwards. The place stinks of kerosene (that’s jet fuel, someone will say). There are men from NASA, and men from the Air Force, and men with helmets that look like they’re made entirely from mirrorshades. Cyclopses. You want to leave. There’s a faint but unmistakable rumble.
Reasor and Braum waddle to the front of your party. Another debate: Space Exploration is Three-Dimensional Printing’s Killer App. This time they both agree. Reasor thinks the way to reach for the stars is to print a massive cable and haul ourselves up. Braum says that’s great, but what’s better is that you can go anywhere in space and print anything you could possibly need. You can beam plans to the spaceship, plans for things that weren’t invented when the ship took off. Applause. Time for questions. Cups of coffee. Cookies.
Wonder: what if printers were used to print printers, ad infinitum?
Clutch your mug. Look around. The top level is cold and metallic. Limp suits hang waiting; rows of silver helmets that look like Belgian glass globes wink in the setting sun. Rockets (fins, nose caps, nozzles, streamlined bellies) lie being assembled from spools of plastic. Dinner is splendid and sober. You remember little of it. There were candles. An ant walked across the table.
Tonight there is no Wild Rumpus. You sleep on the rig, beneath the stars but protected by an infinitesimal layer of plastic. A storm blows in. Electricity rips the Arctic sky. Rain pounds plastic but never touches you. You are woken by a helmeted Cyclops: “Some visitors decide never to leave,” he says, extending a gloved hand. It’s silver. “We’ll nourish you.” Behind the smooth surface you can just make out the blurry face of [*].
Wake to the smell of the Sporks’ cooking. A printed snowflake has been placed beside you. Visitors may opt to extend their stay. Or leave and never, ever come back.
Monday, November 18, 2013
Homo Erectus, or I Married a Ham
by Carol A. Westbrook
My husband loves big erections. Don't get me wrong, I'm not speaking here about Viagra, I'm talking about tall towers made of metal, long wires strung high in the sky, and tall antennas protruding from car roofs. He loves anything that broadcasts or receives those elusive radio waves, the bigger the better. That is because he is a ham, also known as an amateur radio enthusiast, and all hams love antennas.
Amateur radio has been around since the early 1900s, shortly after Marconi's first transatlantic wireless transmission in 1901. Initially, radio amateurs communicated using Morse code, as did commercial radiotelegraphy, but voice transmission quickly gained in popularity. In order to broadcast on the ham radio frequencies, hams must obtain an amateur radio license from the FCC, along with a unique call sign, their ham "name." Proficiency in Morse code was long required in order to obtain an amateur radio license, but this requirement was finally dropped (from the international rules in 2003, and from the FCC's in 2007), which opened up the field to many more interested radio amateurs, my husband being one of them. As a result, the hobby is becoming popular again. There are local clubs to join, as well as national get-togethers called "hamfests" where there are lectures, demonstrations, equipment swap-meets, and licensing exams.
What do hams do? They communicate by radio. They use everything from a battery-powered hand-held transmitter to a massive collection of specialized radio equipment located in a corner of their home or garage, which they call their "ham shack." (See picture of my husband's ham shack, above, in his library). They talk to other ham radio operators, and participate in conversations that may be local or span the globe, depending on the radio wavelength, the power of their transmitter, and their antenna. And they erect large antennas, perhaps on an outside tower or the roof of their home.
Like Marconi, hams learn early on that it's relatively easy to send out a radio signal, but the distance it travels depends as much on the size and configuration of the antenna as it does on the signal strength. There is an art to constructing an antenna, and hams spend a great deal of effort on it. That is why hams are fascinated by antennas. They are the quintessential "homo erectus."
My husband's fascination was fueled by his boyhood days. In the 1950's he felt isolated from the outside world because his family's radio and TV could only receive a few stations, living as they did in a valley surrounded by the Pocono Mountains. He learned that he could receive more stations by stringing long wires throughout the house, or on the roof -- creating his own makeshift antennas. This led to an engineering degree, an interest in telecommunications, and a ham radio license.
Our houses are festooned with antennas. We have long wires strung from roof to garage, a small tower on the hillside, four large parabolic dishes, from 6 to 11 feet in diameter, that receive signals from transmitting satellites... but that's another story. We even have a stealth antenna in our garden which, to the casual observer, appears to be just another garden ornament, nestled among the roses. (See picture) Unlike other "ham widows," I don't mind these antennas -- they are certainly conversation pieces. I do not have a ham license -- I didn't pass the exam, but then again I didn't study for it. But I often go along with my husband to hamfests, including the famous Dayton Hamvention, which takes place every May.
What is so appealing about ham radio? Why spend your time and money to buy archaic equipment and erect antennas and mess up your house -- when you can just call on your cell or Skype your friend? The answer is simple -- because you can. As a hobbyist, you cannot easily make a microchip, or build a cell phone, or create your own internet, but you can assemble your own equipment and broadcast your own voice, around the world. Just like Marconi! What a high! What a sense of empowerment! And ham radio is a great hobby for youngsters who want to learn about the electrical and mechanical world, and enjoy the challenge of "getting out of the valley" using their own ingenuity and design. If you would like to learn more, contact the national association for amateur radio, the American Radio Relay League, to learn how to get involved, or visit their headquarters and museum at 225 Main Street, Newington, CT 06111-1494 USA. You might get hooked, too.
Monday, October 14, 2013
The Uses and Disadvantages of History for Ecological Restoration
Context: One of the newer biological conservation strategies, ecological restoration, attempts to reverse the degradation of lands set aside for conservation purposes by reinstating, as closely as possible, the species and environmental conditions that existed before recent and large-scale disturbances by human activities. A newly emerging framework within restoration ecology - the novel ecosystem paradigm - points out that with global change we are moving into an era for which there is no historical analogue. As a consequence, land must be managed without excessive regard for the past, which can no longer serve as our guide. This has generated a lot of controversy within the field. I was asked by Irish journalist Paddy Woodworth to speak on a panel on “The historical reference system: critical appraisal of a cornerstone concept in restoration ecology” at a conference of the Society for Ecological Restoration held in Madison, October 6-11, 2013. In recent articles and in his new book “Our Once and Future Planet: Restoring the World in the Climate Change Century,” Woodworth had been critical of the novel ecosystem paradigm, wondering whether it undermines the case for restoration. I had not realized how controversial the topic had become. Tensions at the conference were running high, and the room in which this panel convened was over capacity, with dozens turned away. What follows is the outline of my remarks at this session.
At first glance the work of Friedrich Nietzsche (1844–1900), the German philosopher, might not seem especially helpful for restoration ecologists or indeed for anyone contemplating our relationship with the natural world. After all, his work supposedly challenges the foundations of Christianity and traditional morality. Nietzsche’s famous locutions concerning the “death of God” and his extensive discussions of nihilism should, however, be seen as his diagnosis rather than his cure. For Nietzsche our real cultural task is to overcome the annihilation of traditional morality, replacing it with something more life-affirming. The failure of our traditional precepts of value stems from the fact that these express what Nietzsche calls the ascetic ideal. This ideal measures the appropriateness of human actions against edicts coming from beyond our natural and earth-bound life. The highest human values, as we traditionally assess them, come from a denial of our natural selves. Nature, in turn, is regarded as having no intrinsic value.
Thus Nietzsche, even when he wrote in areas seemingly distant from traditional environmental concerns, has useful things to say to us environmentalists. At times, in fact, his aphorisms are those of a poetic naturalist. In The Wanderer and His Shadow (1880, collected in Human, All Too Human) he wrote: “One has still to be as close to flowers, the grass and the butterflies as is a child, who is not so very much bigger than they are. We adults, on the other hand, have grown up high above them and have to condescend to them; I believe the grass hates us when we confess our love for it.” This is not, of course, to claim that Nietzsche is a traditional naturalist. His concerns are primarily about the thriving of human life, though in this he seems less like a traditional wilderness defender and closer to a contemporary sustainability advocate who seeks to locate a promising future for humans while simultaneously solving environmental problems.
A central device in Nietzsche’s work is a type of thought experiment about eternal recurrence of the same: the thought of a pure and perpetual restoration. An early use of the thought is in The Gay Science (1882). There he wrote: “This life, as you now live it and have lived it you will have to live once more and innumerable times again; and there will be nothing new in it, but every pain and every joy and every thought and sigh and everything small or great in your life must return to you, all in the same succession and sequence—even this spider and this moonlight between the trees and even this moment and I myself.” There are those — do you count yourself among them? — who might welcome this. For many of us, however, the prospect of the same sequence playing over and over again would crush us.
In some ways eternal return asks us how much history we can tolerate. In what circumstances does embracing the past testify to our strength: the ways we are disposed to ourselves and to life? And if we cannot take on the entire weight of history, how much of it are we prepared to take on: a little, a lot? The question of what to do with history is considered by Nietzsche in an 1874 essay entitled On the Uses and Disadvantages of History for Life: the essay from which I take my title. In it Nietzsche decries a style of knowledge acquisition for the sake of knowledge alone. This desiccated strategy ends up sapping our vital impulses. But it doesn’t have to be this way. Nietzsche, memorably, wrote that history can be related to the life of a person in three ways: “it pertains to him as a being who acts and strives, as a being who preserves and reveres, as a being who suffers and seeks deliverance.” These are Nietzsche’s “three species” of history: the monumental, the antiquarian and the critical species.
Restoration is always a game that we play with time. Ecology has a history of being overly confident about something genuinely perplexing to other disciplines, namely time. There is a long-standing suspicion among philosophers that time, as such, is meaningless. The British philosopher John McTaggart (1866–1925) famously pronounced the unreality of time. The argument, briefly, is that since every event is both past and future, there can be no coherent ordering of events. The observation that an event is not simultaneously past and future itself relies on the ordering that it is trying to explain, creating a vicious circle. Restorationists, however, have a refreshing lack of interest in abstractions such as these. We are concerned, though, with the degree to which we should incorporate the past into our plans for the future — this is the essence of debates about the use of historical reference systems.
The connection between restoration and history is obviously the case for classical restoration, defined by the SER International Primer on Ecological Restoration as “the process of assisting the recovery of an ecosystem that has been degraded, damaged, or destroyed.” All those “re” and “de” words etymologically reveal their indebtedness to the past. The prefix “re”, for instance, comes from the Latin meaning ‘back’ or ‘backwards’. Ecological restorationists’ concern for the past is not, of course, necessarily about the past for its own sake, but on behalf of a suite of reasons connected with our direct human needs as well as with discharging our ethical obligations to the biosphere. As Dave Egan and Evelyn A. Howell phrased it in The Historical Ecology Handbook: A Restorationist's Guide to Reference Ecosystems (2001): “A fundamental aspect of ecosystem restoration is learning how to rediscover the past and bring it forward into the present – to determine what needs to be restored, why it was lost, and how to make it live again.” In William Jordan III’s strict definition of “ecocentric restoration” — “restoration focused on the literal re-creation of previously existing ecosystem, including not just some but all its parts and processes” — this seemingly impossible grappling with the past generates a broad range of values, some of which we will never get by ignoring the past.
In Making Nature Whole: A History of Ecological Restoration (2011) Jordan wrote: “The motives behind this new and some ways odd enterprise [of ecocentric restoration] were complicated: a mixture of curiosity, scientific, historic, and aesthetic interest, nostalgia, and respect for the old ecosystems, together with the idea that the old ecosystems are ecologically privileged assemblages of organisms, endowed with distinctive qualities of stability, beauty, and self organizing capacity, and so might be useful as models for human habitat.” Jordan’s work invites us to deal with the full blast of history, to endure it for the sake of the “classic ecosystem” which otherwise won’t survive, and by enduring to understand better our current relationship with the rest of the natural world. In Jordan’s work, failure is an option — sometimes indeed, failure may be the very point.
Let us engage in a little Nietzschean thought experiment of our own. Suppose an ecological manager from today were transported to the future and shown three sites: one minimally influenced by human activity (assuming that such a thing exists), one classically restored, and one that had been classified at the time of the manager’s departure as a novel ecosystem. The manager would not be able to distinguish one category of site from the others with certainty based solely upon an inspection of their ecological properties.
Contemporary ecologists have long since abandoned any expectation that natural systems, even those uninfluenced by human activity, are static. In the absence of human intervention, ecosystems will change, according to some accounts at least in episodic ways, as one ephemerally stable condition gives way to the next. Each stage will be characterized by species combinations that are largely historically unprecedented, as paleoecologists have documented for systems since the Quaternary and even before. Attempts, therefore, to predict the future of “natural” communities are prone to error. The future is indeterminate. In this, ecologists agree with an emerging philosophical consensus that the past is more real than the future, and that the present moment is the most real of all.
Nor will the future condition of a restored system be readily identifiable to today’s manager. If our time-traveler has with her the SER Primer on Restoration Ecology, an inspection of the expected properties listed there for identifying a restored system would confirm that this difficulty must be the case. Identifying which species of a future assemblage are indigenous — in restored systems the majority of species should be natives, according to our contemporary standards — becomes more difficult the further into the future we project. Over sufficiently long time scales, evolutionary forces come into more pronounced play. Additionally, it is conceivable that species not at present within the biogeographic range of a system may become so in due course without human intervention. Thus naturally altered vegetation patterns may not be easily distinguished from those caused by deliberate or inadvertent human introductions. Ultimately, the difficulty that our time-traveler will have in identifying today’s restoration efforts projected into the future arises because current restoration thinking acknowledges, as it should, that communities are dynamic, and sound contemporary management practice should not seek to curtail this dynamism.
A novel system is defined by Hobbs, Higgs and Hall in Novel Ecosystems: Intervening in the New Ecological World Order (2013) as “a system of abiotic, biotic and social components that, by virtue of human influence, differ from those that prevailed historically, having a tendency to self-organize and manifest novel qualities without intensive human management.” A novel system currently under management, no matter how minimal (the absence of intensive management being a defining aspect of novel systems), would likewise be difficult to distinguish from sites under restoration management or merely undergoing long-term successional change. All sites are subject to the vagaries of dynamic but unpredictable change. One manager’s failed restoration project, or natural successional system, is another’s future novel system.
At first glance one might be inclined to say that the novel ecosystem is an ahistorical concept: history in a deficient mode, history conspicuous by its conscious absence. But there is more history involved in the identification of a novel system than might at first be obvious. The identification of novelty depends upon historical analysis. A determination is made, by a historically informed person, that these systems are not classically restorable, that they have certain emergent properties of value, and that they are therefore worth studying, conserving, and managing, albeit non-intensively. Although, as we noted, novel ecosystems are defined by their lack of need for intensive management, nonetheless when a novel system is providing conservation services and generally functions in a manner that is pleasing, a management regime may be instituted. As soon as this management is enacted, the novel ecosystem is thereby governed by a historical reference system, even if the historical moment being referred to is but a few moments in the past.
The conclusion that these systems cannot be identified without context should not be interpreted nihilistically, nor should it demotivate us. The point I am making here is that history matters regardless of which paradigm of restoration prevails. The engagement with history can be done objectively but it generates important subjective values. That the novel ecosystem is enmeshed in history is acknowledged by its proponents. Richard Hobbs and colleagues wrote “there is a gravitational pull in our discussions towards historical conditions. In acknowledging novel ecosystems, it is plain that this gravitational pull is sometimes very weak; it remains however, if only as a reminder that the past matters and has mattered.”
It is turtles all the way down, and those turtles are history!
I want to give the last words to Nietzsche. In his view, stretched between vast forgetfulness and the stultifying horrors of forgetting nothing, there is a level of reckoning with history that may be helpful for life and for restoration. Though, as Nietzsche wrote, “Forgetting is essential to action of any kind”, nevertheless restoration — classic or associated with novel system management — is always about history, and must therefore reckon the costs of both deliberate but empowering forgetfulness and value-creating but expensive commemoration. Cows, Nietzsche wrote, “do not know the difference between yesterday and today …and thus [are] neither melancholy nor bored.” The downside, one supposes, is that neither do they know joy nor beauty, and when all is said and done, they are, after all, cattle! An oversaturation with history, on the other hand, can be inimical to life. Nietzsche lists many reasons why too much history can be dangerous (I mention only the one that most pertains to us): it implants a belief, harmful at any time, in the old age of mankind, the belief that one is a latecomer and epigone. The past swells behind us, and though it is tempting to think that everything was so much better last week, last year, in previous ages, nonetheless it would be deadening to think of ourselves as anything but a vernal species with a promising future ahead of us. In some cases we draw strength and value from total recall, but there are times we must know when to forget. Lord, grant us the wisdom to discern when it is best to remember and when best to forget.
Monday, September 30, 2013
Food and Power: An Interview with Rachel Laudan
All photos courtesy of Rachel Laudan
Rachel Laudan is the prize-winning author of The Food of Paradise: Exploring Hawaii’s Culinary Heritage, and a co-editor of the Oxford Companion to the History of Modern Science. In this interview, Rachel and I talk about her new book, Cuisine and Empire: Cooking in World History, and her transition from historian and philosopher of science to historian of food.
Rachel Laudan: I can remember when there was no such discipline as history of science! In fact, moving to history of food was a breeze. After all, the making of food from plant and animal raw materials is one of our oldest technologies, quite likely the oldest, and it continues to be one of the most important. The astonishing transformations that occur when, for example, a grain becomes bread or beer, or (later) perishable sugar cane juice becomes seemingly eternal sugar have always intrigued thinkers from the earliest philosophers to the alchemists to modern chemists. And the making of cuisines is shaped by philosophical ideas about the state, about virtue, and about growth, life, and death.
A lot of food writing is about how we feel about food, particularly about the good feelings that food induces. I'm more interested in how we think about food. In fact, I put culinary philosophy at the center of my book. Our culinary philosophy is the bridge between food and culture, between what we eat and how we relate to the natural world, including our bodies, to the social world, and to the gods, or to morality.

EH: Your earlier book, The Food of Paradise, necessarily dealt with food politics and food history. So many cultures were blended into local food in Hawaii. I treasure that book -- almost a miniature of what you’re doing in Cuisine and Empire.
RL: Well, thank you. It came as a surprise to me that I had a subject for a book-length treatment of something to do with food or cooking -- as interested in the subject as I certainly was. The only genre I knew was the cookbook, and I am not cut out to write recipes.

The book was prompted by a move to teach at the University of Hawaii in the mid-1980s. I went reluctantly, convinced by the tourist propaganda that the resources of the islands consisted of little more than sandy beaches and grass-skirted dancers doing the hula.
I couldn't have been more wrong. These tiny islands, the most remote inhabited land on earth, have extraordinarily varied peoples and environments. They were an extraordinary laboratory for observing the encounter of three radically different cuisines inspired by totally different culinary philosophies.
EH: It wasn’t all that long ago -- going on 18 years -- but you were a pioneer in the approach you took. It was history, not a compendium of anecdotes. And it was a treatment of culinary philosophies. Was there anything to tell you it would be so well received?
RL: Not at all. Mainland publishers were interested only in a book with exotic tropical recipes. I wanted to use the recipes as illustrations of how three cuisines were merged into a fusion cuisine called Local Food. Readers were welcome to cook from them, but that wasn’t their point.

The University of Hawaii Press, after some anguishing about whether a mainlander could write a book about the politically touchy subject of foods in Hawaii, took the manuscript. So I was bowled over when it won the Jane Grigson/Julia Child prize of the International Association of Culinary Professionals.
EH: Any publisher might have had more confidence, originally, in your cultural sensitivity, if they’d seen how many cultures you had by then participated in. And the list has grown. You’ve really gotten around.
RL: I have had the luck to have been successively immersed in four distinct cultures: those of England, the United States mainland, Hawaii, and Mexico. Growing up in Britain, I ate the way that many foodies today dream about: local food, entirely home cooked, raw milk from the dairy, home preserved produce from the vegetable garden. I never saw the inside of a restaurant until my teens. When I was 18, before I went to college, I spent a year teaching in one of the first girls' high schools in Nigeria, something that I later realized taught me a lot about the food of that part of the world. In addition, I have lived, shopped and cooked for periods of months in France, Germany, Spain, Australia, and Argentina.
EH: Were you always teaching?
RL: Not always. My husband Larry Laudan and I left academia of our own free will when we were in our 50s, thinking it would be exciting to try something different. We thought lots of others would do the same, but no. It turns out that is unusual.
EH: Unusual, I’ll say! How did you make the shift not only to a new field, but to a more independent life as a scholar and writer?
RL: At the time, I decided to put in cold calls to people I thought were doing interesting work: Joyce Toomre, Barbara Wheaton, and Barbara Haber, who were working on Russian, French, and American food history in Cambridge, Mass.; Alan Davidson, founder of the Oxford Symposium on Food and Cookery in England; Gene Anderson, the anthropologist and historian of Chinese cuisine; and the food writer Betty Fussell and the nutritionist Marion Nestle in New York. They could not have been more encouraging, inviting me to speak, welcoming me into their groups, calling from England, and introducing me to others, including Elizabeth Andoh, an expert on Japanese cuisine, and Ray Sokolov, then working for the Wall Street Journal, who had just published Why We Eat What We Eat, which examined long-distance exchanges of food. I was buoyed by this sense of community as I jumped fields and left academia.
EH: You weren’t even thinking whether the history of food was a serious area of study, were you?
RL: Not at all. I’ve always believed that if you can show people you are on to an important problem and have things to say about it, they will listen. Soon after I began working on food I spent a year as a research fellow at the now-defunct Dibner Institute for the History of Science and Technology at MIT. There, to the horror of many, I proposed a seminar on the European culinary revolution of the mid-seventeenth century, when main dishes flavored with spices and sugar and the acidic, often bread- or nut-thickened sauces of the Middle Ages were abandoned. They were replaced by a rigid separation of salt and sweet courses and sauces based on fats, as well as by airy drinks and desserts. This was the beginning of high French cuisine.
I argued that this was due to the replacement of Galenic humoral theory by a new theory of physiology and nutrition deriving from the work of Paracelsus and accepted by the physicians in the courts of Europe. Once it became clear that my theory could account very precisely for the change in cuisine, they were all ears. A scholarly version won the Sophie Coe Prize of the Oxford Symposium on Food and Cookery and was published in the pioneering food history journal, Petits Propos Culinaires. And a popular version was later published by Scientific American.
EH: I am moved and impressed that you left academe with a plan. Many people would have just waited by the phone rather than build a new network. Yet your central concerns, as an independent scholar, remained the same as when you were teaching, and have come to full fruition in Cuisine and Empire. Food and technology require to be considered together, do they not?
RL: Indeed they do. Food, after all, is something we make. Plants and animals are simply the raw materials. We don't eat them until we have transformed them into something we regard as edible. Even raw foodists chop, grind, mix, and allow some heating. So I could bring to food history the hard-won conclusions of historians of technology.
EH: What are historians of technology mainly concerned with?
RL: Well, historians of technology are not primarily concerned with inventions. The famous light bulb was useful only as part of a whole electrical system. Similarly, soy sauce, say, or cake has to be understood as part of whole culinary systems or cuisines. When these are transferred, disseminated, copied, they change the world.
And, perhaps most important, new ideas prompt changes in technology. They cause cooks, for example, to come up with or adopt new techniques. As the shift to French high cuisine shows, if people change their minds about what healthy food is, they will change their cuisine. When they adopt new religious beliefs, Buddhism or Christianity, say, they abandon meat cooked in the sacrificial fire for enlightenment-enhancing foods such as sugar and rice in the case of Buddhism, or for periods of fasting in the case of Christianity. When they reject monarchy as a political system, as happened in republican Rome, the early Dutch republic, and in the early United States, they reject the extravagant dining associated with reinforcing kingly or imperial power.
So a large part of the book is dedicated to laying out the culinary philosophy underlying each of the world's great cuisines. When that culinary philosophy is transformed, so is the cuisine.
EH: Ah! Just one reason I am so excited about Cuisine and Empire is that I cannot think of anyone else who could take all this on, even if they thought to.
RL: My background in history of science and technology was a big help. It had become clear that this was not simply one damn experiment and discovery after another but was shaped by great traditions of scientific inquiry: atomism, Newtonianism, or, to turn to my own specialty of geology, uniformitarianism. And I had explored the parallels between science and technology as cognitive systems, arguing that technology too was not just one invention after another but shaped by traditions of knowledge that, for example, specified materials, techniques, and ways of handling them in, say, the evolution of gearing, or interchangeable parts, or jet engines.
My experience in Hawaii had already suggested that there were far-reaching traditions in food too. So I asked, “If even the history of the foods of Hawaii has to be told in terms of the cross-oceanic, cross-continent expansion of a few great culinary traditions, might not that also be true of world food history?"
Cuisine and Empire answers that with a resounding yes. It's possible to capture most of food history in the last 20,000 years by talking about the expansion of about a dozen different cuisines.
EH: I will be thinking about this book for years and years. I’m already starting to wonder what broad cultural assumptions, that I’ve never thought to identify, much less question, I must bring with me when I cook... These are assumptions about science and technology, too, because science exists within culture. Despite how well prepared -- I want to say uniquely prepared -- you were for writing Cuisine and Empire, it was a tremendously ambitious project, was it not?
RL: It was ridiculously ambitious.
EH: Now, this is a question everyone who writes will understand. Did it ever seem so huge and unwieldy you wanted to chuck it?
RL: More times than I care to admit. What was I writing about? Farming? Cooking? Dining? What were the big turning points? And what about all the regions such as Central Europe and Southeast Asia that got short shrift? On the other hand I had the wonderful gift of time to take on a big project and I didn’t want to fritter it away. So I gritted my teeth, kept re-working my organization, telling myself I was as well prepared as anyone.
EH: How so?
RL: On the practical side, I had grown up on a working farm. And I learned early on that cooking was just as important as farming. One of my earliest memories was the day my father decided he would make bread with the wheat he had grown. At the time, there was no internet to look up how this might be done. He put the grain in a mortar and pounded it with a pestle. Nothing but flattened grains, even though many of the archaeologists in our part of the world assumed without experimenting that that was how it was done. He screwed the meat mincer on to the side of the large kitchen table and put the grains through that. Nothing but little lumps. Finally, he put a handful of grains on the flagstone floor and attacked them with a hammer. Fragments scattered all over the kitchen, but still no flour. With barns full of wheat, we could have starved because we did not know how to turn wheat into flour to make bread.
Later I had the chance to shop and cook in Europe, Australia, the USA and Mexico so I had a pretty good grip on a variety of cuisines. In Nigeria and Hawaii, I had experienced cuisines based on roots, not grains. At the University of Hawaii, I taught a wildly popular hands-on world history of food, learning a huge amount from my students, almost all of them of Asian ancestry. And in Mexico, women taught me what my father couldn’t, namely how to grind grains into flour.
On the intellectual side, in the course of my academic life I’d also taught social history, an eye-opener about what life, including diet, was like for ordinary people until very recently. And at the University of Hawaii, with its polyglot population, I’d had a chance to talk with many of the pioneers of world history.EH: Unlike when you were writing The Food of Paradise, was there also a wave to catch? In the form of other like minded scholars and writers at work?
RL: A wave? If there was, it was more in world history than in food history, which in spite of the efforts of some fine scholars, did not really become mainstream until a few years ago. World historians such as William McNeill, Philip Curtin, Alfred Crosby and Jerry Bentley -- the latter my colleague at Hawaii -- were drawing on decades of detailed historical scholarship to see if they could trace big patterns of disease, warfare, enslavement, ecological change, and religious conversion.
Why shouldn't I jump into the fray and see if there were big patterns to be traced in food? Surely it was just as important in human history as their topics. I'd always loved making sense of masses of complicated data. Now here was a real challenge.
EH: Rachel, I expect lots of readers for your book. Which other books do you think it will be on the night table with? I’m thinking particularly of Michael Pollan and Bee Wilson -- is there a cogent comparison? I note Paul Freedman blurbed your book, by the way -- along with Naomi Duguid, Anne Willan, and Dan Headrick. Gee, good company!
RL: Well, if mine ends up on the night table with these books, I will be tickled pink. And I think it complements them nicely. Michael Pollan's recent book, wonderfully written as always, is a long meditation on contemporary cooking. I differ from him in not drawing a sharp distinction between cooking and processing. Processing (pre- and post-industrial) and cooking are on a continuum of stages in food preparation. Bee Wilson's delightful book is also about cooking and full of wonderful historical insights as befits a historian. But whereas she treats themes such as knife, fire, and measure, I organize by the origin, spread, and transformation of cuisines. In my wildest dreams, I would like to think of this as the historical counterpart to Harold McGee’s On Food and Cooking.
EH: Readers will be intrigued by your historical treatment of “processing.” It’s become a bad word, code for turning food into non-food. I regularly read your blog, so I know you mean it a certain way that looks at the very big picture, including labor economics. But the food you personally like is emphatically not processed…
RL: Not if you limit “processed” to what many call junk food. I’ve never acquired a taste for fast-food hamburgers or soft drinks, have never eaten Wonder Bread or its siblings, and cook at home six nights out of seven. Picky is what I am. At the same time though, I think that we hinder our understanding of food if we don’t understand that all our food, with the exception of a few fruits, has been transformed, that is, processed, before we eat it. The foods that humans eat are one of their greatest creations, one of their greatest arts in that dual sense of technique and aesthetics, and we should celebrate that they are artifacts, not bemoan it. Like all human creations, some foods are better than others, and should be judged as such, but they are all creations.
EH: So there! How do cuisines speak to you personally -- as someone who loves food and cooking? If a cuisine does reveal a culture, then would tasting and analyzing it be as telling as listening to a poem or seeing a drama?
RL: Absolutely. Every time you go into the kitchen, you take your culture with you. As you plan a meal for guests, say, you bring to it assumptions about how to mesh their preferences with yours, about how much it is appropriate to spend on the meal, about how to accommodate their religious or ethical food rules, and about what they believe to be healthy and delicious.
I like to play a little game with myself when I go to a different country or meet someone from a different background. Knowing the history of that place or the heritage of that person, can I guess what the cuisine will be like? Or conversely, if presented with a meal, can I read it, dissecting, say, the noodles, the condiments, and the meat to tell a story about how it evolved over the centuries? And the answer is almost always yes.
EH: What holds a cuisine together?
RL: Again it was Hawaii that gave me the clue. It was not the local plants and animals, because Hawaii had almost nothing edible before humans arrived. It was systems of belief or ideas or culture. The Pacific Islanders all valued taro, which had a place in their traditional religion, and they all had a variant of the same herbal medicine. The Asians (apart from the Filipinos) had all been touched by Buddhism with its veneration of rice, and all subscribed to some form of humoral theory. And the Anglos came from a Christian tradition that placed high importance on raised bread, and they followed modern nutritional theory.
EH: You have empires in the title, but you haven’t mentioned them yet. Where do they fit in?
RL: Empires have been the most widely spread form of political organization and as such the major theater in which cuisines have been created and disseminated. It's not a case of one empire, one cuisine, though. Because aspiring leaders always copy and adapt the customs of what they see as successful rivals, cuisines were copied and adapted from one empire to another. In the ancient world, for example, Persian cuisine was copied and adapted by the Indians and the Greeks, and then the Romans copied and adapted Greek cuisine.
EH: So cuisines spread from empire to empire. Is it a coherent story all around the world?
RL: Amazingly, yes. Beginning with the first states, interlinked barley-wheat cuisines underpin all the early empires. Then in the next phase, Buddhism transforms cuisines of eastern Asia, followed by the Islamic transformation of cuisines from Southeast Asia in the east to parts of Africa and Spain in the west (and the shaping of the Catholic cuisines of medieval Europe), and Catholic cuisines transform the cuisines of most of the Americas in the sixteenth century. Protestant critiques open the way to modern cuisines in Europe, with the rest of the world quick to make similar changes. Protestant-inspired high French cuisine becomes world high cuisine, and Anglo cuisines create a middle way between high and humble cuisines, a middle way that is copied from Japan to Latin America in the late nineteenth century. Although there are countless wrinkles, exceptions, and idiosyncrasies, at the core is a simple, coherent story of a few big families of cuisine and three major stages.
EH: If empires spread cuisines, does the reverse apply? Does food affect the success of empires, or smaller states? I have read in Jared Diamond about food affecting the success or failure of a whole society -- the Norse colony in Greenland, whose people starved rather than eat fish, for instance. What about embracing a culturally new food for political reasons?
RL: Certainly most people in the past believed that food could affect the success or failure of a whole society. At the end of the nineteenth century, for example, leaders around the world looked at what seemed to be the unstoppable expansion of the Anglo world, that is, the British Empire and the United States of America.
One explanation was that Anglo strength derived from a cuisine based on white wheaten bread and beef served at family meals. Unlike alternative explanations such as the special characteristics of Anglos or their upbringing in bracing climates, this offered a strategy for countering this expansion. If you could persuade your subjects or citizens to abandon corn or rice or cassava, and shift to bread or pasta, if you could persuade them to eat more meat, if you could persuade them to eat as families, then they might become stronger.
EH: Well, I’m naïve, then. “Eating as a family” is not a given across cultures? Please tell me more.
RL: The importance of the family meal as the foundation of society and the state is so deeply ingrained in the American tradition that it’s hard to appreciate just how American it is, perhaps inherited from Dutch settlers. Of course many meals were prepared in the home throughout history, though institutional food was more important than we realize. Just think of the courts, the military, the religious orders, as well as prisons, boarding schools, poor houses, and so on. Just think of the pictures of dining in the past and how rarely it is a family that is depicted. Who you ate with reflected rank rather than family ties.
But even when prepared in the home, the meal was often very different from that depicted in Norman Rockwell’s “Freedom from Want.” The children might eat in the nursery, as in nineteenth-century middle class England. Or the father might eat in a different place and at a different time from the wife, as in Japan. Or the father might eat food prepared by different wives on different days, as in Nigeria. Or the meal might include unrelated apprentices and farmhands. So to many societies, the idea of the communal family meal as offering both physical and moral/social nourishment was a novelty.
EH: And the shift to bread, pasta, and meat?
RL: Even in the United States, there were concerted efforts to persuade southerners, particularly in the Appalachians, to abandon corn bread for biscuits of wheat flour. And Brazilians, Mexicans, Venezuelans, Colombians, Indians, and Chinese debated, and often put in place policies to bring about this change. The most successful efforts were in Japan where the diets of the military and of people living in cities were changed to add more meat, more fat, more wheat, and to introduce family meals.
EH: Ah! Taking on the strength of the aggressor, or of the dominant culture! I wonder who’s doing that right now, and with regard to whose food… I’m fascinated with the cover of Cuisine and Empire. I know it’s a Japanese print. I wanted it to be the Jesuits, but that’s centuries off the mark.
RL: It’s a print in the Library of Congress collection by the Japanese artist, Yoshikazu Utagawa, made in 1861 just a few years after the forcible opening of Japan to the West. It shows two Americans, great big fellows, one of them baking bread in a beehive oven and the other preparing a dish over a bench top stove. I chose it because it so nicely illustrates the themes of the book. It puts the kitchen at the center. And it shows the keen interest that societies took in observing, and often copying, the cuisines of rivals.
EH: The kitchen at the center of history -- a beautiful phrase. The book launches very soon.
RL: I believe the official launch date is in November. Copies, though, will be available this week.
EH: Well, mine will arrive today or tomorrow. Thank you so much for this fascinating preview and discussion. I’m already thinking how to incorporate 20,000 years of causality into the book party menu.
A different version of this interview, emphasizing gastronomy in history, is available at The Rambling Epicure.
Read Rachel’s article for Saudi Aramco World on the Islamic influence on Mexican cuisine.
Read Rachel’s personal blog, “A Historian’s Take on Food and Food Politics,” at http://www.rachellaudan.com/
Live in or around Boston? Come with me to a talk by Rachel Laudan the evening of October 28 at BU!
Monday, August 19, 2013
by Jalees Rehman
The "Reclaim Scientism" movement is gaining momentum. In his recent book "The Atheist's Guide to Reality: Enjoying Life without Illusions", the American philosopher Alexander Rosenberg suggests that instead of viewing the word "scientism" as an epithet, atheists should expropriate it and use it as a positive term which describes their worldview. Rosenberg also provides a descriptive explanation of how the term "scientism" is currently used:
Scientism — noun; scientistic — adjective.
Scientism has two related meanings, both of them pejorative. According to one of these meanings, scientism names the improper or mistaken application of scientific methods or findings outside their appropriate domain, especially to questions treated by the humanities. The second meaning is more common: Scientism is the exaggerated confidence in the methods of science as the most (or the only) reliable tools of inquiry, and an equally unfounded belief that at least the most well established of its findings are the only objective truths there are.
Rosenberg's explanation of "scientism" is helpful because it highlights the difference between science and scientism. Science refers to applying scientific methods as tools of inquiry to collect and interpret data, whereas "scientism" refers to cultural and ideological views promoting the primacy or superiority of scientific methods over all other tools of inquiry. Some scientists embrace scientistic views, in part because scientism provides a much-needed counterbalance to aggressive anti-science attitudes that are prevalent on both ends of the political spectrum and among some religious institutions. However, other scientists are concerned about propping up scientism as a bulwark against ideological science-bashing because it smacks of throwing out the baby with the bathwater. Science is characterized by healthy skepticism, the dismantling of dogmatic views and a continuous process of introspection and self-criticism. Infusing science with ideological stances concerning the primacy of the scientific method could undermine the power of science which is rooted in its willingness to oppose ideological posturing.
As a scientist who investigates signaling mechanisms and the metabolic activity of stem cells, I am concerned about the rise of some movements that fall under the "scientism" umbrella, because they have the potential to impede scientific discovery. Scientific progress relies on recognizing the limitations and flaws in existing scientific concepts and refuting scientific views that cannot be adequately explained by newer scientific observations. An exaggerated confidence in the validity of scientific findings could stifle such refutations. For example, some of the most widely cited scientific papers in the field of stem cell biology cannot be replicated, but they have had an enormous detrimental impact on science and medicine, in part because of an exaggerated faith in the validity of some initial experiments.
I first began studying the use of stem and progenitor cells to enhance cardiovascular repair and regeneration over a decade ago. At that time, many of my colleagues and I were excited about a recent paper published by a group of scientists based at New York Medical College in the high-profile scientific journal Nature in 2001. The paper suggested that injected adult bone marrow stem cells could be successfully converted into functional heart cells and recover heart function after a heart attack by generating new heart tissue. The usage of adult regenerative cells was a very attractive option because it would allow patients to be treated with their own cells and could circumvent the ethical and political controversies associated with embryonic stem cells. This animal study gained even more traction when supportive experimental and human studies were published by other scientists. Then a German research group under the direction of the cardiologist Bodo Strauer published a paper in 2002 which showed that not only could adult human bone marrow cells be safely injected into heart attack patients but that these adult cells even appeared to improve heart function.
The stir caused by these discoveries was not just confined to scientists. The findings were widely reported in the media and I recall numerous discussions with physicians who claimed that cardiovascular disease would soon be a problem of the past, because patients would receive routine bone marrow injections after heart attacks. One colleague even advised me to reconsider my career choices since the usage of bone marrow cells could address most if not all issues in cardiovascular regeneration.
This excitement was somewhat dampened when a refutation of the 2001 Nature paper was published in 2004, also in the journal Nature. A collaborative effort of two US-based stem cell research groups was not able to replicate the findings of the 2001 paper. The scientists were unable to find any significant conversion of adult bone marrow cells into functional heart cells. However, many physicians, scientists and patients had already adopted an unshakable belief in the validity of the bone marrow cell treatments after heart attacks. Hundreds of heart attack patients were being enrolled in clinical trials involving the injection of bone marrow cells. Clinics in Thailand or Mexico began offering bone marrow injections to heart patients from all around the world, for a hefty price, both in terms of monetary payments and in terms of safety, because they exposed patients to the risks of invasive injections of bone marrow cells into their hearts.
Although the initial clinical studies with small numbers of enrolled patients had shown a beneficial effect of bone marrow cell injections, subsequent trials could not confirm these early successes. It became apparent that even if bone marrow cell injections did exert a therapeutic benefit in heart attack patients, these benefits were rather modest. Scientists increasingly realized that the observed benefits may have been causally unrelated to the small fraction of stem cells contained within the bone marrow. Instead of bone marrow stem cells becoming functional heart cells, some bone marrow cells may have merely released protective proteins which could explain the slight improvement in heart function, without necessarily generating new heart tissue. One of the largest bone marrow cell treatment trials for heart attack patients to date was published in 2013 and showed no evidence of improved heart function following the cell injections.
In hindsight, many of us have wondered why we were not more skeptical of the initial findings. When compared to embryonic stem cells, adult bone marrow stem cells have a very limited ability to differentiate into cell types other than those typically found in the bone marrow. Furthermore, the clinical studies which reported successful treatment of heart attack patients used unpurified bone marrow cells from the patients. The stem cell content of such unpurified preparations is roughly 1% or less, which means that 99% of the injected bone marrow cells were NOT stem cells. For the tiny fraction of bona fide stem cells in the bone marrow to convert into sufficient numbers of beating heart cells and even create new functional heart tissue would have been akin to a miracle.
Critical thinking and healthy skepticism, the scientific peer review process, and even common sense should have alerted us to the problems associated with these claims, but they all failed. Perhaps scientists, physicians and patients were so excited by the prospect of creating new heart tissue that they suspended much-needed skepticism. Exaggerated confidence in the validity of the scientific data published in highly regarded scientific journals may have played an important role. Unintentional cognitive biases of scientists who conducted the experiments and a disregard for alternative explanations could have also contributed to the propagation of ideas that would not withstand subsequent testing. Scientific misconduct may have also been a factor, as the cardiologist who conducted the first clinical studies with bone marrow cell infusions in heart attack patients is currently under investigation for massive errors in how the experiments were conducted and reported.
This is just one example to illustrate problems associated with an exaggerated confidence in the validity of scientific findings, a kind of confidence which scientism engenders. Such examples are by no means restricted to stem cell biology. A recent analysis of scientific reproducibility in cancer research claimed that only 11% of published cancer biology papers could be independently validated, and other areas of scientific research may be similarly afflicted by the problem of irreproducibility of published, peer-reviewed scientific papers.
Increasing numbers of scientists are recognizing that current approaches to interpreting and publishing scientific data are severely flawed. Exaggerated confidence in the validity of scientific findings is frequently misplaced and claims that scientific results represent objective truths need to be re-evaluated particularly when a high percentage of experimental results cannot be replicated by fellow scientists. In this particular context, the views of scientists who are trying to learn lessons from the failures of the scientific peer review process are not so different from those of "scientism" critics. However, many scientists, myself included, remain reluctant to use the expression "scientism".
Rosenberg illustrates the problems associated with the word "scientism". Since "scientism" is often used as an epithet, invoking "scientism" may impede constructive discussions about the appropriateness of applying scientific methods. While a question such as "Can issues of morality be answered by scientific experiments?" may be important, introducing the term "scientism" with all its baggage distracts from addressing the question in a rational manner.
The other major issue associated with the term "scientism" is its vagueness. It is difficult to discuss "scientism" if it encompasses a broad range of distinct concepts such as the notion that science has to remain within certain boundaries as well as a criticism of overweening confidence in the validity of scientific findings. I can easily identify with asking for a realistic reappraisal of whether or not scientific results obtained by one laboratory constitute an objective, scientific truth, but I am opposed to creating boundary lines that forbid certain forms of scientific inquiry because they might infringe on the domains of the humanities. Instead of using the diffuse expression "scientism", I have thus introduced the term "science mystique" to criticize the exaggerated, near-mythical confidence in the infallibility of scientific results.
Rosenberg's view that the expression "scientism" and also the culture of "scientism" should be embraced received a big boost when the scientist Steven Pinker published his polemic essay "Science Is Not Your Enemy: An impassioned plea to neglected novelists, embattled professors, and tenure-less historians". Like Rosenberg, Pinker wants to rehabilitate the expression "scientism" and use it to indicate a positive, science-affirming worldview. Unfortunately, instead of engaging in a constructive dialogue about the culture of "scientism", Pinker reveals his condescending attitude towards the humanities throughout the essay. His notion of respect for the humanities consists of pointing out how much better off classical philosophers might have been if they had been aware of modern neuroscience. But Pinker does not comment on the converse proposition: Would scientists be better off if they knew more about philosophy? Pinker goes on to portray scientists as dynamic forward thinkers, while humanities scholars are supposedly weighed down by their intellectual inertia:
"Several university presidents and provosts have lamented to me that when a scientist comes into their office, it's to announce some exciting new research opportunity and demand the resources to pursue it. When a humanities scholar drops by, it's to plead for respect for the way things have always been done."
Pinker glosses over the reproducibility issues in science and reaffirms his faith in the current system of scientific peer review without commenting on its limitations:
"Scientism, in this good sense, is not the belief that members of the occupational guild called "science" are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable."
The philosopher and scientist Massimo Pigliucci has written an excellent response to Steven Pinker, which discusses the flaws inherent in Pinker's polemic and explains why promoting a culture of scientism or a "science mystique" is not in the interest of science. I also agree with the physicist Sean Carroll who reminds us that we should get rid of the term "scientism"; not because he wants to get rid of a critical evaluation of science, but because he thinks this poorly defined term is not very helpful.
Whether or not we use the word "scientism", it is apparent that the debates between the critics and defenders of the culture of "scientism" are here to stay. It is unlikely that rehabilitating the unhelpful word "scientism" or polemical stances towards the humanities will contribute to this debate in a meaningful manner. The challenge for scientists and non-scientists is to embrace and address the legitimate criticisms of science without promoting the agenda of irrational anti-science bashing.
Monday, July 22, 2013
Three Seconds: Poems, Cubes and the Brain
by Jalees Rehman
A child drops a chocolate chip cookie on the floor, immediately picks it up, looks quizzically at a parental eye-witness and proceeds to munch on it after receiving an approving nod. This is one of the versions of the "three second rule", which suggests that food can be safely consumed if it has had less than three seconds contact with the floor. There is really no scientific basis for this legend, because noxious chemicals or microbial flora do not bide their time, counting "One one thousand, two one thousand, three one thousand,…" before they latch on to a chocolate chip cookie. Food will likely accumulate more bacteria, the longer it is in contact with the floor, but I am not aware of any rigorous scientific study that has measured the impact of food-floor intercourse on a second-to-second basis and identified three seconds as a critical temporal threshold. Basketball connoisseurs occasionally argue about a very different version of the "three second rule", and the Urban Dictionary provides us with yet another set of definitions for the "three second rule", such as the time after which one loses a vacated seat in a public setting. I was not aware of any of these "three second rule" versions until I moved to the USA, but I had come across the elusive "three seconds" time interval in a rather different context when I worked at the Institute of Medical Psychology in Munich: Stimuli or signals that occur within an interval of up to three seconds are processed and integrated by our brain into a "subjective present".
I joined the Institute of Medical Psychology at the University of Munich as a research student in 1992 primarily because of my mentor Till Roenneberg. His intellect, charm and infectious enthusiasm were simply irresistible. I scrapped all my plans to work on HIV, cancer or cardiovascular disease and instead began researching the internal clock of marine algae in Till's laboratory – in an Institute of Medical Psychology. Within weeks of working at the institute, I realized how fortunate I was. Ernst Pöppel, one of Germany's leading neuroscientists and the director of the institute, had created a multidisciplinary research heaven. Ernst assembled a team of remarkably diverse researchers who studied neurobiology, psychology, linguistics, mathematics, philosophy, endocrinology, cell physiology, marine biology, computer science, ecology – all on the same floor. Since I left the institute nearly 20 years ago, I have worked in many academic departments at various institutions, each claiming to value multidisciplinary studies, but I have never again encountered any place that has been able to successfully integrate natural sciences, social sciences and the humanities in the same way as the Munich institute.
The central, unifying theme of the institute was time. Not physical time, but biological and psychological time. How does our brain perceive physical time? What is the structure of perceived time? What regulates biological oscillations in humans, animals and even algae? Can environmental cues modify temporal perception? The close proximity of so many disciplines made for fascinating coffee-break discussions, forcing us to re-evaluate our own research findings in the light of the discoveries made in neighboring labs and inspired us to become more creative in our experimental design.
Some of the most interesting discussions I remember revolved around the concept of the subjective present, i.e. the question of what it is that we perceive as the "now". Our brain continuously receives input from our senses, such as images we see, sounds we hear or sensations of touch. For our brain to process these stimuli appropriately, it creates a temporal structure so that it can tell apart preceding stimuli from subsequent stimuli. But the brain not only assigns a temporal order to the stimuli, it also integrates them and conveys to us a sense of the subjective past and the subjective present. We often use vague phrases such as "living in the moment" and we all have a sense of what is the "now", but we do not always realize what time intervals we are referring to. If we just saw an image or heard a musical note one second ago, physical time would clearly place them in "the past". Decades of research performed by Ernst Pöppel and his colleagues at the institute, as well as several other laboratories around the world, suggest that our brain integrates our subjective temporal reality in chunks of approximately three second duration.
Temporal order can be assessed in a rather straightforward experimental manner. Research subjects can be provided sequential auditory clicks, one to each ear. If the clicks are one second apart, nearly all participants can correctly identify whether or not the click in the right ear came before the one in the left ear. It turns out that this holds true even if the clicks are only 100 milliseconds (0.1 seconds) apart. The threshold for being able to correctly assign a temporal order to such brief stimuli lies around 30 milliseconds for young adults (up to 25 years old) and 60 milliseconds for older adults.
Temporal integration of stimuli, on the other hand, cannot be directly measured through experiments. It is not possible to ask research subjects "Are these two stimuli part of your now?" and expect a definitive answer, because everyone has a different concept and definition of what constitutes "now". Therefore, researchers such as Ernst Pöppel have had to resort to indirect assessments of temporal integration, and ascertain what interval of time is grasped as a perceptual unit by our brain. An excellent summary of the work can be found in the paper "A hierarchical model of temporal perception". Instead of reviewing the hundreds of experiments that have led researchers to derive the three-second interval, I will just review two studies which I believe are among the most interesting.
In one of the studies, Pöppel partnered up with the American poet Frederick Turner. Turner and Pöppel recorded and measured hundreds of Latin, Greek, English, Chinese, Japanese, French and German poems, analyzing the length of each LINE. They used the expression LINE to describe a "fundamental unit of metered poetry". In many cases, a standard verse or line in a poem did indeed fit the Turner-Pöppel definition of a LINE, but they used the more generic LINE for their analysis because not all languages or orthographic traditions write or print a LINE in a separate space as is common in English or German poems. If a long line in a poem was divided by a caesura into two sections, Turner and Pöppel considered this to be two LINES.
The basic idea behind this analysis was that each unit of a poem (LINE) conveys one integrated idea or thought, and that the reader experiences each LINE as a "now" moment while reading the poem. Turner and Pöppel published their results in the classic essay "The Neural Lyre: Poetic Meter, the Brain, and Time" for which they also received the Levinson Prize in 1983. Their findings were quite remarkable. The peak duration of LINES in poems was between 2.5 seconds and 3.5 seconds, independent of what language the poems were written in. For example, 73% of German poems had a LINE duration between 2 and 3 seconds. Here are some of their other specific findings:
Epic meter (a seven-syllable line followed by a five-syllable one) (average) 3.25 secs.
Waka (average) 2.75 secs.
Tanka (recited much faster than the epic, as 3 LINES of 5, 12, and 14 syllables) (average) 2.70 secs.
Four-syllable line 2.20 secs.
Five-syllable line 3.00 secs.
Seven-syllable line 3.80 secs.
Pentameter 3.30 secs.
Seven-syllable trochaic line 2.50 secs.
Stanzas using different line lengths 3.00 secs., 3.10 secs.
Ballad meter (octosyllabic) 2.40 secs.
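As a back-of-the-envelope illustration (not part of Turner and Pöppel's own analysis), one can simply average the durations listed above to see how tightly they cluster around the three-second mark:

```python
# LINE durations (in seconds) as quoted above from Turner and Pöppel;
# the stanza entry contributes two values (3.00 and 3.10 secs).
durations = [3.25, 2.75, 2.70, 2.20, 3.00, 3.80,
             3.30, 2.50, 3.00, 3.10, 2.40]

# Simple arithmetic mean across the listed meters.
mean_duration = sum(durations) / len(durations)
print(f"Mean LINE duration: {mean_duration:.2f} secs")
```

The mean falls close to three seconds, consistent with the essay's central claim, although the raw list of course weights each meter equally rather than by how many poems were measured.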
Poets all around the world did not conspire to write three-second LINES. It is more likely that our brain may be attuned to processing poetic information in 3 second chunks and that poets are subconsciously aware of this. This was not a controlled, rigorous scientific study, but the results are nevertheless fascinating, not only because they point towards the three second interval that neuroscientists have established in recent decades for temporal integration in the brain, but also because they suggest that the rules for metered poetry may be universal. I strongly advise everyone to read the now classic essay by Turner and Pöppel, to then try reading aloud their own favorite poems and see if the LINES indeed approximate three seconds.
A second approach to gaining insight into the inner workings of the temporal integration process in our brain is the use of perceptual reversal experiments, such as those performed with the Necker cube. This cube is a 2-D line drawing, which our brain perceives as a cube – or actually as two distinct cubes. Most people who stare at the drawing for a while will notice that their mind creates two distinct cube representations. Once the mind perceives the two different cubes, it becomes very difficult to cling to just one cube representation. Our brain starts flip-flopping between the two cubes, even when we try our best to hang on to just one of the cube representations in our mind. Interestingly, the average duration that it takes for our mind to automatically shift from one cube representation to the other approximates three seconds.
Nicole von Steinbüchel, a colleague of Ernst Pöppel at the Institute of Medical Psychology, asked a fascinating question. If the oscillatory perceptual shift between the two cube representations is indeed indicative of the "subjective present" and the temporal integration capacity, would brain injury affect the oscillation? She studied patients who had brain lesions (usually due to a stroke) in either the left or right hemisphere of the brain. She and her team of researchers were able to show that while healthy participants reported a three second interval between the automatic shifting of the cube representations in their brain, the average shift time was four seconds in patients with brain damage in the left brain hemisphere and up to six seconds if the damage had occurred in a certain part of the right brain hemisphere. Nicole von Steinbüchel's research demonstrates the clinical relevance of studying temporal integration, but it also suggests that the brain may have designated areas which specialize in creating a temporal structure.
The analysis of poetry and the Necker cube experiments are just two examples of cognitive studies indicating that our brain uses three second intervals to process information and generate the experience of the "now" or the "subjective present". Taken alone, none of these studies offers conclusive proof that our brain uses three second intervals, but one cannot help but notice a remarkable convergence of data pointing towards a cognitive three second rule.
Frederick Turner and Ernst Pöppel (1983) "The Neural Lyre: Poetic Meter, the Brain, and Time" Poetry 142(5): 277-309. A reprint also available online here: http://www.cosmoetica.com/B22-FT2.htm
Ernst Pöppel (1997) "A hierarchical model of temporal perception" Trends in Cognitive Sciences 1(2): 56-61.
Nicole von Steinbüchel (1998) "Temporal ranges of central nervous processing: clinical evidence" Experimental Brain Research 123 (1-2): 220-233.
Monday, February 04, 2013
The Science Mystique
by Jalees Rehman
Many of my German high school teachers were intellectual remnants of the “68er” movement. They had either been part of the 1968 anti-authoritarian and left-wing student protests in Germany or they had been deeply influenced by them. The movement gradually fizzled out and the students took on seemingly bourgeois jobs in the 1970s as civil servants, bank accountants or high school teachers, but their muted revolutionary spirit remained on the whole intact. Some high school teachers used the flexibility of the German high school curriculum to infuse us with the revolutionary ideals of the 68ers. For example, instead of delving into Charles Dickens in our English classes, we read excerpts of the book “The Feminine Mystique” written by the American feminist Betty Friedan.
Our high school level discussion of the book barely scratched the surface of the complex issues related to women’s rights and their portrayal by the media, but it introduced me to the concept of a “mystique”. The book pointed out that seemingly positive labels such as “nurturing” were being used to propagate an image of the ideal woman, who could fulfill her life’s goals by being a subservient and loving housewife and mother. She might have superior managerial skills, but they were best suited to run a household and not a company, and she would need to be protected from the aggressive male-dominated business world. Many women bought into this mystique, precisely because it had elements of praise built into it, without realizing how limiting it was to be placed on a pedestal. Even though the feminine mystique has largely been eroded in Europe and North America, I continue to encounter women who cling on to this mystique, particularly among Muslim women in North America who are prone to emphasize how they feel that gender segregation and restrictive dress codes for women are a form of “elevation” and honor. They claim these social and personal barriers make them feel unique and precious.
Friedan’s book also made me realize that we were surrounded by so many other similarly captivating mystiques. The oriental mystique was dismantled by Edward Said in his book “Orientalism”, and I have to admit that I myself was transiently trapped in this mystique. Being one of the few visibly “oriental” individuals among my peers in Germany, I liked the idea of being viewed as exotic, intuitive and emotional. After I started medical school, I learned about the “doctor mystique”, which was already on its deathbed. Doctors had previously been seen as infallible saviors who devoted all their time to heroically saving lives and whose actions did not need to be questioned. There is a German expression for doctors which is nowadays predominantly used in an ironic sense: “Halbgötter in Weiß” – Demigods in White.
Through persistent education, books, magazine and newspaper articles, TV shows and movies, many of these mystiques have been gradually demolished. It has become common knowledge that women can be successful as ambitious CEOs or as brilliant engineers. We now know that “Orientals” do not just indulge their intuitive mysticism but can become analytical mathematicians. People readily accept the fact that doctors are human, they make mistakes and their medical decisions can be influenced by pharmaceutical marketing or by spurious squabbles with colleagues. One of my favorite TV shows was the American medical comedy Scrubs, which gave a surprisingly accurate portrayal of what it meant to work in a hospital. It was obviously fictional and contained many exaggerations to increase its comedic impact, but I could relate to many of the core themes presented in the show. The daily frustrations of being a physician-in-training or a senior attending physician, the fact that physicians make mistakes, the petty fights among physicians that can negatively impact their patients, the immense stress of having to deal with patients who cannot be helped, financial incentives, physicians and nurses with substance abuse problems – these were all challenges that either I or my friends and colleagues had experienced.
One lone TV show such as Scrubs cannot be credited for taking down the “doctor mystique”, but it did provide a vehicle for us physicians to talk about the “dark side of medicine”. Speaking about flawed clinical decision-making and how personal emotions can affect our interactions with patients is not easy for physicians, because this form of introspection can lead to paralyzing guilt. All physicians know they make mistakes, and even though we ourselves do not buy into the “doctor mystique”, we may still feel the burden of having to live up to it. I remember how I used to discuss some of the Scrubs episodes with other physicians and these light-hearted conversations about funny scenes in the TV show sometimes led to deeper discussions about our own personal experiences and the challenges we faced in our profession.
Being placed on a pedestal is a form of confinement. Dismantling mystiques not only liberates the individuals who are being mystified, but it can also benefit society as a whole. In the case of the doctor mystique, patients are now more likely to question the decisions of physicians, thus forcing doctors to explain why they are prescribing certain medications or expensive procedures. The internet enables patients to obtain information about their illnesses and treatment options. Instead of blindly following doctors’ orders, they want to engage their doctor in a discussion and become an integral part of the decision-making process. The recognition that gifts, free dinners and honoraria paid by pharmaceutical companies strongly influence what medications doctors prescribe has led to the establishment of important new rules at universities and academic journals to curb this influence. Many medical schools now strongly restrict interactions between pharmaceutical company representatives and physicians-in-training. Academic journals and presentations at universities or medical conferences require a complete disclosure of all potential financial relationships that could impact the objectivity of the presented data. Some physicians may find these regulations cumbersome and long for the “mystique” days when their intentions were not under such scrutiny, but many of us think that these changes are making us better physicians and improving medical care.
As I watch many of these mystiques crumble, one mystique continues to persist: The Science Mystique. As with other mystiques, it consists of a collage of falsely idealized and idolized notions of what science constitutes. This mystique has many different manifestations, such as the firm belief that reported scientific findings are absolutely true beyond any doubt, scientific results obtained today are likely to remain true for all eternity and scientific research will be able to definitively solve all the major problems facing humankind. This science mystique is often paired with an over-simplified and reductionist view of science. Some popular science books, press releases or newspaper articles refer to scientists having discovered the single gene or the molecule that is responsible for highly complex phenomena, such as a disease like cancer or philosophical constructs such as morality. I was recently discussing a paper on wound healing and came across an intriguing comment in a public comment thread: “When I read an article related to science it puts me in the mindset of perfection and credibility”. This is just one anecdotal comment, but I think that it captures the Science Mystique held by many non-scientists who place science on a pedestal of perfection.
As flattering as it may be, few scientists see science as encapsulating perfection. Even though I am a physician, most of my time is devoted to working as a cell biologist. My laboratory currently studies the biology of stem cells and the role of mitochondrial metabolism in stem cells. In the rather antiquated division of science into “hard” and “soft” sciences, where physics is considered a “hard” science and psychology or sociology are considered “soft” sciences, my field of work would be considered a middle-of-the-road, “firm” science. As cell biologists, we are able to conduct well-defined experiments, falsify hypotheses and directly test cause-effect relationships. Nevertheless, my experience with scientific results is that they are far from perfect and most good scientific work usually raises more questions than it provides answers. We scientists are motivated by our passion for exploration, and we know that even when we are able to successfully obtain definitive results, these findings usually point out even greater deficiencies and uncertainties in our knowledge. Stuart Firestein’s wonderful book “Ignorance: How It Drives Science” is a sincere and eloquent testimony to the key role of ignorance in scientific work. A thoughtful “I do not know the answer to this” uttered by a scientist is typically seen as a sign of scientific maturity, because it shows the humility of the scientist and indicates a potential new direction for scientific research. On the other hand, when a scientist proudly proclaims to have found the most important gene or to have defined the most important pathway for a certain biological process, it frequently indicates a lack of understanding of the complexity of the matter at hand.
One key problem of science is the issue of reproducibility. Psychology is currently undergoing a soul-searching process because many questions have been raised about why published scientific findings have such poor reproducibility when other psychologists perform the same experiments. One might attribute this to the “soft” nature of psychology, because it deals with variables such as emotions that are difficult to quantify and with heterogeneous humans as their test subjects. Nevertheless, in my work as a cell biologist, I have encountered very similar problems regarding reproducibility of published scientific findings. My experience in recent years is that roughly only half the published findings in stem cell biology can be reproduced when we conduct experiments according to the scientific methods and protocols of the published paper.
This estimate of 50% reproducibility is not a comprehensive analysis. We only attempt to replicate findings which are highly relevant to our work and which are published in a select group of scientific journals. If we tried to replicate every single paper in the field of stem cell biology, the success rate might be even lower. On the other hand, we devote a limited amount of time and resources to replicating results, because there is no funding available for replication experiments. It is possible that if we devoted enough time and resources to replicate a published study, tinkering with the different methods, trying out different batches of stem cells and reagents, we might have a higher likelihood of being able to replicate the results. Since negative studies are difficult to publish, these failed attempts at replication are buried and the published papers that cannot be replicated are rarely retracted. When scientists meet at conferences, they often informally share their respective experiences with attempts to replicate research findings. These casual exchanges can be very helpful, because they help us ensure that we do not waste resources to build new scientific work on the shaky foundations of scientific papers that cannot be replicated.
In addition to knowing that a significant proportion of published scientific findings cannot be replicated, scientists are also aware of the fact that scientific knowledge is dynamic. Technologies used to acquire scientific data are continuously changing and the new scientific data amassed during any single year by far outpaces the capacity of scientists to fully understand and analyze it. Most scientists are currently struggling to keep up with the new scientific knowledge in their own field, let alone put it in context with the existing literature. As I have previously pointed out, more than 30-40 scientific papers are published on average on any given day in the field of stem cell biology. This overwhelming wealth of scientific information inevitably leads to a short half-life of scientific knowledge, as Samuel Arbesman has expressed in his excellent book “The Half-Life of Facts”. What is considered a scientific fact today may be obsolete within five years. The books by Firestein and Arbesman are shining examples among the plethora of recent popular science books, because they explain why scientific knowledge is so ephemeral and yet so important. Hopefully, these books will help deconstruct the Science Mystique.
One aspect of science that receives comparatively little attention in popular science discussions is the human factor. Scientific experiments are conducted by scientists who have human failings, and thus scientific fallibility is entwined with human frailty. Some degree of limited scientific replicability is intrinsic to the subject matter itself. A paper on cancer cells published by one group of researchers may use a different set of cancer cells obtained from their patients than those available to other researchers. At other times, researchers may make unintentional mistakes in interpreting their data or may unknowingly use contaminated samples. One can hardly blame scientists for heterogeneity of their tested samples or for making honest errors. However, there are far more egregious errors made by scientists that have a major impact on how science is conducted. There are cases of outright fraud, where researchers just manufacture non-existent data, but these tend to be rare and when colleagues and scientific journals or organizations become aware of these cases of fraud, published papers are retracted and scientists face punitive measures. Such overt fraud tends to be unusual, and of the hundred or more scientific colleagues with whom I have personally worked, I do not know of any who has committed such fraud. However, what occurs far more frequently than gross fraud is the gentle fudging of scientific data, consciously or subconsciously, so that desired scientific results are obtained. Statistical outliers are excluded, especially if excluding them helps direct the data in the desired direction. Like most humans, scientists also have biases and would like to interpret their data in a manner that fits with their existing concepts and ideas.
Human fallibility not only affects how scientists interpret and present their data, but can also have a far-reaching impact on which scientific projects receive research funding or the publication of scientific results. When manuscripts are submitted to scientific journals or when grant proposals are submitted to funding agencies, they usually undergo a review by a panel of scientists who work in the same field and can ultimately decide whether or not a paper should be published or a grant funded. One would hope that these decisions are primarily based on the scientific merit of the manuscripts or the grant proposals, but anyone who has been involved in these forms of peer review knows that, unfortunately, personal connections or personal grudges can often be decisive factors.
Lack of scientific replicability, the uncertainties that come with new scientific knowledge, fraud and fudging, biases during peer review – these are just some of the reasons why scientists rarely believe in the mystique of science. When I discuss this with acquaintances who are non-scientists, they sometimes ask me how I can love science if I have encountered these “ugly” aspects of it. My response is that I love science despite this “ugliness”, and perhaps even because of it. The fact that scientific knowledge is dynamic and ephemeral, the fact that we do not need to feel embarrassed about our ignorance and uncertainties, the fact that science is conducted by humans and is infused with human failings – these are all reasons to love science. When I think of science, I am reminded of the painting “Basket of Fruit” by Caravaggio, a still life of a fruit bowl; unlike other still-life paintings of fruit, Caravaggio’s shows discolored and decaying leaves and fruit. The beauty and ingenuity of Caravaggio’s painting lies in its ability to show fruit as it really is, not the idealized fruit baskets that other painters so often depicted.
The challenge that we scientists face is to share our love for science, despite its imperfections, with those around us who do not actively work in the field. I remember speaking to a colleague of mine about a wonderful spoof of a Lady Gaga song called “Bad Project”. We both agreed that the spoof was spot on, showing the frustrations of a PhD student who cannot get experiments to work, has to base experiments on poorly documented lab notebooks, and must endure the tedious nature of scientific work. My colleague was concerned that if such spoofs ridiculing laboratory work became too common, they would embolden the American anti-science movement, which is already very strong. Anyone who closely follows American science politics knows that creationists and global-warming deniers are constantly looking for flaws in scientific studies and that they use rare, occasional errors as opportunities to suggest that well-established and replicated scientific results or theories should be discarded. In addition to the agenda of these specific anti-science interest groups, there are also many groups lobbying for severe budget cuts, many of which would negatively impact US research funding, which is already at an alarmingly low level.
My response to these concerns is that it is our job as scientists to convince fellow citizens how important science is, despite its limitations and flaws. The fact that scientists recognize the uncertainties and limitations of scientific knowledge is not a weakness, but a strength of the scientific approach and makes it ideally suited to help us understand our world. Enabling a false mystique of science as being definitive and perfect is not going to benefit science or society in the long run. Instead, recognizing our failings and limitations in science and openly discussing them with our fellow citizens is going to help us improve how we conduct science. I think that anyone who carefully looks at Caravaggio’s “imperfect” painting eventually sees its beauty and falls in love with it. I hope that we scientists will be able to share the Caravaggio view of science with the general public.
Image Credits: Painting Basket of Fruit by Caravaggio via Wikimedia Commons
Ecology’s Image Problem
“There are Tories in science who regard imagination as a faculty to be avoided rather than employed. They observe its actions in weak vessels and are unduly impressed by its disasters” —John Tyndall, 1870
In his 1881 essay on Mental Imagery, Francis Galton noted that few Fellows of the Royal Society or members of the French Institute, when asked to do so, could imagine themselves sitting at the breakfast-table from which presumably they had only recently arisen. Members of the general public, women especially, fared much better, being able to conjure up vivid images of themselves enjoying their morning meal. From this Galton, an anthropologist, noted polymath, and eugenicist, concluded that learned men, bookish men, relying as they do on abstract thought, depend on mental images little, if at all.
In this rejection of a scientific role for the imagination, Galton was in disagreement with the Irish physicist John Tyndall, who in an 1870 address to the British Association in Liverpool, entitled The Scientific Use of the Imagination, claimed that in explaining sensible phenomena, scientists habitually form mental images of that which is beyond the immediately sensible. “Newton’s passage from a falling apple to a falling moon”, Tyndall wrote, “was, at the outset, a leap of the prepared imagination.” The imagination, Tyndall claimed, is both the source of poetic genius and an instrument of discovery in science.
The role of the imagination in chemistry is well enough known. In 1890 the German Chemical Society celebrated the discovery by Friedrich August Kekulé von Stradonitz of the structure of benzene, a ring-shaped aromatic hydrocarbon. At this meeting Kekulé related that the structure of benzene came to him in a reverie of a snake seizing its own tail (the ancient symbol called the Ouroboros).
Since this is quite a celebrated case of the scientific use of the imagination I quote Kekule’s account of the events in full:
“During my stay in Ghent, Belgium, I occupied pleasant bachelor quarters in the main street. My study, however, was in a narrow alleyway and had during the day time no light. For a chemist who spends the hours of daylight in the laboratory this was no disadvantage. I was sitting there engaged in writing my text-book; but it wasn't going very well; my mind was on other things. I turned my chair toward the fireplace and sank into a doze. Again the atoms were flitting before my eyes. Smaller groups now kept modestly in the background. My mind's eye, sharpened by repeated visions of a similar sort, now distinguished larger structures of varying forms. Long rows frequently close together, all in movement, winding and turning like serpents! And see! What was that? One of the serpents seized its own tail and the form whirled mockingly before my eyes. I came awake like a flash of lightning. This time also [he had had fruitful dreams before] I spent the remainder of the night working out the consequences of the hypothesis. If we learn to dream, gentlemen, then we shall perhaps find truth…” Berichte der deutschen chemischen Gesellschaft, 1890, 1305–1307 (in Libby 1922).
In supporting his argument about the positive role of the imagination John Tyndall quoted Sir Benjamin Brodie, the chemist, who wrote that the imagination (”that wondrous faculty”) when it is “properly controlled by experience and reflection, becomes the noblest attribute of man”. Brodie cautioned, however, that the imagination when “left to ramble uncontrolled, leads us astray into a wilderness of perplexities and errors…”
The philosopher Virgil Aldrich provided an interesting example of how imagination can be a hindrance to science. Sir Arthur Stanley Eddington, the English astrophysicist, referred frequently, according to Aldrich, to “the world outside us”. Consciousness, in contrast, can be described as being “inside of us.” In using such images Eddington was, said Aldrich, “under the spell of the telephone-exchange analogy”: where the nerve endings leave off, the world beyond us takes over. If the telephone-exchange image seems ill-chosen, the image, after all, could be worse. One might imagine inner consciousness as a submarine, and from our berth within it we come to know the outside world by means of a periscope! Now, Eddington did not use this image (others did), but when we try to make sense of it we can do so only by saying that inner consciousness is like a submarine only when one supposes that it is nothing at all like a submarine. One must “tone down the analogy” to make it useful. If you do otherwise, “the lively imagination begins to protest”. Aldrich speculated that theorists persist with inept picture-making because, when toned down, the image often appears illuminating even when it is not. Moreover, a flashy image is entertaining. Thus one can easily make the “pleasant mistake” of identifying the image with the “real meaning” of an assertion.
A strength of environmental disciplines is that they bring into proximity bodies of knowledge that are often set apart. Though some quibble with him on this, the historian of ecology Donald Worster places both Charles Darwin, the philosophical scientist, and Henry David Thoreau, the scientific philosopher, at the ground of ecology as a natural scientific discipline. And though it is fair to say that ecology has maintained an identity largely separate from the environmentalisms it has inspired, ecology and environmentalisms have nevertheless been good conversation partners. Both have listened to an admirable degree to their poets, artists and philosophers. A good thing this may be in many ways, but my contention here is that the environmental sciences and the practices associated with them — environmentalisms like sustainability — are prone to taking their most arresting images too literally. I wonder if there is not in environmental thought a pathology of the imagination? Too readily, it seems, we transform a provocative image into a proven hypothesis; we smuggle ancient and baffling worldviews into contemporary conceptions of nature.
I sketch a few examples here to illustrate the case. Perhaps you will have ones that you can add.
Nature as an Organism
You are justified in calling Nature your Mother if you have a mother who wants you dead. A Mother who inculcated both your limitations and your accomplishments. Nature: A Mother who birthed a world equipped with tooth and nail and hungry eye; whose family tie is the ripping of flesh. Why, I wonder, are we quick to demand of God an explanation of evil but incline less to asking that question of Mother Nature?
To call Nature our mother is just one manifestation of the image of the Earth as organism. It is enduring, compelling and surely wrong-footing.
University of Wisconsin historian Frank N. Egerton traces the myth of cosmos as organism back to Plato. Timaeus asked “In the likeness of what animal did the Creator make the world?” He then speculated as follows: “For the Deity, intending to make this world like the fairest and most perfect of intelligible beings, framed one visible animal comprehending within itself all other animals of a kindred nature.” Because of Plato’s fateful influence on the history of western thought, Egerton noted that the implications of this myth have been enduring. According to Egerton the myth is the source of two related concepts “the supraorganismic balance-of-nature concept and the microcosm-macrocosm concept.” The supraorganismic concept views the cosmos as having the attributes of a living thing whereas the microcosm-macrocosm concept takes different parts of the universe to correspond with an organismal body.
Both flavors of the organismal concept get expressed in ecosystem ecology. Natural ecosystems, the influential University of Georgia ecologist Eugene Odum asserted, are integrated wholes, and they develop in a manner that parallels the development of individual organisms or human societies. The development of natural systems, ecological succession in other words, is orderly, predictable, and directional. It leads, in Odum’s view of things, to a stabilized ecosystem with predictable ratios of biomass, productivity, respiration and so forth. The “strategy” of ecosystem development, as Odum called it, corresponds to the “strategy” for the long-term evolutionary development of the biosphere – “namely, increased control of, or homeostasis with, the physical environment in the sense of achieving maximum protection from its perturbations.” Homeostasis derives etymologically from the Greek for “standing still” and, in the sense that Odum meant to imply, indicates a dynamic and regulated stability. In other words, the stability of the organism.
Odum does not stand here accused of covertly importing the organismal image into his work; he was quite explicit about it. There is much to admire in Odum’s work and the ecology that he inspired, but the sense of design and purpose that it implied in nature (what philosophers call teleology) put Odum's ecosystem ecology at loggerheads with contemporary evolutionary theory which insists on the purposelessness of nature. It has taken quite some time to reconcile ecosystem thought with evolutionary theory.
Another example of the superorganism’s baleful influence can be found in the Gaia hypothesis. In his preface to Gaia: A New Look at Life on Earth (1979) Lovelock wrote:
“The concept of Mother Earth or, as the Greeks called her long ago, Gaia, has been widely held throughout history and has been the basis of a belief which still coexists with the great religions."
If the development of James Lovelock and Lynn Margulis’s Gaia hypothesis is anything to go by, hypotheses about the workings of nature derived from the organismal image have a shelf life of a decade or so. Lovelock’s Gaia: A New Look at Life on Earth was published in 1979, and he rescinded the teleological claims of the Gaia hypothesis by 1988 in his book Ages of Gaia — or at least he became attentive to the problems that the superorganism concept created. He still maintained that the Earth’s atmosphere is homeostatically regulated, but he admitted to having been led astray by the sirens of the superorganism.
It is a banality of the ecological sciences to state that everything is connected. That ebullient Scot, and eventual stalwart of the American wilderness movement, John Muir, provided the image. He wrote, "When we try to pick out anything by itself, we find it hitched to everything else in the universe."
And if such statements are employed to sponsor the notion that individual organisms cannot be regarded in isolation from those they consume and those that consume them, or, furthermore, that as a consequence of the deep intersections of the living and the never-alive there can be unforeseen consequences flowing from species additions to or removals from ecosystems, then few would argue with this. However, just as the ripples of a stone dropped in a still pond propagate only as far as its edges (though they may entrain delightful patterns in the finest of its marginal sands), not every ecological event has intolerably large costs to exact. True, if the dominoes line up and the circumstances are just so, a butterfly’s wing beat over the Pacific may hurl a typhoon against its shores, but more often than not such lepidopterous catastrophes do not come to pass.
Ecosystems, energized so that matter cycles and conjoins the living with the dead, have their lines of demarcation, borders defined by their internal interactions being more powerful than their external ones. They are therefore buffered against many potentially contagious disasters. This, of course, is the essence of resilience - the capacity of a system to absorb disturbance without disruption to habitual structure and function. Ecology is as much the science investigating the limits of connections as it is the thought that everything is connected.
The Community Concept
Is there a greater 20th-century American environmental thinker than Aldo Leopold? Certainly there are few who provided as many genuinely poetic images: in the eyes of a dying wolf he saw “a fierce green fire”, he exhorted us to “think like a mountain”, he depicted the crane as “wilderness incarnate”. For all of that, has Leopold not led us astray with the images associated with his “ethical sequence”? Leopold’s influential land ethic “enlarges the boundaries of the community concept.” The ethical sequence that he proposed progresses stutteringly from free men, to women, to slaves, to animals, plants, rocks and land. It has a compelling lucidity. Leopold admitted, however, that it seems a little too simple. The ethic invites us into community with the land. A person’s self-image will change under a land ethic: “In short,” Leopold writes, “a land ethic changes the role of Homo sapiens from conqueror of the land-community to plain member and citizen of it.”
Now, Leopold is a subtle thinker and knows not to confuse the image with the thing. Certainly he expected this transformation to take quite some time. The land ethic would not emerge without “an internal change in our intellectual emphases, loyalties, affections, and convictions.” I have little problem with the image of extending the ethical circle, other than noting that it makes the task seem easier than it has proven to be. My more serious objection concerns the rather thin notion of community implied in Leopold’s image of the plain citizen. As the environmental philosopher William Jordan III has illustrated in his book The Sunflower Forest (2003), missing from Leopold’s account is any acknowledgment of the negative elements of the human experience of community: envy, selfishness, fear, hatred, and shame. As Jordan pointed out, this leads Leopold and others to “a sentimental, moralizing philosophy that…insists on the naturalness of humans…but that neglects or downplays the radical difficulty of achieving such a sense of self, and also downplays the role of culture and cultural institutions in carrying out this work.” If Leopold’s image of the community and our place within it is an impoverished one, the work of extending the circle becomes impossible.
There are other images that we might have discussed here, ones that have had, at times at least, unfortunate implications for environmental thinking. For instance, in 1864 George Perkins Marsh wrote that mankind is disruptive, not just occasionally, mind you, but “is everywhere a disturbing agent.” One hundred years later the Wilderness Act renewed the image in its definition of wilderness as an area “untrammeled by man.” We might also have considered contemporary accounts of social-ecological systems, in which these systems are posited as a compound substance, yet in depicting them we tease the components apart again.
So, if environmental thought and ecological science has been susceptible to what my colleague and friend Professor David Wise of University of Illinois, Chicago, has called “malicious metaphors”, is there a more productive way to think about the role of the image in developing environmental thought?
The work of the French philosopher Gaston Bachelard (1884–1962) — one of the more lovable of the French phenomenologists, certainly the hairiest — is helpful in sorting out a productive role for the imagination in science. He was renowned for his work on epistemological issues in science as well as for his phenomenological account of the poetic image and his philosophical meditation on reverie. As much as he was a materialist in his approach to science, he was subjective and personal (as a matter of theoretical orientation) in his philosophical work on the imagination.
Bachelard’s work at first glance is so inviting. Chapters in his book The Poetics of Space (1958) have enticing titles like The House from Cellar to Garret, Nests, and Shells. Perhaps this is why the book is a philosophic bestseller. My copy claims “more than 80,000 copies sold”. And though opening a Bachelard book is indeed like relaxing into a warm bath, there is an astringent in those waters. The thought is somewhat obscure, as Bachelard ransacks the lexicon of the various disciplines he brings together in his work: Kantian philosophy, Husserlian phenomenology, Jungian psychoanalysis, and so on. Oftentimes his use of technical terms was novel; reinterpreting them, Bachelard pushed them into new service. Because of this density, I wonder how many of those 80,000 copies have languished on bookshelves? Mine certainly did until the past few weeks.
To enjoy the fruits of Bachelard’s insights we should do at least some of the work of appreciating how he produced them. In the hope that this will embolden you to return to your copy of The Poetics of Space, or other works by Bachelard on the imagination, or pick them up for the first time, I will give a summary, as best I understand it, of what his phenomenology of the image is all about. I am, I should tell you, strictly an amateur Bachelardian.
The poetic image is eruptive for both poet and reader. Bachelard says that for its creation “the flicker of the soul is all that is needed.” So, every great image is its own origin. Famously, Bachelard maintained that the imagination, contrary to the view of many philosophical accounts, is “the faculty of deforming images offered by perception.” The poetic image emerges into consciousness as a direct product of “the heart, soul and being of man.” Elsewhere Bachelard claims “the imagination [is] a major power of human nature.”
The poetic image is therefore not caught up in a network of causalities. Our first recourse should not be to ask what archetypes an image represents, or what aspects of the poet’s psycho-biography explains it away. In this assertion Bachelard remains true to phenomenology’s maxim of going “back to the things themselves.” In as much as such things are possible, one approaches the poetic image freed from all presuppositions.
So it is of secondary importance to ask where an artistic image comes from; what matters more is to explore what opportunities for freedom an image creates. Instead of cause and effect, at whose center point we traditionally ask the image to stand, we might speak of the “resonances and reverberations” of the image. This is not, I think, just some fanciful softening of language; it is a necessary acknowledgment of the way in which an image does not simply reflect a memory but revives an absent one, and the way in which an image explodes into images. When we read the poetic image it resonates; when we communicate it, it reverberates. The repercussions of the image, said Bachelard, “invite us to give greater depth to our own existence.” What bearing does an image have on our freedom? A great piece of art, Bachelard says, “awakens images that have been effaced, at the same time that it confirms the unforeseeable nature of speech. And if we render speech unforeseeable, is this not an apprenticeship to freedom?”
I propose that Gaston Bachelard’s phenomenological account of the poetic image, despite its somewhat unpromising obscurity, is helpful in addressing environmental thought’s special porousness to striking images. In this short sketch I cannot fully substantiate the claim. I will end, however, with an example where an approach such as Bachelard’s seems to have been fruitful.
Tim Morton is one of the most widely read and exciting environmental writers of recent years. As far as I know, he has not cited Bachelard as a methodological inspiration, although his work is phenomenological and existential. [Added: One of Morton's earlier books, on the representation of the spice trade in Romantic literature, was entitled The Poetics of Spice (2006) - making him, it would seem, an explicit Bachelardian after all!] Morton is so concerned about the potential of sedimented ideas leading us into Sir Benjamin Brodie’s “wilderness of perplexities and errors” that he elected to drop the term “Nature” altogether. In his book Ecology Without Nature (2007) he explained the problem: “…the idea of nature is getting in the way of properly ecological forms of culture, philosophy, politics, and art.”
The results of Morton’s analysis lead us to strange, perplexing, though ultimately interesting places. Out of this natureless ecology comes a suite of insights on “dark ecology”, an ecology reminding us that we are always already implicated in the ecological. There is no outside from which we get a guilt-free view of the fantastic mess. Deriving also from an ecology developed without a sentimental view of nature comes a fresh analysis of connectedness. Morton revives Muir’s hitching image but this time its resonances are weirder than the oceanic feeling that we are all blissfully in this together. His analysis gives us the queer bestiary of “strange strangers” with which we are stickily intimate, and yet we can never fully get to know. Morton develops this account in The Ecological Thought (2010) which I recommend to you. I am not supposing that this is an adequate summary of Morton’s recent books, but I think that Tim is converging on the idea of resonances and reverberations that Bachelard has written about.
The image, and the imagination, can play a positive role in environmental thinking. Darwin’s image of the “tangled bank” is both a pretty and useful way of thinking about the way in which the organismal profusion developed from a common ancestor. But a misapplied image can be a disaster. Understanding our responsibilities with respect to the image is the work of the future, it is the work that will birth the future.
Walter Libby, “The Scientific Imagination”, The Scientific Monthly, Vol. 15, No. 3 (Sep. 1922), pp. 263–270
Monday, January 07, 2013
A Parched Future: Global Land and Water Grabbing
by Jalees Rehman
“This is the bond of water. We know the rites. A man’s flesh is his own; the water belongs to the tribe.” Frank Herbert - Dune
Land grabbing refers to the large-scale acquisition of comparatively inexpensive agricultural land in foreign countries by foreign governments or corporations. In most cases, the acquired land is located in under-developed countries in Africa, Asia or South America, while the grabbers are investment funds based in Europe, North America and the Middle East. The acquisition can take the form of an outright purchase or a long-term lease, ranging from 25 to 99 years, that gives the grabbing entity extensive control over the acquired land. Proponents of such large-scale acquisitions have criticized the term “land grabbing” because it carries the stigma of illegitimacy and conjures up images of colonialism or other forms of unethical land acquisition that were so common in the not-so-distant past. They point out that land acquisitions by foreign investors are made in accordance with local laws and that the investments could create jobs and development opportunities in impoverished countries. However, recent reports suggest that these land acquisitions are indeed “land grabs”. NGOs and not-for-profit organizations such as GRAIN, TNI and Oxfam have documented the disastrous consequences of large-scale land acquisitions for local communities. More often than not, the promised jobs are not created, and families that had been farming the land for generations are evicted from their ancestral land and lose their livelihood. The money provided to the government by the investors frequently disappears into the coffers of corrupt officials, while the evicted farmers receive little or no compensation.
One aspect of land grabbing that has received comparatively little attention is the fact that land grabbing is invariably linked to water grabbing. When the newly acquired land is used for growing crops, it requires some combination of rainwater (referred to as “green water”) and irrigation from freshwater resources (referred to as “blue water”). The amount of required blue water depends on the rainfall in the grabbed land. For example, land that is grabbed in a country with heavy rainfall, such as Indonesia, may require very little irrigation and tapping of its blue water resources. The link between land grabbing and water grabbing is very obvious in the case of Saudi Arabia, which used to be a major exporter of wheat in the 1990s, when there were few concerns about the country’s water resources. The kingdom provided water at minimal cost to its heavily subsidized farmers, resulting in very inefficient water usage. Instead of the global average of 1,000 tons of water per ton of wheat, Saudi farmers used between 3,000 and 6,000 tons of water. Fred Pearce describes the depletion of the Saudi water resources in his book The Land Grabbers:
Saudis thought they had water to waste because, beneath the Arabian sands, lay one of the world’s largest underground reservoirs of water. In the late 1970s, when pumping started, the pores of the sandstone rocks contained around 400 million acre-feet of water, enough to fill Lake Erie. The water had percolated underground during the last ice age, when Arabia was wet. So it was not being replaced. It was fossil water— and like Saudi oil, once it is gone it will be gone for good. And that time is now coming. In recent years, the Saudis have been pumping up the underground reserves of water at a rate of 16 million acre-feet a year. Hydrologists estimate that only a fifth of the reserve remains, and it could be gone before the decade is out.
Saudi Arabia responded to this depletion of its water resources by deciding to gradually phase out all wheat production. Instead of growing wheat in Saudi Arabia, it would import wheat from African farmlands that were leased and operated by Saudi investors. This way, the kingdom could conserve its own water resources while using African water resources for the production of the wheat that would be consumed by Saudis.
The recent study “Global land and water grabbing” published in the Proceedings of the National Academy of Sciences (2013) by Maria Rulli and colleagues examined how land grabbing leads to water grabbing and can deplete the water resources of a country. The basic idea is that when the grabbed land is irrigated, the use of freshwater resources reduces the availability of irrigation water for neighboring farmland areas, i.e. the areas that have not been grabbed. This in turn can cause widespread water stress and affect the ability of other farmers to grow crops, ultimately leading to poverty and social unrest. Land grabbing is often shrouded in secrecy since local governments do not want to be perceived as selling off valuable land to foreigners, but some details regarding the size of the land grab are eventually made public. The associated water needs of the investors that grab the land are even less clear and very little is publicly divulged about how the land grabbing will affect the water availability for other farmers. In the case of Sudan, for example, grabbed land is often located on the fertile banks of the Blue Nile and while large-scale commercial farmland is expanding as part of the foreign investments, local farmers are losing access to land and water and gradually becoming dependent on food aid, even though Sudan is a major exporter of food produced by the large-scale farms.
Using the global land grabbing database of GRAIN and the Land Matrix Database, Rulli and colleagues analyzed the extent of land grabbing and identified the Democratic Republic of Congo (8.05 million hectares), Indonesia (7.14 million hectares), the Philippines (5.17 million hectares), Sudan (4.69 million hectares) and Australia (4.65 million hectares) as the five countries in which the largest areas of land have been grabbed by foreign investors. The total amount of grabbed land in these five countries is 29.7 million hectares, accounting for nearly 63% of global land grabbing. To put this in perspective, the land area of the United Kingdom is 24.4 million hectares.
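The aggregate figures quoted above can be checked with a quick back-of-envelope calculation. This is a minimal sketch using only the per-country areas stated in the text; the implied global total is derived from the stated 63% share, not taken directly from the Rulli et al. paper.

```python
# Per-country grabbed land areas (millions of hectares), as quoted in the text
grabbed_mha = {
    "Democratic Republic of Congo": 8.05,
    "Indonesia": 7.14,
    "Philippines": 5.17,
    "Sudan": 4.69,
    "Australia": 4.65,
}

# Sum of the top five grabbed areas
top_five_total = sum(grabbed_mha.values())
print(f"Top five total: {top_five_total:.1f} Mha")  # 29.7 Mha, as stated

# If these 29.7 Mha represent ~63% of global land grabbing,
# the implied global total follows directly:
implied_global = top_five_total / 0.63
print(f"Implied global land grab: {implied_global:.1f} Mha")  # ~47 Mha
```

That implied global figure of roughly 47 million hectares is nearly twice the land area of the United Kingdom cited above.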
The researchers calculated the amount of rainfall (green water) on the grabbed land, which represents the minimum amount of water grabbed with the acquisition of the land. However, since the grabbed land is also used for agriculture and many crops require additional freshwater irrigation (blue water), the researchers also determined a range of predicted blue water grabbing for land irrigation. For the low end of the range, they assumed that the land would be irrigated in the same fashion as other agricultural land in the country. For the high end, they calculated how much blue water would be grabbed if the investors irrigated the land so as to maximize its agricultural production. This is not an unreasonable assumption, since foreign investors probably have the financial resources to irrigate the acquired land in a manner that maximizes the return on their investment.
Rulli and colleagues estimated that global land grabbing is associated with the grabbing of 308 billion m3 of green water (i.e. rain water) per year and an additional grabbing of blue water that can range from 11 billion m3 (current irrigation practices) to 146 billion m3 (maximal irrigation) per year. Again, to put these numbers in perspective, the average daily household consumption of water in the United Kingdom is 150 liters (0.15 m3) per person. This results in a total annual household consumption of 3.5 billion m3 (0.15 m3 X 365 days X 63,181,775 UK population) of water in the UK. Therefore, the total household water consumption in the UK is only a fraction of the predicted blue water usage of the grabbed land, even under very conservative estimates of required irrigation.
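As a quick sanity check, the UK comparison above can be reproduced from the figures given in the text:

```python
# Reproducing the article's back-of-the-envelope UK water comparison.
daily_per_person_m3 = 0.15          # 150 liters per person per day
uk_population = 63_181_775

annual_uk_household_m3 = daily_per_person_m3 * 365 * uk_population
print(f"{annual_uk_household_m3 / 1e9:.2f} billion m3")  # 3.46 billion m3

# Estimated annual blue water grabbing range from the study:
blue_water_low_m3 = 11e9    # current irrigation practices
blue_water_high_m3 = 146e9  # maximal irrigation

# Even the low estimate exceeds total UK household consumption ~3-fold;
# the high estimate exceeds it more than 40-fold.
print(f"{blue_water_low_m3 / annual_uk_household_m3:.1f}")   # 3.2
print(f"{blue_water_high_m3 / annual_uk_household_m3:.1f}")  # 42.2
```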
The researchers then also list the top 25 countries in which the investors are based that engage in land and water grabbing. They find that about “60% of the total grabbed water is appropriated, through land grabbing, by the United States, United Arab Emirates, India, United Kingdom, Egypt, China, and Israel”. The researchers gloss over the fact that in many cases, land and associated water resources are grabbed by foreign investment groups and not by foreign governments. Just because certain investment funds are based in Singapore, the UK or the United Arab Emirates does not mean that these countries are “appropriating” the land or water. In fact, many investment groups that are involved in land grabbing may have multinational investors or investors whose nationality is not disclosed. Nevertheless, there are probably cases in which land and water grabbing are not merely conducted as a form of private investment, but might involve foreign governments. One such example is the above-mentioned case of Saudi Arabia, in which the Saudi government actively encouraged and helped Saudi investors to acquire agricultural land in Africa. While perusing the list of the top 25 countries in which land and water grabbing investors are based, one cannot help but notice that the list contains a number of Middle Eastern countries that are themselves experiencing severe water stress and scarcity, such as Saudi Arabia, Qatar, the United Arab Emirates or Israel. Transferring their water burden to Africa by acquiring agricultural land would allow them to preserve their own water resources and may indeed be of strategic value to these countries. However, the precise degree of government involvement in these investment decisions often remains unclear.
The paper by Rulli and colleagues is an important reminder of how land grabbing and water grabbing are entwined and that land grabbing could potentially deplete valuable water resources of under-developed countries, especially in Africa, which accounts for more than half of the globally grabbed land. Even villagers who continue to own and farm their own land adjacent to the large-scale farms on grabbed lands could be affected by new forms of water stress, especially if the foreign investors decide to maximally irrigate the acquired land. There are some key limitations to the study, such as the lack of distinction between private foreign investors and foreign governments engaged in land grabbing, and the fact that all the calculations of blue water grabbing are based on very broad estimates without solid data on how much blue water is actually consumed by the grabbed lands. These numbers may be very difficult to obtain, but should be the focus of future studies in this area.
After reading this study, I have become far more aware of ongoing land and water grabbing. Excessive commodification of our lives was already criticized by Karl Polanyi in 1944, and now that water is also becoming a “fictitious commodity”, we have to be extremely watchful of the consequences. Land grabbing has already taken place on an extensive scale. An interactive map based on the GRAIN database allows us to visualize the areas in the world that have been most affected by land grabbing since 2006, as well as where the foreign investors are located. The map shows that in recent years, Pakistan has emerged as one of the prime targets of land grabbing in Asia, while Sudan, South Sudan, Tanzania and Ethiopia are major targets of recent land grabbing in Africa. The world economic crisis and the recent food price crisis will likely increase the degree of land grabbing and associated water grabbing. The targets of land grabbing are often countries with fragile economies, widespread poverty and significant malnourishment.
As a global society, we have to ensure that people living in these countries do not suffer as a consequence of land grabbing deals. The recent “Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security” released by the FAO are an important step in the right direction, because they attempt to provide food security for all, even when large-scale land acquisitions occur. However, they do not specify water access and they are, as the title reveals, “voluntary”. It is not clear who will abide by them. Therefore, we also need a complementary approach in which clients of land grabbing investment funds ask the fund managers to abide by the FAO guidelines and to do their utmost to ensure food security and water access for the general population in grabbed lands. One specific example is that of the American retirement fund TIAA-CREF (Teachers Insurance and Annuity Association – College Retirement Equities Fund), which is one of the leading retirement providers for people who work in education, research and medicine. Investment in agriculture and land grabbing appears to be a priority for TIAA-CREF, but American educators or academics who use TIAA-CREF as their retirement fund could use their leverage to ensure socially conscientious investments. Even though land and water grabbing are becoming a major concern, the growing awareness of the problem may also result in solutions that limit their negative impact.
Image Credits: Wikimedia - Drought by Tomas Castelazo / Wikimedia - The Union of Earth and Water by Rubens
Monday, December 10, 2012
There Was No Couch: On Mental Illness and Creativity
by Jalees Rehman
The psychiatrist held the door open for me and my first thought as I entered the room was “Where is the couch?”. Instead of the expected leather couch, I saw a patient lying down on a flat operation table surrounded by monitors, devices, electrodes, and a team of physicians and nurses. The psychiatrist had asked me if I wanted to join him during an “ECT” for a patient with severe depression. It was the first day of my psychiatry rotation at the VA (Veterans Affairs Medical Center) in San Diego, and as a German medical student I was not yet used to the acronymophilia of American physicians. I nodded without admitting that I had no clue what “ECT” stood for, hoping that it would become apparent once I sat down with the psychiatrist and the depressed patient.
I had big expectations for this clinical rotation. German medical schools allow students to perform their clinical rotations during their final year at academic medical centers overseas, and I had been fortunate enough to arrange for a psychiatry rotation in San Diego. The University of California, San Diego (UCSD) and the VA in San Diego were known for their excellent psychiatry program, and there was the added bonus of living in San Diego. Prior to this rotation in 1995, most of my exposure to psychiatry had taken the form of medical school lectures, theoretical textbook knowledge and rather limited exposure to actual psychiatric patients. This may have been part of the reason why I had a rather naïve and romanticized view of psychiatry. I thought that the mental anguish of psychiatric patients would foster their creativity and that they were somehow plunging from one existentialist crisis into another. I was hoping to engage in some witty repartee with the creative patients and to learn from their philosophical insights about the actual meaning of life. I imagined that interactions with psychiatric patients would be similar to those that I had seen in Woody Allen’s movies: a neurotic, but intelligent artist or author would be sitting on a leather couch and sharing his dreams and anxieties with his psychiatrist.
I quietly stood in a corner of the ECT room, eavesdropping on the conversations between the psychiatrist, the patient and the other physicians in the room. I gradually began to understand that “ECT” stood for “Electroconvulsive Therapy”. The patient had severe depression and had failed to respond to multiple antidepressant medications. He would now receive ECT, commonly known as electroshock therapy, a measure that was reserved for only very severe cases of refractory mental illness. After the patient was sedated, the psychiatrist initiated the electrical charge that induced a small seizure in the patient. I watched the patient’s arms and legs jerk and shake. Instead of participating in a Woody-Allen-style discussion with a patient, I had ended up in a scene reminiscent of “One Flew Over the Cuckoo's Nest”, a silent witness to a method that I thought was both antiquated and barbaric. The ECT procedure did not take very long, and we left the room to let the sedation wear off and give the patient some time to rest and recover. As I walked away from the room, I realized that my ridiculously glamorized image of mental illness was already beginning to fall apart on the first day of my rotation.
During the subsequent weeks, I received an eye-opening crash course in psychiatry. I became acquainted with DSM-IV, the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders, which was the sacred scripture of American psychiatry according to which mental illnesses were diagnosed and classified. I learned that ECT was reserved for the most severe cases, and that a typical patient was usually prescribed medications such as anti-psychotics, mood stabilizers or anti-depressants. I was surprised to see that psychoanalysis had gone out of fashion. Depictions of the USA in German popular culture and Hollywood movies had led me to believe that many, if not most, Americans had their own personal psychoanalysts. My psychiatry rotation at the VA took place in the mid-1990s, the boom time for psychoactive medications such as Prozac and the concomitant demise of psychoanalysis.
I found it exceedingly difficult to work with the DSM-IV and to appropriately diagnose patients. The two biggest obstacles I encountered were a) determining cause-effect relationships in mental illness and b) distinguishing between regular human emotions and true mental illness. The DSM-IV criteria for diagnosing a “Major Depressive Episode” included depressive symptoms such as sadness or guilt which were severe enough to “cause clinically significant distress or impairment in social, occupational, or other important areas of functioning”. I had seen a number of patients who were very sad and had lost their job, but I could not determine whether the sadness had impaired their “occupational functioning” or whether they had first lost their job and this had in turn caused profound sadness. Any determination of causality was based on the self-report of patients, and their memories of event sequences were highly subjective.
The distinction between “regular” human emotions and mental illness was another challenge for me, and the criteria in the DSM-IV manual seemed so broad that what I would have considered “sadness” was now being labeled as a Major Depression. A number of patients that I saw had severe mental illnesses such as depression, a condition so disabling that they could hardly eat, sleep or work. The patient who had undergone ECT on my first day belonged to that category. However, the majority of patients exhibited only some impairment in their sleep or eating patterns and experienced a degree of sadness or anxiety that I had seen in myself or my friends. I had considered transient episodes of anxiety or unhappiness as part of the spectrum of human emotional experience. The problem I saw with the patients in my psychiatry rotation was that they were not only being labeled with a diagnosis such as “Major Depression”, but were then prescribed antidepressant medications without any clear plan to ever take them off the medications. By coincidence, that year I met the forensic psychiatrist Ansar Haroun, who was also on faculty at UCSD and was able to help me with my concerns. Due to his extensive work in the court system and his rigorous analysis of mental states for legal proceedings, Haroun was an expert on causality in psychiatry as well as on the definition of what constitutes a truly pathological mental state.
Regarding the issue of causality, Haroun explained to me that the complexity of the mind and of mental states makes it extremely difficult to clearly define cause and effect relationships in psychiatry. In infectious diseases, for example, specific bacteria can be identified by laboratory tests as causes of a fever. The fever normally does not precede the bacterial infection, nor does it cause the bacterial infection. The diagnosis of mental illnesses, on the other hand, rests on subjective assessments of patients and is further complicated by the fact that there are no clearly defined biological causes or even objective markers of most mental illnesses. Psychiatric diagnoses are therefore often based on patterns of symptoms and a presumed causality. If a patient exhibits symptoms of a depressed mood and has also lost his or her job during that same time period, psychiatrists then have to diagnose whether the depression was the cause of losing the job or whether the job loss caused depressive symptoms. In my limited experience with psychiatry and the many discussions I have had with practicing psychiatrists, it appears that the leeway given to psychiatrists to assess cause-effect relationships may result in an over-diagnosis of mental illnesses or an over-estimation of their impact.
I also learnt from Haroun that the question of how to address the distinction between the spectrum of “regular” human emotions and actual mental illness had resulted in a very active debate in the field of psychiatry. Haroun directed me towards the writings of Thomas Szasz, who was a brilliant psychiatrist but also a critic of psychiatry, repeatedly pointing out the limited scientific evidence behind diagnoses of mental illness. Szasz’s book “The Myth of Mental Illness” was first published in 1960 and challenged the foundations of modern psychiatry. One of his core criticisms of psychiatry was that his colleagues had begun to over-diagnose mental illnesses by blurring the boundaries between everyday emotions and true diseases. Every dis-ease (discomfort) was being turned into a disease that required a therapy. The reasons for this overreach by psychiatry were manifold, ranging from society and the state trying to regulate what counted as acceptable or normal behavior, to psychiatrists and pharmaceutical companies that would benefit financially from the over-diagnosis of mental illness. An excellent overview of his essays can be found in his book “The Medicalization of Everyday Life”. Even though Szasz passed away earlier this year, psychiatrists and researchers are now increasingly voicing their concerns about the direction that modern psychiatry has taken. Allan Horwitz and Jerome Wakefield, for example, have recently published “The Loss of Sadness: How Psychiatry Transformed Normal Sorrow into Depressive Disorder” and “All We Have to Fear: Psychiatry's Transformation of Natural Anxieties into Mental Disorders”. Unlike Szasz, who went as far as denying the existence of mental illness, Horwitz and Wakefield have taken a more nuanced approach. They accept the existence of true mental illnesses, admit that these illnesses can be disabling, and acknowledge that patients afflicted by mental illnesses do require psychiatric treatment.
However, Horwitz and Wakefield criticize the massive over-diagnosis of mental illness and point out the need to distinguish true mental illnesses from normal sadness and anxiety.
Before I started my psychiatry rotation in San Diego, I had been convinced that mental illness fostered creativity. I had never really studied the question in much detail, but there were constant references in popular culture, movies, books and TV shows to the creative minds of patients with mental illness. The supposed link between mental illness and creativity was so engrained in my mind that the word “psychotic” automatically evoked images of van Gogh’s paintings and other geniuses whose creative minds were fueled by the bizarreness of their thoughts. Once I began seeing psychiatric patients who truly suffered from severe disabling mental illnesses, it became very difficult for me to maintain this romanticized view of mental illness. People who truly suffered from severe depression had difficulties even getting out of bed, getting dressed and meeting their basic needs. It was difficult to envision someone suffering from such a disabling condition being able to write large volumes of poetry or to analyze the data from ground-breaking experiments. The brilliant book “Creativity and Madness: New Findings and Old Stereotypes” by Albert Rothenberg helped me understand that the supposed link between creativity and mental illness was primarily based on myths, anecdotes and a selection bias in which the creative accomplishments of patients with mental illness were glorified and attributed to the illness itself. Geniuses who suffered from schizophrenia or depression were not creative because of their mental illness but in spite of their mental illness.
I began to realize that the over-diagnosis of mental illness and the loose treatment of causality that had become characteristic of contemporary psychiatry also helped foster the myth that mental illness enhances creativity. Many beautiful pieces of literature or art can be inspired by emotional states such as the sadness of unrequited love or the death of a loved one. Creativity is often a response to a state of discomfort or dis-ease, an attempt to seek out comfort. However, if definitions of mental illness are broadened to the extent that nearly every such dis-ease is considered a disease, one can easily fall into the trap of believing that mental illness indeed begets creativity. With respect to establishing causality, Rothenberg found that, contrary to the prevailing myth, mental illness was actually a disabling condition that prevented creative minds from completing their artistic or scientific tasks. A few years ago, I came across “Poets on Prozac: Mental Illness, Treatment, and the Creative Process”, a collection of essays written by poets who suffer from mental illness. The personal accounts of most poets suggest that their mental illnesses did not help them write their poetry, but actually acted as major hindrances. It was only when their illness was adequately treated and they were in a state of remission that they were able to write poems. A recent comprehensive analysis of studies that attempt to link creativity and mental illness can be found in the excellent textbook “Explaining Creativity: The Science of Human Innovation” by Keith Sawyer, who concludes that there is no scientific evidence for the claim that mental illness promotes creativity. He also points to a possible origin of this myth:
The mental illness myth is based in cultural conceptions of creativity that date from the Romantic era, as a pure expression of inner inspiration, an isolated genius, unconstrained by reason and convention.
I assumed that the myth had finally been laid to rest, but, to my surprise, I came across the headline Creativity 'closely entwined with mental illness' on the BBC website in October 2012. The BBC story was referring to the large-scale Swedish study “Mental illness, suicide and creativity: 40-Year prospective total population study” by Simon Kyaga and his colleagues at the Karolinska Institute, published online in the Journal of Psychiatric Research. The BBC news report stated “Creativity is often part of a mental illness, with writers particularly susceptible, according to a study of more than a million people” and continued:
Lead researcher Dr Simon Kyaga said the findings suggested disorders should be viewed in a new light and that certain traits might be beneficial or desirable.
For example, the restrictive and intense interests of someone with autism and the manic drive of a person with bipolar disorder might provide the necessary focus and determination for genius and creativity.
Similarly, the disordered thoughts associated with schizophrenia might spark the all-important originality element of a masterpiece.
These statements went against nearly all the recent scientific literature on the supposed link between creativity and mental illness and once again rehashed the tired, romanticized myth of the mentally ill genius. I was puzzled by these claims and decided to read the original paper. There was the additional benefit of learning more about the mental health of Swedes, because my wife is a Swedish-American. It never hurts to know more about the mental health or the creative potential of one’s spouse.
Kyaga’s study did not measure creativity itself, but merely assessed correlations between self-reported “creative professions” and the diagnoses of mental illness in the Swedish population. Creative professions included scientific professions (primarily scientists and university faculty members) as well as artistic professions such as visual artists, authors, dancers and musicians. The deeply flawed assumption of the study was that if an individual has a “creative profession”, he or she has a higher likelihood of being a creative person. Accountants were used as a “control”, implying that being an accountant does not involve much creativity. This may hold true for Sweden, but the creativity of accountants in the USA has been demonstrated by the recent plethora of financial scandals. The size of the Kyaga study was quite impressive, involving over one million patients and collecting data on the relatives of patients. The fact that Sweden has a total population of about 9.5 million and that more than one million of its adult citizens are registered in a national database as having at least one mental illness is both remarkable and worrisome.
The main outcome was the likelihood that patients with certain mental illnesses such as depression, schizophrenia or anxiety disorders were engaged in a “creative profession”. The results of the study directly contradicted the BBC hyperbole:
We found no positive association between psychopathology and overall creative professions except for bipolar disorder. Rather, individuals holding creative professions had a significantly reduced likelihood of being diagnosed with schizophrenia, schizoaffective disorder, unipolar depression, anxiety disorders, alcohol abuse, drug abuse, autism, ADHD, or of committing suicide.
Not only did the authors fail to find a positive correlation between creative professions and mental illnesses (with the exception of bipolar disorder), they actually found the opposite of what they had suspected: Patients with mental illnesses were less likely to engage in a creative profession.
Their findings do not come as a surprise to anyone who has been following the scientific literature on this topic. After all, the disabling features of mental illness make it very difficult to maintain a creative profession. Kyaga and colleagues also presented a contrived subgroup analysis, to test whether there was any group within the “creative professions” that showed a positive correlation with mental illness. It appears contrived because they broke down only the artistic professions and did not perform a similar analysis for the scientific professions. Among all these subgroup analyses, the researchers found a positive correlation between the self-reported profession ‘author’ and a number of mental illnesses. However, they also found that other artistic professions did not show such a positive correlation.
How the results of this study gave rise to the blatant misinterpretation reported by the BBC that “the disordered thoughts associated with schizophrenia might spark the all-important originality element of a masterpiece” is a mystery in itself. It shows the power of the myth of the mad genius and how myths and convictions can tempt us to misinterpret data in a way that maintains the mythic narrative. The myth may also be an important component in the attempt to medicalize everyday emotions. The notion that mental illness fosters creativity could make the diagnosis more palatable. You may be mentally ill, but don’t worry, because it might inspire you to paint like van Gogh or write poems like Sylvia Plath.
A study of the prevalence of mental illness published in the Archives of General Psychiatry in 2005 estimated that roughly half of all Americans will have been diagnosed with a mental illness by the time they reach the age of 75. This estimate was based on the DSM-IV criteria for mental illness, but the newer DSM-V manual will be released in 2013 and is likely to further expand the diagnosis of mental illness. The DSM-IV criteria had made an allowance for bereavement, to avoid diagnosing people with the mental illness depression when they were profoundly sad after the loss of a loved one. This bereavement exemption will likely be removed from the new DSM-V criteria, so that the diagnosis of major depression can be used even during the grieving period. The small group of patients who are afflicted with disabling mental illness do not find their suffering to be glamorous. There is a large number of patients who are experiencing normal sadness or anxiety and end up being inappropriately diagnosed with mental illness using broad and lax criteria of what constitutes an illness. Are these patients comforted by romanticized myths about mental illness? The continuing over-reach of psychiatry in its attempt to medicalize emotions, supported by the pharmaceutical industry that reaps large profits from this over-reach, should be of great concern to all of society. We need to wade through the fog of pseudoscience and myths to consider the difference between dis-ease and disease and the cost of medicalizing human emotions.
Image Credit: Wikimedia Commons Public Domain ECT machine (1960s) by Nasko and Self-Portrait of van Gogh.
Monday, August 20, 2012
The Rats of War: Konrad Lorenz and the Anthropic Shift
What we might remember most about the London 2012 Olympics are the medal ceremonies. The proud, the tearful, the exhausted, the awestruck, the lip-syncing, and occasionally the unimpressed. We might also call to mind the relative equanimity with which silver and bronze medalists tolerated the national anthems of the winning nation. Nobel laureate Konrad Lorenz (1903-1989), an Austrian zoologist and co-founder with Niko Tinbergen of the field of ethology – the biology of behavior – remarked in his popular book On Aggression (1966) that the Olympic Games are the only occasion when the playing of the anthem of another nation does not arouse hostility. Athletic ideals of fair play and chivalry, he said, balance out national enthusiasm. Olympic sports, you see, have all the virtues of war without all that unpleasant killing and plundering and, importantly, without aggravating international hatred. To serve as a surrogate for war, Olympic sports should be as dangerous as possible and should call for a measure of self-sacrifice. This being the case, one wonders why jousting is not an Olympic sport. Perhaps NBC simply chose not to screen it.
The destructive intensity of the aggressive drive that propels us to war is mankind’s hereditary evil, as Lorenz termed it, and its evolutionary origins can be sought in tribal conflict. In the early Stone Age, intra-tribal skirmishes would have paid out some evolutionary dividends: dispersion of the population, selection of the strong and, especially, defense of the brood. But in more contemporary times, now that we have overcome our most immediate environmental limitations, that is, we are for the most part neither starving nor prey items, and now that we are equipped with weapons, a more dangerous, indeed an “evil”, intra-specific selection prevails. What was once healthy for the species in the form of an instinctive behavior called “militant enthusiasm” has now turned pathological.
Lorenz’s analysis was based upon a lifetime spent studying a variety of animals, though he is especially known for his work on birds. Together with Tinbergen and other classical ethologists he proposed several important hypotheses: behaviors come in constellations of instinctive activities called fixed action patterns; these are released by specific stimuli; the behaviors should be regarded as adaptive responses shaped by evolutionary forces; and the adoption of certain behaviors can be phase-specific, occurring at certain life stages – for instance, imprinting, whereby young Graylag goslings instinctively follow their parents, even if the parent is substituted by Lorenz himself! When in 1973 Konrad Lorenz, Niko Tinbergen and Karl von Frisch were awarded the Nobel Prize in Physiology or Medicine for the development of ethology, it was recognized that they had created a new science. However, in addition to shedding light on the behavior of lower animals, it had implications for “social medicine, psychiatry, and psychosomatic medicine”. If this new discipline had no conceivable bearing on an understanding of the human condition, it is unlikely that the ethologists would have won a Nobel Prize.
Ethology’s shift from a basic zoological discipline to an applied one was not without controversy among its practitioners, some of whom wanted to restrict it to fundamentals for a more extended period. However, there is, it seems, a special, apparently inevitable, moment in works on animal behavior where the author switches from their account of chimps, bees, fishes, geese, rats or another favored organism and tells us what it means to be human. I call this the anthropic shift. The behavior of the human animal need not be an area of particular expertise for the author; the switch is presumed to be validated by the evolutionary continuity of humans with other animals.
An inclination toward an anthropic shift is anticipated in the work of Charles Darwin. Although the implications of natural selection for humans occupied Darwin for some time before the publication of On the Origin of Species (1859), nevertheless humans are scarcely mentioned in that volume. It took Darwin more than a decade before publishing his version of the anthropic shift which he eventually did in The Descent of Man, and Selection in Relation to Sex (1871) and in The Expression of the Emotions in Man and Animals (1872). One could call this the classic anthropic shift – the author waits a respectful period of time before pronouncing on human affairs.
There are some early attempts in Lorenz’s work to make the implications of his studies of the specific behavior of specific organisms apparent for humans, including, infamously, his attempts to reconcile his science with the aims of National Socialism (which I discuss here). It is in On Aggression, the work of Lorenz’s maturity, that we find the full flowering of his thoughts on human behavior and misbehavior. Although the book is dominated by observations of other animals, Lorenz reserves its final chapters for his assessment of human affairs. This version of the anthropic shift – the succinct but confident summary of the implications of the study of other animals for human affairs – is characteristic of our age, in which the scientist has lost all bashfulness in opining on human nature.
In what follows I summarize Lorenz’s diagnosis of the human condition, our current predicament, and the remedies he suggested, grounded in ethological principles. In his anthropic shift Lorenz is attentive to our aggressive tendencies, especially the instinctive behavior that he calls militant enthusiasm. If the lessons learned from an ethological inspection of lower animals are correctly applied, we might just be able to avert a global catastrophe. Some time soon, no doubt.
An unbiased observer from another planet, reflecting on human behavior from a perch close enough to capture the broad strokes of human conduct but far enough away not to sweat the details of our separate behaviors, would surmise that we are rats. Or so Lorenz concluded in On Aggression. The extraterrestrial would infer this based upon the observations that both rats and humans are “social and peaceful beings within their clans, but veritable devils towards all fellow-members of their species not belonging to their own communities.” Our Martian would have more optimism about the future of rats than humans, says Lorenz, since rats stop reproducing when a state of overcrowding is reached. We do not.
Lorenz provided an edifying, if somewhat chilling, account of rat group-on-group violence, much of which seemingly was worked out in experimental arenas. The work is mainly from one F. Steiniger and summarized by Lorenz. Steiniger found that when rats were introduced into an enclosure, aggression grew incrementally after a period of wariness. Once pair formation between male and female rats occurred, violence escalated, and within a couple of weeks a mated couple typically killed all other residents. Death often came to a rat in the form of peritoneal sepsis – a rat dies of a multitude of suppurating cuts. That being said, a skilled rat can deftly inflict a nip to the carotid artery. Exhaustion and nervous overstimulation leading to adrenal gland disruption were other leading causes of death among beleaguered rats.
The basis of most rat groups is the genetically related family – rat mothers, rat fathers, rat grandparents, rat siblings, and rat cousins all getting along in mutual accord. Tender and considerate are rats to members of their family group. Larger animals will, for example, “good humouredly allow smaller ones to take pieces of food away from them.” In matters of reproduction they’ll generously step aside and let “half- and three-quarter grown animals…take precedence of the adults.” Intruders, however, are not treated so solicitously; they are rapidly routed and killed by bites. Since rats identify family members by smell, the experimenter can manipulate the odor of an animal and turn a beloved family member into a threatening intruder. Grandpa had never been so bewildered. In one such experiment Lorenz assured the reader (though with a note of apology to the biologist who, one supposes, will want to view the spectacle to its ghastly end) that the experimental animal was spared this fate and removed into protective custody.
On viewing humans and rats, Lorenz’s extraterrestrial may find these species indistinguishable because aspects of their social behavior are so head-scratchingly difficult to fathom. Group hatred between rat clans and the human appetite for war seem inexplicable when viewed functionally. Because of the difficulty of deriving an evolutionary explanation for rat-on-rat attacks from the perspective of natural selection, Lorenz obliquely speculated that rat-clan gang fights are the outcome of sexual selection (selection based on differential mating success), where there is “grave danger that members of a species may in demented competition drive each other into the most stupid blind alley of evolution.” But Lorenz is equivocal here, conceding that unknown external factors may still be at work. “It is quite possible”, he concluded, that “group hate between rat-clans is really a diabolical invention which serves no good purpose.” That being said, he seems more confident that human group loyalty and generosity arose from tribal conflict. That rat and human tribes evolved cooperative tactics in the face of inter-group conflict, a group selection argument, has fallen out of favor with evolutionary biologists and is the basis for some of the criticism leveled at Lorenz. “The trouble with these books [the books of Lorenz and some other ethologists]”, Richard Dawkins fulminated in The Selfish Gene (1976), “is that their authors got it totally and utterly wrong because they misunderstood how evolution works”.
Humanity’s greatest paradox is that those gifts which we treasure above all others, our braininess and our capacity for speech, are the ones which may bring about our extinction. We have, says Lorenz, been driven “out of the paradise in which [we] could follow [our] instincts with impunity.” Our evolutionarily derived capacity for culture confers on humans a facility for rapid change. What we gained with this capacity outstripped the limited injunctions we have against employing it in those circumstances when we should not. Our aggravated competence in mayhem – aggression against others and destruction of the environment – is not sufficiently kept in check. A centerpiece of Lorenz’s claim, one that he repeats in several books, is that species which in the ordinary course of matters have a limited capacity to inflict damage on conspecifics have a correspondingly feeble inhibition against killing. When a dove is trapped with another dove, it has no phylogenetically derived compunction against gouging its peaceful neighbor to death. So it is with humans and their rapidly evolving capacity for mischief. We are like a dove that “suddenly acquired the beak of a raven”. We don’t know how to turn the killer off, because we’ve never really had to before.
Lorenz may not have been the first to formulate the thesis that although we are certainly of nature, subject to the same evolutionary laws as other species, we are yet spat out of nature as a consequence of the forces of cultural flexibility. Paul Sears, the American ecologist, wrote in a similar vein in the late 1950s: “With the cultural devices of fire, clothing, shelter, and tools [Man] was able to do what no other organism could do without changing its original character. Cultural change was, for the first time, substituted for biological evolution as a means of adapting an organism to new habitats in a widening range that eventually came to include the whole earth.”
Now the human aptitude for carnage may have swollen beyond the easy reach of our inhibitions, but that does not mean that such moral inhibitions do not exist. Nor does it mean that we cannot amplify them. Balancing our aggression against others is our capacity for love and forbearance within the clan. What Lorenz has in mind is not the coolly rational morality of a Kantian categorical imperative. (Lorenz was, by the by, one of the inheritors of Kant’s professorial chair at the University of Königsberg.) The love of which Lorenz speaks is a phylogenetically inherited moral regard for one another. The fate of humanity, Lorenz said, rests on whether this instinct can cope with “its growing burden.”
Manning the defensive walls alongside moral responsibility is our “phylogenetically programmed” love of custom. Institutionalized ritual and custom act like a skeleton around which a culture develops. Specific rituals are passed from generation to generation. Of course, custom can be irrational and may misfire, as it does in the case of “jeering at a fat boy” (Lorenz’s example). Grosser errors still can arise from customs associated with warrior culture, adaptive at one time but obsolete in present ecological and sociological circumstances.
Lorenz cautioned against the unconsidered elimination of cultural components, even in the case of “mild reciprocal head hunting” (apparently Margaret Mead’s term). This is because culture develops as an integrated whole. What is assembled together sunders together – so goes the theory. A possible source of cultural unraveling comes from the mixing of cultures. This was an argument that Lorenz had insisted upon since the 1930s, when he first pronounced it in a publication calculated to show a resonance between his work and National Socialism. At the time of receiving the Nobel Prize he apologized for his naivety, an apology that satisfied some colleagues but certainly not all. The argument remained intact in On Aggression. But in addition to the temptation to deliberately remove unfortunate cultural attributes, elements of culture were, as Lorenz saw it, unraveling under the influence of a break in the traditional intergenerational transmission of information. He dates an especially major shift to about 1900. After this, kids stopped listening to parents and teachers.
A detailed examination of the case of militant enthusiasm is the centerpiece of Lorenz’s anthropic shift. Enthusiasm, for short, is “a specialized form of communal aggression”, but this behavior interacts with culturally ritualized activities and thus may be controlled by rational insight. In other words, there is nothing we can do to ablate enthusiasm from our behavioral repertoire – the eye may still mist during the national anthem, yet Olympians are disinclined to jump each other. In fact, this is the nub of the matter: aggression is rooted so deep that it attaches to those things most dear to us. The conclusion is that man (Lorenz wrote at a time when “man” stood in unblushingly for all of humankind) is Janus-headed, with an evolutionarily endowed potential to commit to all sorts of noble things, yet ready to dispatch his brother for the sake of these same values.
Lorenz’s solutions to the problems of aggression, set out so elaborately in On Aggression, are disarmingly simple; banal, in fact, is his word for them. So simple that one senses he worried one might not, after all, have needed all that ethological labor to propose them. There are four solutions: know thyself, ethologically; cathartically sublimate the aggressive (and libidinous) drives; promote international friendship; and, most importantly, channel militant enthusiasm into just causes. En passant, he advises against the mere suppression of instincts, since aggression builds up hydraulically (an analogy that links Lorenz to Sigmund Freud); it cannot long be controlled. You may be glad to learn that eugenic planning is excluded as highly inadvisable. He is also enthusiastic about the role of humor in puncturing the pretensions of those who might lead us along false paths (“we do not as yet take humour seriously enough”).
In his roster of solutions, international sport figures prominently as an opportunity to discharge aggressive instincts. The discharge of that particular form of aggression, militant enthusiasm, can be achieved by redeploying it to causes as diverse as civil rights, the prevention of war (though not, admittedly, as appealing as war itself), and the “three great enterprises” of art, science, and medicine.
Lorenz ended On Aggression on a note of optimism. “I believe”, he wrote, “that reason can and will exert a selection pressure in the right direction. I believe that this, in the not too distant future, will endow our descendents with the faculty of fulfilling the greatest and most beautiful of all commandments.”
In 1975, when E. O. Wilson published his groundbreaking and controversial book Sociobiology: The New Synthesis, he predicted that ethology would simply be subsumed by sociobiology, behavioral ecology, neurophysiology, and psychology. In fact, by the time the ethologists won their Nobel Prize in 1973, the phase of classical ethology was over. So many of the foundational concepts of Lorenz and Tinbergen had fallen into disuse that later in his life a note of exasperation crept into Lorenz’s writing. Thus the apparatus with which Lorenz reached his conclusions was considered largely unnecessary by contemporary students of human behavior.
This does not mean that Lorenz was wrong. Few biologists would contradict the conclusion that aggression has an instinctive component and that an evolutionary understanding of aggression can contribute to solutions. Nor might many be averse to learning about the nature of war from rats. Nevertheless, extending ethology to humans with the confidence seen in Lorenz’s work might strike many as hubristic. Indeed, it is clear that Niko Tinbergen thought so; he remained more modest in his claims. But at the end of the day all anthropic shifts may be hubristic, even when such claims are accompanied by that most charming cousin of hubris: unbounded optimism.
It may be apparent to some readers of this piece that there exists an extravagant parallel between Lorenz’s On Aggression and E. O. Wilson’s new book The Social Conquest of Earth (2012). Like many writers of the anthropic shift, both have expertise in “lower organisms” (Wilson, famously, is an ant guy); both invoke a group selection hypothesis to explain altruism and loyalty within human tribes; both think that the aggression that leads to war is our hereditary curse (Wilson) or evil (Lorenz); both think that the better and lesser aspects of our natures are at war with one another; both have invoked the wrath of Richard Dawkins in almost identical fashion; both have unbridled optimism about the future, if only we will listen to them. This is not the place to explore these similarities, though I encourage you to read both books and, if you care to, join us in conversation about them (see here).
The anthropic shift, the compulsion to draw upon evolutionary insights from other organisms and bring them to bear on the human condition, is solid, it seems to me, and both Lorenz and Wilson have important things to say. Nevertheless, the zoological approach taken alone, without insights from the humanistic disciplines, from the social sciences that are committed directly to the study of humans, or from the arts, offers us quite little. After all, global events since Lorenz wrote On Aggression suggest that his formula was either unheeded or unworkable on a scale that matches the immensity of our problems. Wilson seems to acknowledge this, and makes enthusiastic noises about interdisciplinarity while also noting that pure philosophy has “abandoned the foundational questions about human existence.” The responses from both within and beyond his academic discipline nevertheless seem aggressively hostile to his latest attempt to save humankind. Jousting never looked more lethal.
[Note: I was given a copy of On Aggression by my mother as a requested Christmas gift when I was 19. It has, therefore, taken me 30 years to write about it. At this rate I'll have a piece of writing on Infinite Jest in 2042].
It’s been pointed out that doves do not in fact behave as Lorenz repeatedly asserted they do – that is, torture a neighbor to death when that unfortunate neighbor cannot escape.
Sears PB. 1957. The Ecology of Man. [Oregon State System of Higher Education, Condon Lectures.] Eugene, OR: University of Oregon Press.