Monday, March 30, 2015
STEM Education Promotes Critical Thinking and Creativity: A Response to Fareed Zakaria
by Jalees Rehman
All obsessions can be dangerous. When I read the title "Why America's obsession with STEM education is dangerous" of Fareed Zakaria's article in the Washington Post, I assumed that he would call for more balance in education. An exclusive focus on STEM (science, technology, engineering and mathematics) is unhealthy because students miss out on the valuable knowledge that the arts and humanities teach us. I would wholeheartedly agree with such a call for balance because I believe that a comprehensive education makes us better human beings. This is the reason why I encourage discussions about literature and philosophy in my scientific laboratory. To my surprise and dismay, Zakaria did not analyze the respective strengths of liberal arts education and STEM education. Instead, his article is laced with odd clichés and misrepresentations of STEM.
Misrepresentation #1: STEM teaches technical skills instead of critical thinking and creativity
If Americans are united in any conviction these days, it is that we urgently need to shift the country's education toward the teaching of specific, technical skills. Every month, it seems, we hear about our children's bad test scores in math and science — and about new initiatives from companies, universities or foundations to expand STEM courses (science, technology, engineering and math) and deemphasize the humanities.
"The United States has led the world in economic dynamism, innovation and entrepreneurship thanks to exactly the kind of teaching we are now told to defenestrate. A broad general education helps foster critical thinking and creativity."
Zakaria is correct when he states that a broad education fosters creativity and critical thinking, but his article portrays STEM as primarily focused on technical skills, whereas liberal education supposedly focuses on critical thinking and creativity. Zakaria's view is at odds with the goals of STEM education. As a scientist who mentors Ph.D. students in the life sciences and in engineering, my goal is to help our students become critical and creative thinkers.
Students learn technical skills such as how to culture cells in a dish, insert DNA into cells, use microscopes or quantify protein levels, but these technical skills are not the focus of the educational program. Learning a few technical skills is easy; the real goal is for students to learn how to develop innovative scientific hypotheses, be creative in designing experiments that test those hypotheses, learn how to be critical of their own results and use logic to analyze their experiments.
My own teaching and mentoring experience focuses on STEM graduate students, but the STEM programs that I have attended at elementary and middle schools also emphasize teaching basic concepts and critical thinking instead of "technical skills". The United States needs to promote STEM education because of the prevailing science illiteracy in the country, not because it needs to train technically skilled worker bees. Here are some examples of science illiteracy in the US: Forty-two percent of Americans are creationists who believe that God created humans in their present form within the last 10,000 years or so. Fifty-two percent of Americans are unsure whether there is a link between vaccines and autism, and six percent are convinced that vaccines can cause autism, even though there is broad consensus among scientists from all over the world that vaccines do NOT cause autism. And only sixty-one percent are convinced that there is solid evidence for global warming.
A solid STEM education helps citizens apply critical thinking to distinguish quackery from true science, benefiting their own well-being as well as society.
Zakaria's criticism of obsessing about test scores is spot on. The subservience to test scores undermines the educational system because some teachers and school administrators may focus on teaching test-taking instead of critical thinking and creativity. But this applies to the arts and humanities as well as the STEM fields because language skills are also assessed by standardized tests. Just like the STEM fields, the arts and humanities have to find a balance between teaching required technical skills (i.e. grammar, punctuation, test-taking strategies, technical ability to play an instrument) and the more challenging tasks of teaching students how to be critical and creative.
Misrepresentation #2: Japanese aren't creative
Zakaria's views on Japan are laced with racist clichés:
"Asian countries like Japan and South Korea have benefitted enormously from having skilled workforces. But technical chops are just one ingredient needed for innovation and economic success. America overcomes its disadvantage — a less-technically-trained workforce — with other advantages such as creativity, critical thinking and an optimistic outlook. A country like Japan, by contrast, can't do as much with its well-trained workers because it lacks many of the factors that produce continuous innovation."
Some of the most innovative scientific work in my own field of scientific research – stem cell biology – is carried out in Japan. Referring to Japanese as "well-trained workers" does not do justice to the innovation and creativity in the STEM fields and it also conveniently ignores Japanese contributions to the arts and humanities. I doubt that the US movie directors who have re-made Kurosawa movies or the literary critics who each year expect that Haruki Murakami will receive the Nobel Prize in Literature would agree with Zakaria.
Misrepresentation #3: STEM does not value good writing
Writing well, good study habits and clear thinking are important. But Zakaria seems to suggest that these are not necessarily part of a good math and science education:
"No matter how strong your math and science skills are, you still need to know how to learn, think and even write. Jeff Bezos, the founder of Amazon (and the owner of this newspaper), insists that his senior executives write memos, often as long as six printed pages, and begins senior-management meetings with a period of quiet time, sometimes as long as 30 minutes, while everyone reads the "narratives" to themselves and makes notes on them. In an interview with Fortune's Adam Lashinsky, Bezos said: "Full sentences are harder to write. They have verbs. The paragraphs have topic sentences. There is no way to write a six-page, narratively structured memo and not have clear thinking."
Communicating science is an essential part of science. Until scientific work is reviewed by other scientists and published as a paper it is not considered complete. There is a substantial amount of variability in the quality of writing among scientists. Some scientists are great at logically structuring their papers and conveying the core ideas whereas other scientific papers leave the reader in a state of utter confusion. What Jeff Bezos proposes for his employees is already common practice in the STEM world. In preparation for scientific meetings and discussions, scientists structure their ideas into outlines for manuscripts or grant proposals using proper paragraphs and sentences. Well-written scientific manuscripts are highly valued but the overall quality of writing in the STEM fields could be greatly improved. However, the same probably also holds true for people with a liberal arts education. Not every philosopher is a great writer. Decoding the human genome is a breeze when compared to decoding certain postmodern philosophical texts.
Misrepresentation #4: We should study the humanities and arts because Silicon Valley wants us to
In support of his arguments for a stronger liberal arts education, Zakaria primarily quotes Silicon Valley celebrities such as Steve Jobs, Mark Zuckerberg and Jeff Bezos. The article suggests that a liberal arts education will increase entrepreneurship and protect American jobs. Are these the main reasons for why we need to reinvigorate liberal arts education? The importance of a general, balanced education makes a lot of sense to me but is increased job security a convincing argument for pursuing a liberal arts degree? Instead of a handful of anecdotal comments by Silicon Valley prophets, I would prefer to see some actual data that supports Zakaria's assertion. But perhaps I am being too STEMy.
There is a lot of room to improve STEM education. We have to make sure that we strive to focus on the essence of STEM which is critical thinking and creativity. We should also make a stronger effort to integrate arts and humanities into STEM education. In the same vein, it would be good to incorporate more STEM education into liberal arts education in order to combat scientific illiteracy. Instead of invoking "Two Cultures" scenarios and creating straw man arguments, educators of all fields need to collaborate in order to improve the overall quality of education.
Monday, March 23, 2015
You're on the Air!
by Carol A. Westbrook
The excitement of a live TV broadcast...a breaking news story...a presidential announcement...an appearance of the Beatles on Ed Sullivan. These words conjure up a time when all America would tune in to the same show, and families would gather round their TV set to watch it together.
This is not how we watch TV anymore. We now watch at different times and on different devices: on mobile phones, computers and tablets, from previously recorded shows on your DVR, or via streaming services such as Netflix and, soon, Apple. Live news can be viewed on the web, via cell phone apps, or as tweets. An increasing number of people are forgoing TV completely to get news and entertainment from other sources, with content that is never "on the air" (see the chart below, from the Nov 24, 2013 Business Insider). Many Americans don't even own a television set!
We take it for granted that we will have instant access to video content--whether digital or analog, on a television, cell phone or iPad. But video itself has its roots in television, a word that literally means "to view at a distance." The story of TV broadcasting is a fascinating one about technology development, entrepreneurship, engineering, and even space exploration. It is an American story, and it is a story worth telling.
At first, America was tuned in to radio. From the early 1920s through the 1940s, people would gather around their radios to listen to music and variety shows, serial dramas, news, and special announcements. Yet they dreamed of seeing moving pictures over the airwaves, as they did in newsreels and movies. A series of technical breakthroughs was needed to make this happen.
The first important breakthrough was the invention, in 1927, of a way to send and view moving images electronically--Farnsworth's "television." There followed a series of patent wars, but at the end of the day, we had television sets which could be used to view moving pictures transmitted over the airwaves. In 1939, RCA televised the opening of the New York World's Fair, including a speech by the first President to appear on TV, Franklin D. Roosevelt. There were few televisions to watch it on, though, until after the end of World War II, when America's demand for commercial television rapidly increased.
This led to the next big advance in television--network broadcasting. The big radio broadcast companies such as RCA (Radio Corporation of America) and CBS (Columbia Broadcasting System) naturally expanded into this medium, but their infrastructure was limited. Though the frequencies used for AM radio transmission, from 540 to 1700 kHz (a kilohertz is a thousand cycles per second), can travel long distances from their transmitting stations, each channel can only carry a limited amount of information; in other words, it has a narrow bandwidth. Much higher frequencies, in the megahertz range (millions of cycles per second), are required for television so they can carry the additional information needed for picture as well as sound. As a result there was a scramble for these higher frequencies, which was mediated by the FCC (Federal Communications Commission), the entity that regulates broadcasting. In 1948 the FCC allocated the higher frequency bands, designating which ones would be reserved for radio and which ones for television, and assigned channel numbers to the TV bands. The VHF television channels were designated 2 - 13. Channel 1 was reallocated to public and emergency communications, which explains why your TV starts with Channel 2! Several higher frequencies, designated as UHF, were reserved for later TV use, including channels 32 to 70. The FCC also froze the number of station licenses at 108 in 1948.
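The bandwidth arithmetic above can be sketched with a quick back-of-envelope calculation. The figures used here (10 kHz AM channel spacing, a 6 MHz analog NTSC television channel) are standard US values rather than numbers taken from this article:

```python
# Illustrative comparison of AM radio vs. analog TV bandwidth.
# Assumed figures: US AM band of roughly 540-1700 kHz with 10 kHz channel
# spacing, and a 6 MHz analog (NTSC) television channel.

AM_BAND_HZ = 1700e3 - 540e3   # width of the US AM broadcast band
AM_CHANNEL_HZ = 10e3          # spacing between AM stations
TV_CHANNEL_HZ = 6e6           # one analog television channel

am_channels = int(AM_BAND_HZ / AM_CHANNEL_HZ)
tv_vs_am_band = TV_CHANNEL_HZ / AM_BAND_HZ

print(f"The AM band holds about {am_channels} stations")               # -> 116
print(f"One TV channel spans {tv_vs_am_band:.1f}x the entire AM band") # -> 5.2x
```

A single television channel is wider than the whole AM radio band, which is why television had to move up into the megahertz range.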
Because the number of broadcast stations was limited, TV was available only if you lived within range of a broadcast network, primarily CBS, NBC or ABC--in other words, if you lived in a large city such as New York, Chicago, Washington, Philadelphia, Boston, Los Angeles, Seattle or Salt Lake City. Outside of these areas, you might have a chance if you lived on a hill, put up a very high antenna, and prayed for a thermal inversion or a charged ionosphere to propagate the weak signal to your television. My husband Rick, an electrical engineer and amateur radio buff, recounts that he watched the coronation of Queen Elizabeth in 1953 from his TV set in a small town in Pennsylvania, thanks to an environmental quirk (sunspots?), but everyone else had to wait for the films to cross the Atlantic and be shown on their local station.
Yet, for those of us who lived in a prime location, there was an ever-expanding number of programs to watch, such as the Texaco Star Theater, the Milton Berle Show, and a variety of news shows. Many of us grew up on Howdy Doody, or shows created locally and televised live. I recall walking home from grade school for lunch as a child in Chicago, spending an hour watching "Lunchtime Little Theater" before returning to school to finish the afternoon's lessons! Many of these early shows have been lost, as they were never recorded--videotape had not yet been invented.
Television broadcasting eventually went nationwide, thanks to microwave transmission, which developed out of WWII radar. This technology was used to relay television broadcasts to local affiliate stations, which could then broadcast them on their regular channels in the local area. Microwaves use point-to-point transmission, from one microwave tower to the next, and microwave towers were constructed to span the continent. The FCC increased the number of television station licenses, and the broadcast companies truly became "networks." Finally, everyone could watch the same shows at the same time.
But TV was still limited geographically--it could not cross the ocean. This problem was not solved until the third important technology was developed: satellite broadcasting. Sputnik, the first space satellite, was launched in 1957. Five years later, on July 23, 1962, the first satellite-based transatlantic broadcast took place, using the Telstar satellite to relay TV signals from the US ground station in Andover, Maine, to receiving stations in Goonhilly Downs, England and Pleumeur-Bodou, France.
It's fun to watch this broadcast, which was introduced by Walter Cronkite and began with a split screen showing the Statue of Liberty on the left and the Eiffel Tower on the right. The satellite transmission was followed by a live broadcast of an ongoing baseball game at Chicago's Wrigley Field between the Philadelphia Phillies and the Chicago Cubs, and also included live remarks from President Kennedy, as well as footage from Cape Canaveral, Florida, Seattle, and Canada. I've included a short clip of the Kennedy broadcast.
If you looked up at the night sky in 1962, you might have seen the Telstar satellite zoom across your backyard sky. It took about 20 minutes to traverse the sky, passing overhead every 2.5 hours. Broadcast signals could be relayed through Telstar to land stations on either side of the Atlantic only during this 20-minute transit, so the tracking satellite dishes had to be fast-moving; they also had to be very large to capture such a weak signal. It is impressive to see the massive size of the dishes in these satellite ground stations, and to imagine how quickly they had to move to sweep the sky. This picture of Goonhilly Downs gives you an idea of their size.
Although Telstar demonstrated that satellite transmission was possible for long-range broadcasting, the equipment and precision needed for tracking a rapidly-moving low-earth satellite was onerous. So the space scientists at NASA and Bell Labs launched the next generation of satellites, named "Syncom," into high earth orbit at just the right distance from the earth so that their orbital speed matched the speed of the earth's rotation. When orbiting directly above the equator, the Syncom satellites appeared to be stationary over a single geographic location. Thus, the geostationary (or geosynchronous) satellite was born.
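That "just right" distance can be derived from Kepler's third law. Here is a rough sketch of the calculation using textbook constants (the numbers come from the physics, not from this article):

```python
import math

# Derive the geostationary orbital radius from Kepler's third law:
#   r^3 = G * M * T^2 / (4 * pi^2)
# where T is one rotation of the Earth (a sidereal day).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
T = 86164            # sidereal day, seconds
R_EARTH = 6.371e6    # mean radius of the Earth, m

r = (G * M_EARTH * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000

print(f"Geostationary altitude: about {altitude_km:,.0f} km")  # roughly 35,800 km
```

At that altitude, about 35,800 km above the equator, a satellite circles the Earth exactly once per day and so appears fixed in the sky.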
Geostationary satellites paved the way for a tremendous expansion in telecommunications, and are still in widespread use. Satellites enabled the rise of cable TV networks such as HBO and CNN in the 1970s, which broadcast without having to go through FCC-regulated television transmitting stations. Instead, their programming was sent via satellite to the cable service, and from there selected programs went by cable to the TVs of paying subscribers. These stations could also be accessed through a satellite TV subscription, such as Galaxy, which broadcast them directly to customers' satellite dishes. Because early satellites could only carry a limited number of cable channels, multiple satellites had to be accessed to provide the purchased programming. Moveable satellite dishes of about four to twelve feet in diameter were positioned in subscribers' yards or on their roofs. Satellite TV further expanded Americans' access to television, reaching rural communities that had limited (or no) cable service and poor antenna reception; it also provided special paid programming, such as sports events watched at bars. This picture shows a 10-foot moveable dish in my yard in Indiana.
Stationary TV dishes--such as DirecTV antennas--were not feasible until satellites were able to carry more programming, so that the dish could stay parked on a single geosynchronous satellite. The technical advance which allowed this was the development of digital video in the late 1990s. Digital video would eventually displace analog--remember when the DVD was introduced, rendering VCRs obsolete in just a few years' time? Each geosynchronous satellite could now carry many more simultaneous channels than before, since a digital channel takes up only a small fraction of the bandwidth of an analog signal. Digital signals also increased the capacity of traditional TV broadcast from ground towers, which eventually transitioned to the HDTV standards, broadcasting at the high-capacity UHF frequencies. The transition to HDTV was completed in June 2009, and the TV networks abandoned analog transmission on the old VHF channels, though many of the newer stations carry the old numbers (2 - 13). TV viewers are surprised to learn that they can watch their favorite channels on the newer HDTV sets using only a simple indoor antenna, and many are giving up their pricey cable services. Digital video signals were also ready for growth in other media, as they could theoretically be transmitted over the internet or by cell phone, and could be stored easily for re-broadcast.
Yet one more step was needed before widespread internet and cellular-based video could occur, allowing us to watch television programs as we do now. This was not a technical advance but an economic one--the sharp drop in the price of computer memory, which happened around 2009. Prior to that, computers had far less memory and storage capacity. Perhaps you remember the agony of trying to watch a YouTube video in its early years? Or of waiting for your browser to load? Now we take it for granted that we can view digitized images, create them, share them, watch pre-recorded programs, and record from multiple sources on our TiVo. There seems to be no limit to the ways that we can enjoy television, truly viewing "pictures at a distance." It is a far cry from the early years of television that many of us still remember, when all of America gathered around small black-and-white screens with poor sound to watch John, Paul, George and Ringo sing "She Loves You." Now those were the days!
Thanks to my husband Rick Rikoski, for his patient and helpful explanations of the technology of television and its early development.
Monday, March 02, 2015
Does Thinking About God Increase Our Willingness to Make Risky Decisions?
by Jalees Rehman
The topic of trust in God is broached in at least two ways in the Friday sermons that I have attended in the United States. Some imams lament the decrease of trust in God in the age of modernity. Instead of trusting that God is looking out for the believers, modern-day Muslims believe that they can control their destiny on their own, without any Divine assistance. These imams see this lack of trust in God as a sign of weakening faith and an overall demise in piety. But in recent years, I have also heard an increasing number of sermons mentioning an important story from the Muslim tradition. In this story, Prophet Muhammad asked a Bedouin why he was leaving his camel untied, thus taking the risk that this valuable animal might wander off and disappear. When the Bedouin responded that he placed his trust in God, who would ensure that the animal stayed put, the Prophet told him that he still needed to first tie up his camel and then place his trust in God. Sermons referring to this story admonish their audience to avoid the trap of fatalism. Trusting God does not obviate the need for rational and responsible action by each individual.
It is much easier for me to identify with the camel-tying camp because I find it rather challenging to take risks based exclusively on trust in an inscrutable and minimally communicative entity. Both believers and non-believers take risks in personal matters such as finance or health. However, in my experience, many believers who make a risky financial decision, or who take a health risk by rejecting a medical treatment backed by strong scientific evidence, tend to invoke the name of God when explaining why they took the risk. There is a sense that God is there to back them up and provide some security if the risky decision leads to a detrimental outcome. It would therefore not be far-fetched to conclude that invoking the name of God may increase risk-taking behavior, especially in people with firm religious beliefs. Nevertheless, psychological research in past decades has suggested the opposite: religiosity and reminders of God seem to be associated with a reduction in risk-taking behavior.
Daniella Kupor and her colleagues at Stanford University have recently published the paper "Anticipating Divine Protection? Reminders of God Can Increase Nonmoral Risk Taking", which takes a new look at the link between invoking the name of God and risky behaviors. The researchers hypothesized that reminders of God may have opposite effects on different types of risk-taking behavior. For example, risk-taking behavior that is deemed ‘immoral', such as taking sexual risks or cheating, may be suppressed by invoking God, whereas non-moral risk-taking, such as making risky investments or sky-diving, might be increased because reminders of God provide a sense of security. According to Kupor and colleagues, it is important to classify the type of risky behavior in relation to how society perceives God's approval or disapproval of the behavior. The researchers conducted a variety of experiments to test this hypothesis using online study participants.
One of the experiments involved running ads on a social media network and then measuring how often users clicked on slightly different wordings of the ad text. The researchers ran the ads 452,051 times on accounts registered to users over the age of 18 residing in the United States. The participants saw ads either for a non-moral risk-taking behavior (skydiving), a moral risk-taking behavior (bribery) or a control behavior (playing video games), and each ad came either in a ‘God version' or a standard version.
Here are the two versions of the skydiving ad (both versions had a picture of a person skydiving):
God knows what you are missing! Find skydiving near you. Click here, feel the thrill!
You don't know what you are missing! Find skydiving near you. Click here, feel the thrill!
The percentage of users who clicked on the skydiving ad in the ‘God version' was twice as high as in the group which saw the standard "You don't know what you are missing" phrasing! One explanation for the significantly higher ad success rate is that "God knows…" might have struck the ad viewers as rather unusual and piqued their curiosity. Instead of reflecting an increased propensity to take risks, perhaps the viewers just wanted to find out what was meant by "God knows…". However, the response to the bribery ad suggests that it isn't mere curiosity. These are the two versions of the bribery ad (both versions had an image of two hands exchanging money):
Learn How to Bribe!
God knows what you are missing! Learn how to bribe with little risk of getting caught!
Learn How to Bribe!
You don't know what you are missing! Learn how to bribe with little risk of getting caught!
In this case, the ‘God version' cut down the percentage of clicks to less than half of the standard version. The researchers concluded that invoking the name of God prevented the users from wanting to find out more about bribery because they consciously or subconsciously associated bribery with being immoral and rejected by God.
These findings are quite remarkable because they suggest that a single mention of the word ‘God' in an ad can have opposite effects on two different types of risk-taking: the non-moral thrill of sky-diving versus the immoral risk of taking bribes.
Clicking on an ad for a potentially risky behavior is not quite the same as actually engaging in that behavior. This is why the researchers also conducted a separate study in which participants were asked to answer a set of questions after viewing certain colors. Participants could choose between Option 1 (a short two-minute survey plus an additional 25 cents as a reward) or Option 2 (a four-minute survey with no additional financial incentive). The participants were also informed that Option 1 was more risky, with the following label:
Eye Hazard: Option 1 not for individuals under 18. The bright colors in this task may damage the retina and cornea in the eyes. In extreme cases it can also cause macular degeneration.
In reality, neither of the two options was damaging to the eyes of the participants but the participants did not know this. This set-up allowed the researchers to assess the likelihood of the participants taking the risk of potentially injurious light exposure to their eyes. To test the impact of God reminders, the researchers assigned the participants to read one of two texts, both of which were adapted from Wikipedia, before deciding on Option 1 or Option 2:
Text used for participants in the control group:
"In 2006, the International Astronomers' Union passed a resolution outlining three conditions for an object to be called a planet. First, the object must orbit the sun; second, the object must be a sphere; and third, it must have cleared the neighborhood around its orbit. Pluto does not meet the third condition, and is thus not a planet."
Text used for the participants in the ‘God reminder' group:
"God is often thought of as a supreme being. Theologians have described God as having many attributes, including omniscience (infinite knowledge), omnipotence (unlimited power), omnipresence (present everywhere), and omnibenevolence (perfect goodness). God has also been conceived as being incorporeal (immaterial), a personal being, and the "greatest conceivable existent."
As hypothesized by the researchers, a significantly higher proportion of participants chose the supposedly harmful Option 1 in the ‘God reminder' group (96%) than in the control group (84%). Reading a single paragraph about God's attributes was apparently sufficient to lull more participants into the risk of exposing their eyes to potential harm. The overall high percentage of participants choosing Option 1 even in the control condition is probably due to the fact that it offered a greater financial reward (it seems a bit odd that participants were willing to sell out their retinas for a quarter, but perhaps they did not take the risk very seriously).
A limitation of the study is that it does not provide any information on whether the impact of mentioning God depended on the religious beliefs of the participants. Do ‘God reminders' affect believers as well as atheists and agnostics, or do they only work in people who clearly identify with a religious tradition? Another limitation is that even though many of the observed differences between the ‘God condition' and the control conditions were statistically significant, the actual differences in numbers were less impressive. For example, in the sky-diving ad experiment, the click-through rate was about 0.03% in the standard ad and 0.06% in the ‘God condition'. This is a doubling, but how meaningful is a doubling when the overall click rates are so low? Even the difference between the two groups who read the Wikipedia texts and chose Option 1 (96% vs. 84%) does not seem very impressive. However, one has to bear in mind that all of these interventions were very subtle – inserting a single mention of God into a social media ad or asking participants to read a single paragraph about God.
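To see why even tiny click rates can yield a statistically significant difference, here is a rough sketch of a two-proportion z-test using the approximate figures quoted above (~452,000 impressions, 0.03% vs. 0.06% click rates). The even split of impressions between the two ad versions is my assumption, not a detail from the paper:

```python
import math

# Illustrative two-proportion z-test on the sky-diving ad click rates.
# Assumptions: ~452,051 impressions split evenly between the two ad versions,
# click rates of 0.03% (standard) and 0.06% ('God version') as quoted in the text.

n = 452_051 // 2          # impressions per ad version (assumed even split)
p_standard = 0.0003       # click-through rate, standard wording
p_god = 0.0006            # click-through rate, 'God version'

# Pooled estimate of the click rate under the null hypothesis of no difference
p_pool = (p_standard + p_god) / 2
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p_god - p_standard) / se

print(f"z = {z:.2f}")  # well beyond the conventional 1.96 cutoff for p < 0.05
```

With samples this large, a doubling from 0.03% to 0.06% produces a z-score of nearly 5, so the statistical significance is real even though the absolute effect is minuscule.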
People who live in societies which are suffused with religion, such as the United States or Pakistan, are continuously reminded of God, whether they glance at their banknotes, turn on the TV or take a pledge of allegiance in school. If the mere mention of God in an ad can already sway some of us to increase our willingness to take risks, what impact does the continuous barrage of God mentions have on our overall risk-taking behavior? Despite its limitations, the work by Kupor and colleagues provides a fascinating new insight into the link between reminders of God and risk-taking behavior. By demonstrating the need to replace blanket statements regarding the relationship between God, religiosity and risk-taking with a more subtle distinction between moral and non-moral risky behaviors, the researchers are paving the way for fascinating future studies on how religion and mentions of God influence human behavior and decision-making.
Kupor DM, Laurin K, Levav J. "Anticipating Divine Protection? Reminders of God Can Increase Nonmoral Risk Taking" Psychological Science (2015) doi: 10.1177/0956797614563108
Monday, February 02, 2015
Literature and Philosophy in the Laboratory Meeting
by Jalees Rehman
Research institutions in the life sciences engage in two types of regular scientific meet-ups: scientific seminars and lab meetings. The structure of scientific seminars is fairly standard. Speakers give Powerpoint presentations (typically 45 to 55 minutes long) which provide the necessary scientific background, summarize their group's recent published scientific work and then (hopefully) present newer, unpublished data. Lab meetings are a rather different affair. The purpose of a lab meeting is to share the scientific work-in-progress with one's peers within a research group and also to update the laboratory heads. Lab meetings are usually less formal than seminars, and all members of a research group are encouraged to critique the presented scientific data and work-in-progress. There is no need to provide much background information because the audience of peers is already well-acquainted with the subject and it is not uncommon to show raw, unprocessed data and images in order to solicit constructive criticism and guidance from lab members and mentors on how to interpret the data. This enables peer review in real-time, so that, hopefully, major errors and flaws can be averted and newer ideas incorporated into the ongoing experiments.
During the past two decades that I have actively participated in biological, psychological and medical research, I have observed very different styles of lab meetings. Some involve brief 5-10 minute updates from each group member; others develop a rotation system in which one lab member has to present the progress of their ongoing work in a seminar-like, polished format with publication-quality images. Some labs have two hour meetings twice a week, other labs meet only every two weeks for an hour. Some groups bring snacks or coffee to lab meetings, others spend a lot of time discussing logistics such as obtaining and sharing biological reagents or establishing timelines for submitting manuscripts and grants. During the first decade of my work as a researcher, I was a trainee and followed the format of whatever group I belonged to. During the past decade, I have been heading my own research group and it has become my responsibility to structure our lab meetings. I do not know which format works best, so I approach lab meetings like our experiments. Developing a good lab meeting structure is a work-in-progress which requires continuous exploration and testing of new approaches. During the current academic year, I decided to try out a new twist: incorporating literature and philosophy into the weekly lab meetings.
My research group studies stem cells and tissue engineering, cellular metabolism in cancer cells and stem cells, and the inflammation of blood vessels. Most of our work focuses on identifying molecular and cellular pathways in cells, and we then test our findings in animal models. Over the years, I have noticed that the increasing complexity of the molecular and cellular signaling pathways and the technologies we employ makes it easy to forget the "big picture" of why we are even conducting the experiments. Determining whether protein A is required for phenomenon X and whether protein B is a necessary co-activator which acts in concert with protein A becomes such a central focus of our work that we may not always remember what it is that compels us to study phenomenon X in the first place. Some of our research has direct medical relevance, but at other times we primarily want to unravel the awe-inspiring complexity of cellular processes. But the question of whether our work is establishing a definitive cause-effect relationship or whether we are uncovering yet another mechanism within an intricate web of causes and effects sometimes falls by the wayside. When asked to explain the purpose or goals of our research, we have become so used to directing a laser pointer onto a slide of a cellular model that it becomes challenging to explain the nature of our work without visual aids.
This fall, I introduced a new component into our weekly lab meetings. After our usual round-up of new experimental data and progress, I suggested that each week one lab member should give a brief 15-minute overview about a book they had recently finished or were still reading. The overview was meant to be a "teaser" without spoilers, explaining why they had started reading the book, what they liked about it, and whether they would recommend it to others. One major condition was to speak about the book without any PowerPoint slides! But there weren't any major restrictions when it came to the book; it could be fiction or non-fiction and published in any language of the world (but ideally also available in an English translation). If lab members were interested and wanted to talk more about the book, then we would continue to discuss it, otherwise we would disband and return to our usual work. If nobody in my lab wanted to talk about a book then I would give an impromptu mini-talk (without PowerPoint) about a topic relating to the philosophy or culture of science. I use the term "culture of science" broadly to encompass topics such as the peer review process and post-publication peer review, the question of reproducibility of scientific findings, retractions of scientific papers, science communication and science policy – topics which have not been traditionally considered philosophy of science issues but still relate to the process of scientific discovery and the dissemination of scientific findings.
One member of our group introduced us to "For Whom the Bell Tolls" by Ernest Hemingway. He had also recently lived in Spain as a postdoctoral research fellow and shared some of his own personal experiences about how his Spanish friends and colleagues talked about the Spanish Civil War. At another lab meeting, we heard about "Sycamore Row" by John Grisham and the ensuing discussion revolved around race relations in Mississippi. I spoke about "A Tale for the Time Being" by Ruth Ozeki and the difficulties that the book's protagonist faced as an outsider when her family returned to Japan after living in Silicon Valley. I think that the book which got nearly everyone in the group talking was "Far From the Tree: Parents, Children and the Search for Identity" by Andrew Solomon. The book describes how families grapple with profound physical or cognitive differences between parents and children. The PhD student who discussed the book focused on the "Deafness" chapter of this nearly 1000-page tome but she also placed it in the broader context of parenting, love and the stigma of disability. We stayed in the conference room long after the planned 15 minutes, talking about being "disabled" or being "differently abled" and the challenges that parents and children face.
On the weeks where nobody had a book they wanted to present, we used the time to touch on the cultural and philosophical aspects of science such as Thomas Kuhn's concept of paradigm shifts in "The Structure of Scientific Revolutions", Karl Popper's principles of falsifiability of scientific statements, the challenge of reproducibility of scientific results in stem cell biology and cancer research, or the emergence of Pubpeer as a post-publication peer review website. Some of the lab members had heard of Thomas Kuhn's or Karl Popper's ideas before, but by discussing them in a lab meeting, we were able to illustrate these ideas using our own work. A lot of 20th century philosophy of science arose from ideas rooted in physics. When undergraduate or graduate students take courses on philosophy of science, it isn't always easy for them to apply these abstract principles to their own lab work, especially if they pursue a research career in the life sciences. Thomas Kuhn saw Newtonian and Einsteinian theories as distinct paradigms, but what constitutes a paradigm shift in stem cell biology? Is the ability to generate induced pluripotent stem cells from mature adult cells a paradigm shift or "just" a technological advance?
It is difficult for me to know whether the members of my research group enjoy or benefit from these humanities blurbs at the end of our lab meetings. Perhaps they are just tolerating them as eccentricities of the management and maybe they will tire of them. I personally find these sessions valuable because I believe they help ground us in reality. They remind us that it is important to think and read outside of the box. As scientists, we all read numerous scientific articles every week just to stay up-to-date in our area(s) of expertise, but that does not exempt us from also thinking and reading about important issues facing society and the world we live in. I do not know whether discussing literature and philosophy makes us better scientists but I hope that it makes us better people.
Monday, January 05, 2015
Typical Dreams: A Comparison of Dreams Across Cultures
by Jalees Rehman
But I, being poor, have only my dreams;
I have spread my dreams under your feet;
Tread softly because you tread on my dreams.
William Butler Yeats – from "Aedh Wishes for the Cloths of Heaven"
Have you ever wondered how the content of your dreams differs from that of your friends? How about the dreams of people raised in different countries and cultures? It is not always easy to compare dreams of distinct individuals because the content of dreams depends on our personal experiences. This is why dream researchers have developed standardized dream questionnaires in which common thematic elements are grouped together. These questionnaires can be translated into various languages and used to survey and scientifically analyze the content of dreams. Open-ended questions about dreams might elicit free-form, subjective answers which are difficult to categorize and analyze. Therefore, standardized dream questionnaires ask subjects "Have you ever dreamed of . . ." and provide them with a list of defined dream themes such as being chased, flying or falling.
Dream researchers can also modify the questionnaires to include additional questions about the frequency or intensity of each dream theme and specify the time frame that the study subjects should take into account. For example, instead of asking "Have you ever dreamed of…", one can prompt subjects to focus on the dreams of the last month or the first memory of ever dreaming about a certain theme. Any such subjective assessment of one's dreams with a questionnaire has its pitfalls. We routinely forget most of our dreams and we tend to remember the dreams that are either the most vivid or frequent, as well as the dreams which we may have discussed with friends or written down in a journal. The answers to dream questionnaires may therefore be a reflection of our dream memory and not necessarily the actual frequency or prevalence of certain dream themes. Furthermore, standardized dream questionnaires are ideal for research purposes but may not capture the complex and subjective nature of dreams. Despite these pitfalls, research studies using dream questionnaires provide a fascinating insight into the dream world of large groups of people and identify commonalities or differences in the thematic content of dreams across cultures.
The researcher Calvin Kai-Ching Yu from Hong Kong Shue Yan University used a Chinese translation of a standardized dream questionnaire and surveyed 384 students at the University of Hong Kong (mostly psychology students; 69% female, 31% male; mean age 21). Here are the results:
Ten most prevalent dream themes in a sample of Chinese students according to Yu (2008):
- Schools, teachers, studying (95%)
- Being chased or pursued (92%)
- Falling (87%)
- Arriving too late, e.g., missing a train (81%)
- Failing an examination (79%)
- A person now alive as dead (75%)
- Trying again and again to do something (74%)
- Flying or soaring through the air (74%)
- Being frozen with fright (71%)
- Sexual experiences (70%)
The most prevalent theme was "Schools, teachers, studying". This means that 95% of the study subjects recalled having had dreams related to studying, school or teachers at some point in their lives, whereas only 70% of the subjects recalled dreams about sexual experiences. The subjects were also asked to rank the frequency of the dreams on a 5-point scale (0 = never, 1 = seldom, 2 = sometimes, 3 = frequently, 4 = very frequently). For the most part, the most prevalent dreams were also the most frequent ones. Not only did nearly every subject recall dreams about schools, teachers or studying, this theme also received an average frequency score of 2.3, indicating that for most individuals this was a recurrent dream theme – not a big surprise in university students. On the other hand, even though the majority of subjects (57%) recalled dreams of "being smothered, unable to breathe", its average frequency rating was low (0.9), indicating that this was a rare (but probably rather memorable) dream.
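The distinction between prevalence (has a subject ever had the dream?) and frequency (how often?) can be made concrete with a small sketch. The ratings below are hypothetical examples on the study's 0-4 scale, not the actual survey data:

```python
# Illustrative sketch of the prevalence vs. frequency distinction.
# Each list holds hypothetical 0-4 ratings (0 = never ... 4 = very frequently),
# one rating per subject, for a single dream theme.

def prevalence(ratings):
    """Fraction of subjects who recall the theme at all (any rating > 0)."""
    return sum(1 for r in ratings if r > 0) / len(ratings)

def mean_frequency(ratings):
    """Average 0-4 frequency rating across all subjects."""
    return sum(ratings) / len(ratings)

# Hypothetical theme most subjects recall but rarely dream about
# (high prevalence, low mean frequency, like "being smothered")
smothered = [1, 0, 1, 1, 0, 1, 1, 0, 1, 2]

# Hypothetical theme nearly everyone recalls and dreams about often
# (high prevalence AND high mean frequency, like "schools, teachers, studying")
school = [3, 2, 2, 3, 4, 2, 1, 3, 2, 2]

print(prevalence(smothered), mean_frequency(smothered))
print(prevalence(school), mean_frequency(school))
```

The two measures can diverge: a theme can be recalled by most subjects yet carry a low average rating because each of those subjects experienced it only rarely.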
How do the dreams of the Chinese students compare to their counterparts in other countries?
Michael Schredl and his colleagues used a similar questionnaire to study the dreams of German university students (nearly all psychology students; 85% female, 15% male; mean age 24) with the following results:
Ten most prevalent dream themes in a sample of German students according to Schredl and colleagues (2004):
- Schools, teachers, studying (89%)
- Being chased or pursued (89%)
- Sexual experiences (87%)
- Falling (74%)
- Arriving too late, e.g., missing a train (69%)
- A person now alive as dead (68%)
- Flying or soaring through the air (64%)
- Failing an examination (61%)
- Being on the verge of falling (57%)
- Being frozen with fright (56%)
There is a remarkable overlap in the top ten list of dream themes among Chinese and German students. Dreams about school and about being chased are the two most prevalent themes for Chinese and German students. One key difference is that dreams about sexual experiences are recalled more commonly among German students.
Tore Nielsen and his colleagues administered a dream questionnaire to students at three Canadian universities, thus obtaining data on an even larger study population (over 1,000 students).
Ten most prevalent dream themes in a sample of Canadian students according to Nielsen and colleagues (2003):
- Being chased or pursued (82%)
- Sexual experiences (77%)
- Falling (74%)
- Schools, teachers, studying (67%)
- Arriving too late, e.g., missing a train (60%)
- Being on the verge of falling (58%)
- Trying again and again to do something (54%)
- A person now alive as dead (54%)
- Flying or soaring through the air (48%)
- Vividly sensing . . . a presence in the room (48%)
It is interesting that dreams about school or studying were the most common theme among Chinese and German students but do not even make the top-three list among Canadian students. This finding is perhaps also mirrored in the result that dreams about failing exams are comparatively common in Chinese and German students, but are not found in the top-ten list among Canadian students.
At first glance, the dream content of German students seems to be somehow a hybrid between those of Chinese and Canadian students. Chinese and German students share a higher prevalence of academia-related dreams, whereas sexual dreams are among the most prevalent dreams for both Canadians and Germans. However, I did notice an interesting anomaly. Chinese and Canadian students dream about "Trying again and again to do something" – a theme which is quite rare among German students. I have a simple explanation for this (possibly influenced by the fact that I am German): Germans get it right the first time which is why they do not dream about repeatedly attempting the same task.
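The overlap between the three top-ten lists can be checked mechanically by treating each list as a set of theme labels (abbreviated here from the lists above) and intersecting them:

```python
# Sketch: compare the three published top-ten lists as sets of theme labels.
# The labels are shorthand for the questionnaire themes quoted in the text.
chinese = {"school", "chased", "falling", "arriving late", "failing exam",
           "person dead", "trying again", "flying", "frozen with fright", "sex"}
german = {"school", "chased", "sex", "falling", "arriving late",
          "person dead", "flying", "failing exam", "verge of falling",
          "frozen with fright"}
canadian = {"chased", "sex", "falling", "school", "arriving late",
            "verge of falling", "trying again", "person dead", "flying",
            "sensing a presence"}

# Themes appearing in all three top-ten lists
shared = chinese & german & canadian

# Themes shared by Chinese and Canadian students but absent from the
# German top ten (the "trying again and again" anomaly)
in_chinese_and_canadian_not_german = (chinese & canadian) - german

print(sorted(shared))
print(sorted(in_chinese_and_canadian_not_german))
```

Seven of the ten themes appear in all three lists, which supports the impression of a remarkable cross-cultural overlap, while the set difference isolates exactly the "trying again and again" theme discussed above.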
The strength of these three studies is that they used similar techniques to assess dream content and evaluated study subjects with very comparable backgrounds: Psychology students in their early twenties. This approach provides us with the unique opportunity to directly compare and contrast the dreams of people who were raised on three continents and immersed in distinct cultures and languages. However, this approach also comes with a major limitation. We cannot easily extrapolate these results to the general population. Dreams about studying and school may be common among students but they are probably rare among subjects who are currently holding a full-time job or are retired. University students are an easily accessible study population but they are not necessarily representative of the society they grow up in. Future studies which want to establish a more comprehensive cross-cultural comparison of dream content should probably attempt to enroll study subjects of varying ages, professions, educational and socio-economic backgrounds.
Despite this limitation, the currently available data on dream content comparisons across countries suggest one important message: People all over the world have similar dreams.
Yu, Calvin Kai-Ching. "Typical dreams experienced by Chinese people." Dreaming 18.1 (2008): 1-10.
Nielsen, Tore A., et al. "The Typical Dreams of Canadian University Students." Dreaming 13.4 (2003): 211-235.
Schredl, Michael, et al. "Typical dreams: stability and gender differences." The Journal of Psychology 138.6 (2004): 485-494.
Monday, December 08, 2014
Heat not Wet: Climate Change Effects on Human Migration in Rural Pakistan
by Jalees Rehman
In the summer of 2010, over 20 million people were affected by the summer floods in Pakistan. Millions lost access to shelter and clean water, and became dependent on aid in the form of food, drinking water, tents, clothes and medical supplies in order to survive this humanitarian disaster. It is estimated that at least $1.5 billion to $2 billion were provided as aid by governments, NGOs, charity organizations and private individuals from all around the world, and helped contain the devastating impact on the people of Pakistan. These floods crippled a flailing country that continues to grapple with problems of widespread corruption, illiteracy and poverty.
The 2011 World Disaster Report (PDF) states:
In the summer of 2010, giant floods devastated parts of Pakistan, affecting more than 20 million people. The flooding started on 22 July in the province of Balochistan, next reaching Khyber Pakhtunkhwa and then flowing down to Punjab, the Pakistan 'breadbasket'. The floods eventually reached Sindh, where planned evacuations by the government of Pakistan saved millions of people.
However, severe damage to habitat and infrastructure could not be avoided and, by 14 August, the World Bank estimated that crops worth US$ 1 billion had been destroyed, threatening to halve the country's growth (Batty and Shah, 2010). The floods submerged some 7 million hectares (17 million acres) of Pakistan's most fertile croplands – in a country where farming is key to the economy. The waters also killed more than 200,000 head of livestock and swept away large quantities of stored commodities that usually fed millions of people throughout the year.
The 2010 floods were among the worst that Pakistan has experienced in recent decades. Sadly, the country is prone to recurrent flooding which means that in any given year, Pakistani farmers hope and pray that the floods will not be as bad as those in 2010. It would be natural to assume that recurring flood disasters force Pakistani farmers to give up farming and migrate to the cities in order to make ends meet. But a recent study published in the journal Nature Climate Change by Valerie Mueller at the International Food Policy Research Institute has identified the actual driver of migration among rural Pakistanis: Heat.
Mueller and colleagues analyzed the migration and weather patterns in rural Pakistan from 1991-2012 and found that flooding had a modest to insignificant effect on migration whereas extreme heat was clearly associated with migration. The researchers found that bouts of heat wiped out a third of the income derived through farming! In Pakistan, the average monthly rural household income is 20,000 rupees (roughly $200), which is barely enough to feed a typical household consisting of 6 or 7 people. It is no wonder that when heat stress reduces crop yields and this low income drops by one third, farming becomes untenable and rural Pakistanis are forced to migrate and find alternate means to feed their family. Mueller and colleagues also identified the group that was most likely to migrate: rural farmers who did not own the land they were farming. Not owning the land makes them more mobile, but compared to the land-owners, these farmers are far more vulnerable in terms of economic stability and food security when a heat wave hits. Migration may be the last resort for their continued survival.
It is predicted that the frequency and intensity of heat waves will increase during the next century. Research studies have determined that global warming is the major cause of heat waves, and an important recent study by Diego Miralles and colleagues published in Nature Geoscience has identified a key mechanism which leads to the formation of "mega heat waves". Dry soil and higher temperatures work as part of a vicious cycle, reinforcing each other, and the researchers found that drying soil is a critical component of this cycle. During daytime, high temperatures dry out the soil. The dry soil traps the heat, thus creating layers of high temperatures even at night, when there is no sunlight. On the subsequent day, the new heat generated by sunlight is added on to the heat trapped by the dry soil, which creates an escalating feedback loop with progressively drying soil that becomes devastatingly effective at trapping heat. The result is a massive heat wave which can wipe out crops, lead to water scarcity and cause thousands of deaths.
The study by Mueller and colleagues provides important information on how climate change is having real-world effects on humans today. Climate change is a global problem, affecting humans all around the world, but its most severe and immediate impact will likely be borne by people in the developing world who are most vulnerable in terms of their food security. There is an obvious need to limit carbon emissions and thus curtail the progression of climate change. This necessary long-term approach to climate change has to be complemented by more immediate measures that help people cope with the detrimental effects of climate change by, for example, exploring ways to grow crops that are more heat resilient, and ensuring the food security of those who are acutely threatened by climate change.
As Mueller and colleagues point out, the floods in Pakistan have attracted significant international relief efforts whereas increasing temperatures and heat stress are not commonly perceived as existential threats, even though they can be just as devastating. Gradual increases in temperatures and heat waves are more insidious and less likely to be perceived as threats, whereas powerful images of floods destroying homes and personal narratives of flood survivors clearly identify floods as humanitarian disasters. The impacts of heat stress and climate change, on the other hand, are not so easily conveyed. Climate change is a complex scientific issue, relying on mathematical models and intrinsic uncertainties associated with these models. As climate change progresses, weather patterns will become even more erratic, thus making it even more challenging to offer specific predictions.
Climate change research and the translation of this research into pragmatic precautionary measures also face an uphill battle because of the powerful influence of the climate change denial lobby. Climate change deniers take advantage of the scientific complexity of climate change, and attempt to paralyze humankind in terms of climate change action by exaggerating the scientific uncertainties. In fact, there is a clear scientific consensus among climate scientists that human-caused climate change is very real and is already destroying lives and ecosystems around the world.
Helping farmers adapt to climate change will require more than financial aid. It is important to communicate the impact of climate change and offer specific advice for how farmers may have to change their traditional agricultural practices. A recent commentary in Nature by Tom Macmillan and Tim Benton highlighted the importance of engaging farmers in agricultural and climate change research. Macmillan and Benton pointed out that at least 10 million farmers have taken part in farmer field schools across Asia, Africa and Latin America since 1989 which have helped them gain knowledge and accordingly adapt their practices.
Pakistan will hopefully soon engage in a much-needed land reform in order to solve the social injustice and food insecurity that plagues the country. Large landholders, just 5% of the total, own 64% of Pakistan's farmland, whereas small farmers, who make up 65%, own only 15% of the land. About 67% of rural households own no land. Women own only 3% of the land despite sharing in 70% of agricultural activities! The land reform will be just a first step in rectifying social injustice in Pakistan. Involving Pakistani farmers – men and women alike – in research and education about innovative agricultural practices in the face of climate change will help ensure their long-term survival.
Mueller, Valerie, Clark Gray, and Katrina Kosec. "Heat stress increases long-term human migration in rural Pakistan." Nature Climate Change 4, no. 3 (2014): 182-185.
Monday, December 01, 2014
Do I Look Fat in These Genes?
by Carol A. Westbrook
Are you pleasantly plump? Rubinesque? Chubby? Weight-challenged? Or, to state it bluntly, just plain fat? Have you spent a lifetime being nagged to stop eating, start exercising and lose some weight? Have you been accused of lack of willpower, laziness, watching too much TV, overeating and compulsive behavior? If you are among the 55% of Americans who are overweight, take heart. You now have an excuse: blame it on your genes.
It seems obvious that obesity runs in families; fat people have fat children, who produce fat grandchildren. Scientific studies as early as the 1980s suggested that there was more to it than merely being overfed by fat, over-eating parents; the work suggested that fat families may be that way because they have genes in common. Dr. Albert J. Stunkard, a pioneering researcher at the University of Pennsylvania who died this year, did much of this early work. Stunkard showed that the weight of adopted children was closer to that of their biological parents than of their adoptive parents. Another of his studies investigated twins, and found that identical twins--those that had the same genes--had very similar levels of obesity, whereas the similarity between non-identical twins was no greater than that between their non-twin siblings. It was pretty clear to scientists by this time that there was likely to be one or more genes that determined your level of obesity.
In spite of the compelling evidence, it has been difficult to identify the actual genes that cause us to be overweight. This is due partly to the fact that lifestyle and environment are such strong influences on our weight that they can obscure the genetic effects, making it difficult to dissociate genetic from environmental effects. But the main reason it has been difficult to find the fat gene is that there is probably not just one gene for obesity, in contrast to single-gene diseases such as familial ALS (Lou Gehrig's disease). There seem to be many forms of obesity, determined by an as yet unknown number of genes, so finding an individual gene is like looking for a needle in a haystack.
Earlier this year, a group of researchers succeeded in identifying one of these genes by focusing on a single form of obesity and studying only a small number of families. Their studies, published in the New England Journal of Medicine, reported a gene mutation which was shared by all of the obese members of the families. The mutated gene, DYRK1B, seems to be involved in initiating the growth of fat cells, and in moderating the effects of insulin. The people in these families who carried the gene mutation all had abdominal obesity beginning in childhood, severe hypertension, type 2 diabetes, and high blood triglyceride levels. They had a type of obesity known as "metabolic syndrome."
Metabolic syndrome is recognized by doctors as a combination of symptoms, including large waist size, high triglycerides (lipids), low HDL ("good") cholesterol, high blood pressure, and high blood sugar. In order to meet the diagnosis of metabolic syndrome, you need to have any 3 of these 5 criteria. A person who has metabolic syndrome is five times as likely to develop diabetes, and twice as likely to develop heart disease, as someone who doesn't have it.
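The "any 3 of 5" rule is simple enough to sketch in a few lines. The boolean flags below are illustrative stand-ins for the clinical criteria, not a diagnostic tool:

```python
# Sketch of the "any 3 of 5" diagnostic rule described above.
# Each argument is True if that criterion is present in the patient.
def meets_metabolic_syndrome(waist_large, triglycerides_high,
                             hdl_low, blood_pressure_high, blood_sugar_high):
    """Return True if at least 3 of the 5 criteria are present."""
    criteria = [waist_large, triglycerides_high, hdl_low,
                blood_pressure_high, blood_sugar_high]
    # In Python, True counts as 1, so sum() tallies the positive criteria.
    return sum(criteria) >= 3

# Hypothetical patient with large waist, high triglycerides and
# high blood pressure: 3 of 5 criteria, so the rule is met.
print(meets_metabolic_syndrome(True, True, False, True, False))
```

Because any 3 of the 5 criteria suffice, two patients can both carry the diagnosis while sharing only a single symptom.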
Metabolic syndrome is not a rare condition; in fact, it has been estimated that as many as 47 million Americans have it, though usually not as severe as in the families from the study above. Many more Americans may actually carry a mutation in the DYRK1B gene, or in a related gene, but have not developed the symptoms... yet.
What is perplexing is why obesity continues to be on the increase in the US, despite the fact that our genetics couldn't have changed that much over the last decade or two. Clearly there is more to being fat than carrying a fat gene. As we are all aware, you have to eat to become overweight. The fault is not in our stars, it is in our diets. And our diets have changed quite a bit over the last few decades.
What's wrong with our diets? That, of course, is one of the most important health questions of today. Our diets have changed a lot over the last few decades, starting with the movement in the mid-1970s to cut down the fat that we eat, mistakenly thinking that fat was the cause of high cholesterol and lipid problems. This led to the widespread substitution of calories from fat with calories from carbohydrates, particularly high fructose corn syrup and related additives. Nowhere have the substitutions been more dramatic than in fast foods and prepared foods. A high carbohydrate diet is a disaster for someone who is at risk of metabolic syndrome; it is the quickest way to get fat.
As the number of fat people increases, we are starting to see increases in diabetes, hypertension, and knee replacements. Obesity is linked to 1 in 5 deaths in our country. Finding more of the genes that cause people to be overweight will help to identify those at risk, so they can take steps to prevent it. And better yet, these gene mutations may provide targets for the creation of drugs to reverse the condition. The pharmaceutical industry is very interested in finding these genes: imagine if you could produce a pill that 50% of the entire population would have to take every day, for the rest of their lives, to prevent them from being fat!
Sadly, we do not have this pill to reverse metabolic syndrome, at least not at the present time. So, like many other diseases that are sensitive to the foods we eat -- hypertension, diabetes, gluten-sensitivity, and so on--the answer is still in controlling the diet.
But take heart. Now you can relax, forget the accusations and stop blaming yourself. Enjoy those Christmas cookies and holiday treats today. Your diet starts on January 1.
Monday, November 24, 2014
The continuing relevance of Immanuel Kant
by Emrys Westacott
Immanuel Kant (1724-1804) is widely touted as one of the greatest thinkers in the history of Western civilization. Yet few people other than academic philosophers read his works, and I imagine that only a minority of them have read in its entirety the Critique of Pure Reason, generally considered his magnum opus. Kantian scholarship flourishes, with specialized journals and Kant societies in several countries, but it is largely written by and for specialists interested in exploring subtleties and complexities in Kant's texts, unnoticed influences on his thought, and so on. Some of Kant's writing is notoriously difficult to penetrate, which is why we need scholars to interpret his texts for us, and also why, in two hundred years, he has never made it onto the New York Times best seller list. And some of the ideas that he considered central to his metaphysics–for instance, his views about space, time, substance, and causality–are widely held to have been superseded by modern physics.
So what is so great about Kant? How is his philosophy still relevant today? What makes his texts worth studying and his ideas worth pondering? These are questions that could occasion a big book. What follows is my brief two penn'orth on Kant's contribution to modern ways of thinking. I am not suggesting that Kant was the first or the only thinker to put forward the ideas mentioned here, or that they exhaust what is valuable in his philosophy. My purpose is just to identify some of the central strains in his thought that remain remarkably pertinent to contemporary debates.
1. Kant recognized that in the wake of the scientific revolution, what we call "knowledge" needed to be reconceived. He held that we should restrict the concept of knowledge to scientific knowledge–that is, to claims that are, or could be, justified by scientific means.
2. He identified the hallmark of scientific knowledge as what can be verified by empirical observation (plus some philosophical claims about the framework within which such observations occur). Where this isn't possible, we don't have knowledge; we have, instead, either pseudo-science (e.g. astrology), or unrestrained speculation (e.g. religion).
3. He understood that both everyday life and scientific knowledge rest on, and are made orderly by, some very basic assumptions that aren't self-evident and can't be entirely justified by empirical observations. For instance, we assume that the physical world will conform to mathematical principles. Kant argues in the Critique of Pure Reason that our belief that every event has a cause is such an assumption; perhaps, also, our belief that effects follow necessarily from their causes; but many today reject his classification of such claims as "synthetic a priori." Regardless of whether one agrees with Kant's account of what these assumptions are, his justification of them is thoroughly modern since it is essentially pragmatic. They make science possible. More generally, they make the world knowable. Kant in fact argues that in their absence our experience from one moment to the next would not be the coherent and intelligible stream that it is.
4. Kant claims that nothing in our experience is just "given" to us in a pure form unadulterated by the way we think. Our cognitive apparatus is always both receptive and active. Variations on this theme have become commonplace in modern philosophy, psychology, anthropology, and linguistics. What we call "facts" or "data" are theory-laden or concept-laden. Hegel, Nietzsche, Sellars, and Kuhn are among those who have developed this insight. Some, like Hilary Putnam, take it further, arguing that so-called facts are value-laden since how we apply concepts like causality reflects our interests. As William James famously remarked, "the trail of the human serpent is over everything."
5. Kant never lost sight of the fact that while modern science is one of humanity's most impressive achievements, we are not just knowers: we are also agents who make choices and hold ourselves responsible for our actions. In addition, we have a peculiar capacity to be affected by beauty, and a strange inextinguishable sense of wonder about the world we find ourselves in. Feelings of awe, an appreciation of beauty, and an ability to make moral choices on the basis of rational deliberation do not constitute knowledge, but this doesn't mean they lack value. On the contrary. But a danger carried by the scientific understanding of the world is that its power and elegance may lead us to undervalue those things that don't count as science.
6. According to Kant, the very nature of science means that it is limited to certain kinds of understanding and explanation, and these will never satisfy us completely. For as he says in the first sentence of the Critique, human reason has this peculiarity: it is driven by its very nature to pose questions that it is incapable of answering. Now hardheaded types may dismiss out of hand as not worth asking any questions that don't admit of scientific answers. This, one imagines, is Mr. Spock's position, and possibly such an attitude will one day take over completely. But I suspect Kant is right on this matter for two reasons.
One reason is that in our search for explanations we find it hard to be content with brute contingency. If we ask, "Why did this happen?" we will not be satisfied with the answer, "It just did." If we ask, "Why are things this way?" we expect more than, "That's just the way things are." Yet however deep science penetrates into the origin of things or the nature of things, it never seems to eliminate that element of contingency, and it is hard to see how it ever can. Leibniz's question, "Why is there something rather than nothing?" will always be waiting.
A second reason, which I suspect is related to the first, is that some questions we pose probably can't be answered, yet we ask them anyway because they express an abiding sense of wonder, mystery, concern, gratitude or despair over the conditions of our existence. Why am I this particular subject of experience? Why am I alive now and not at some other time? What should I do with my life? Why do I love this person, and why is our love so important? Such thoughts may take the form of questions, but they are really expressions of amazement and perplexity. The feelings expressed fuel religion, poetry, music, and the other arts. They also often accompany experiences we think of as especially valuable or profound: for instance, being present at a birth or a death, feeling great love, witnessing heroism, or encountering overwhelming natural beauty.
Kant introduced the concept of the "thing in itself" to refer to reality as it is independent of our experience of it and unstructured by our cognitive constitution. The concept was harshly criticized in his own time and has been lambasted by generations of critics since. A standard objection to the notion is that Kant has no business positing it given his insistence that we can only know what lies within the limits of possible experience. But a more sympathetic reading is to see the concept of the "thing in itself" as a sort of placeholder in Kant's system; it both marks the limits of what we can know and expresses a sense of mystery that cannot be dissolved, the sense of mystery that underlies our unanswerable questions. Through both of these functions it serves to keep us humble.
7. Kant reflected more deeply than anyone before him on the growing conflict between the emerging scientific picture of the world (including its account of human nature) and the conventional, non-scientific notions that inform the way we think about the world and ourselves in everyday life. Some of these conflicts were resolved fairly easily. Copernicus challenged the common view that the sun moved while the earth was stationary. Accepting this new idea did mean displacing the earth from the center of the universe–a significant shift–but after some initial resistance the new model came to be generally accepted. The old way of thinking was seen to be understandable, given how things appear, but false.
Some conflicts, however, were more troubling. Most people in Kant's Europe were Christians. Christianity posits a God who created the world and dispenses cosmic justice. Yet this hypothesis has no place within science since it cannot be tested by scientific means. Kant, who had no truck with organized religion but seems to have had some sort of religious belief, settled this problem by restricting the scope of the contestants. Science tells us how things are in the spatio-temporal world we inhabit and experience, and what it tells us counts as knowledge. Religion speculates about what lies beyond this world. Such speculations produce articles of faith that may help people live better lives, and in this way they may be valuable. But they don't constitute knowledge. In Kant's famous formulation, he "found it necessary to deny knowledge in order to make room for faith." This solution to the conflict between science and religion is pretty much the one that has become generally accepted in the West, particularly among intellectuals. Religion is granted its own turf just so long as it doesn't encroach onto science's turf by claiming to offer knowledge. Inevitably, though, as science's stock has risen continuously since Kant's time, religion's stock has fallen, at least in the most modernized societies and among the intelligentsia. In these quarters God continues to die, urged on by Richard Dawkins and co.
But the conflict that really exercised Kant was between determinism, which was very much part of the new scientific picture, and our belief that we have free will. This troubled him more because he was much more concerned with morality than with religion. For him, religion is virtually a handmaiden to morality: faith can help people be good. But our capacity for acting morally–doing something simply because we think it is the right thing to do, regardless of our own interests–is what ultimately gives our lives dignity and value. We only have this capacity, however, if we have free will. And determinism, which sees every event, including our choices and actions, as the predictable effect of prior causes or states of affairs, implies that free will is an illusion, just as the apparent motion of the sun turned out to be an illusion.
What to do? Kant does not try to find a place for free will within the scientific picture. He also rejects the approach favoured by Hume which involves redefining free will in a way that makes it compatible with determinism. Compatibilism in one form or another continues to be popular and is defended by eminent thinkers like Daniel Dennett, but Kant rejects it as a "wretched subterfuge." His way of dealing with the problem, as I see it, is to say that it can't be resolved. The opposition between the scientific picture and our self-conception as beings capable of radical autonomy simply won't go away.
Two centuries later the problem of free will remains one of those issues where the conflict between science and conventional everyday thinking is especially sharp. Much worthwhile work has been done on the problem, yet Kant's account of the dilemma seems to describe the present situation pretty well. On the one hand, we can't find a place for free will within the scientific description of a human being. On the other hand, we can't jettison the notion that we are ultimately responsible for some of our decisions. We assume this about ourselves and others every day in all our ordinary activities. Even the most hard-boiled determinists tend to assume, when they engage in debate, that they and their opponents have some degree of choice regarding what they believe, and that this choice can be influenced by reasons that don't operate in the same manner as physical causes. Kant pretty much tells us that we just have to live with this tension since we can neither prove we have free will nor live as if we don't.
Naturally, there are parts of Kant's philosophy that no longer seem especially relevant, and Kant, like everyone else, had his foibles, failings, and blind spots. But there is a tremendously impressive depth to his reflections on the problems that confront humanity with the onset of modernity. And there is also an extraordinary breadth to his thinking, for as a systematic philosopher he illuminates the connections between metaphysics, science, morality, art, religion, and everyday experience. Ultimately, what he offers goes well beyond the construction of arguments or the analysis of concepts: what he offers, to his own time and to ours, is a penetrating account of the human condition in the age of science.
Now that quantum indeterminacy is included in the scientific picture, some philosophers have sought to defend the idea of free will as something that quantum indeterminacy makes possible. But this position does not enjoy wide support.
Monday, October 13, 2014
Moral Time: Does Our Internal Clock Influence Moral Judgments?
by Jalees Rehman
Does morality depend on the time of the day? The study "The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior" published in October of 2013 by Maryam Kouchaki and Isaac Smith suggested that people are more honest in the mornings, and that their ability to resist the temptation of lying and cheating wears off as the day progresses. In a series of experiments, Kouchaki and Smith found that moral awareness and self-control in their study subjects decreased in the late afternoon or early evening. The researchers also assessed the degree of "moral disengagement", i.e. the willingness to lie or cheat without feeling much personal remorse or responsibility, by asking the study subjects to respond to questions such as "Considering the ways people grossly misrepresent themselves, it's hardly a sin to inflate your own credentials a bit" or "People shouldn't be held accountable for doing questionable things when they were just doing what an authority figure told them to do" on a scale from 1 (strongly disagree) to 7 (strongly agree). Interestingly, the subjects who strongly disagreed with such statements were the most susceptible to the morning morality effect. They were quite honest in the mornings but significantly more likely to cheat in the afternoons. On the other hand, moral disengagers, i.e. subjects who did not think that inflating credentials or following questionable orders was a big deal, were just as likely to cheat in the morning as they were in the afternoons.
Understandably, the study caused quite a bit of ruckus and became one of the most widely discussed psychology research studies in 2013, covered widely by blogs and newspapers such as the Guardian "Keep the mornings honest, the afternoons for lying and cheating" or the German Süddeutsche Zeitung "Lügen erst nach 17 Uhr" (Lying starts at 5 pm). And the findings of the study also raised important questions: Should organizations and businesses take the time of day into account when assigning tasks to employees which require high levels of moral awareness? How can one prevent the "moral exhaustion" in the late afternoon and the concomitant rise in the willingness to cheat? Should the time of the day be factored into punishments for unethical behavior?
One question not addressed by Kouchaki and Smith was whether the propensity to become dishonest in the afternoons or evenings could be generalized to all subjects or whether the internal time of the subjects was also a factor. All humans have an internal body clock – the circadian clock – which runs with a period of approximately 24 hours. The circadian clock controls a wide variety of physical and mental functions such as our body temperature, the release of hormones or our levels of alertness. The internal clock can vary between individuals, but external cues such as sunlight or the social constraints of our society force our internal clocks to be synchronized to a pre-defined external time which may be quite distinct from what our internal clock would choose if it were to "run free". Free-running internal clocks of individuals can differ in terms of their period (for example 23.5 hours versus 24.4 hours) as well as the phases of when individuals would preferably engage in certain behaviors. Some people like to go to bed early, wake up at 5 am or 6 am on their own even without an alarm clock and they experience peak levels of alertness and energy before noon. In contrast to such "larks", there are "owls" among us who prefer to go to bed late at night, wake up at 11 am, experience their peak energy levels and alertness in the evening hours and like to stay up way past midnight.
It is not always easy to determine our "chronotype" – whether we are "larks", "owls" or some intermediate thereof – because our work day often imposes its demands on our internal clocks. Schools and employers have set up the typical workday in a manner which favors "larks", with work days usually starting around 7am – 9am. In 1976, the researchers Horne and Östberg developed a Morningness-Eveningness Questionnaire to investigate what time of the day individuals would prefer to wake up, work or take a test if it was entirely up to them. They found that roughly 40% of the people they surveyed had an evening chronotype!
If Kouchaki and Smith's finding that cheating and dishonesty increase in the late afternoons applies to both morning and evening chronotype folks, then the evening chronotypes ("owls") are in a bit of a pickle. Their peak performance and alertness times would overlap with their propensity to be dishonest. The researchers Brian Gunia, Christopher Barnes and Sunita Sah therefore decided to replicate the Kouchaki and Smith study with one major modification: They not only assessed the propensity to cheat at different times of the day, they also measured the chronotypes of the study participants. Their recent paper "The Morality of Larks and Owls: Unethical Behavior Depends on Chronotype as Well as Time of Day" confirms the Kouchaki and Smith finding that the time of day influences honesty, but the observed effects differ among chronotypes.
After assessing the chronotypes of 142 participants (72 women, 70 men; mean age 30 years), the researchers randomly assigned them to either a morning session (7:00 to 8:30 am) or an evening session (12:00 am to 1:30 am). The participants were asked to report the outcome of a die roll; the higher the reported number, the more raffle tickets they would receive for a large prize, which served as an incentive to inflate the outcome of the roll. Since a die roll is purely random, one would expect the reported averages of the die rolls to be similar across all groups if all participants were honest. Their findings: Morning people ("larks") tended to report higher die-roll numbers in the evening than in the morning – thus supporting the Kouchaki and Smith results – but evening people tended to report higher numbers in the morning than in the evening. This means that the morning morality effect and the idea of "moral exhaustion" towards the end of the day cannot be generalized to all. In fact, evening people ("owls") are more honest in the evenings.
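The logic of the die-roll paradigm rests on a simple statistical fact: a fair six-sided die has an expected value of 3.5, so groups of honest reporters should produce similar averages, and any systematic excess suggests inflation. A small simulation makes this concrete (the "inflation" model below is a hypothetical illustration, not the authors' analysis):

```python
import random
import statistics

def simulate_session(n_participants, inflation=0, seed=None):
    """Simulate one session of reported die rolls.

    Each participant rolls a fair six-sided die once. Dishonesty is
    modeled (hypothetically) as adding `inflation` to the reported
    number, capped at the die's maximum of 6.
    """
    rng = random.Random(seed)
    reports = [min(6, rng.randint(1, 6) + inflation)
               for _ in range(n_participants)]
    return statistics.mean(reports)

# Two honest groups: both means hover near the fair expectation of 3.5
honest_a = simulate_session(10_000, inflation=0, seed=1)
honest_b = simulate_session(10_000, inflation=0, seed=2)

# A group in which everyone inflates by one pip reports a clearly
# higher mean, which is the signature the researchers look for
inflated = simulate_session(10_000, inflation=1, seed=3)

print(round(honest_a, 2), round(honest_b, 2), round(inflated, 2))
```

With real data the group sizes are far smaller, so the comparison requires a significance test rather than eyeballing means, but the underlying expectation is the same.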
Not so fast, say Kouchaki and Smith in a commentary published together with the new paper by Gunia and colleagues. They applaud the new study for taking the analysis of daytime effects on cheating one step further by considering the chronotypes of the participants, but they also point out some important limitations of the newer study. Gunia and colleagues only included morning and evening people in their analysis and excluded the participants who reported an intermediate chronotype, i.e. not quite early morning "larks" and not true "owls". This is a valid criticism because newer research on chronotypes by Till Roenneberg and his colleagues at the University of Munich has shown that there is a Gaussian distribution of chronotypes. Few of us are extreme larks or extreme owls; most of us lie on a continuum. Roenneberg's approach to measuring chronotypes looks at the actual hours of sleep we get and distinguishes between our behaviors on working days and weekends because the latter may provide a better insight into our endogenous clock, unencumbered by the demands of our work schedule. The second important limitation identified by Kouchaki and Smith is that Gunia and colleagues used 12 am to 1:30 am as the "evening condition". This may be the correct time to study the peak performance of extreme owls and selected night shift workers but ascertaining cheating behavior at this hour is not necessarily relevant for the general workforce.
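The Roenneberg approach boils down to a simple computation: chronotype is estimated as the mid-point of sleep on free days, corrected for the sleep debt accumulated during the work week. A rough sketch of that calculation in Python, using illustrative sleep schedules (the specific times below are hypothetical, and this is a simplification of the published method):

```python
def midsleep(onset_h, duration_h):
    """Mid-sleep point, in hours after midnight, given sleep onset
    on a 24h clock and sleep duration in hours."""
    return (onset_h + duration_h / 2) % 24

def chronotype_msfsc(onset_work, dur_work, onset_free, dur_free):
    """Simplified Roenneberg-style chronotype (mid-sleep on free
    days, sleep-debt corrected). Assumes 5 work days, 2 free days."""
    msf = midsleep(onset_free, dur_free)
    # average daily sleep across the whole week
    dur_week = (5 * dur_work + 2 * dur_free) / 7
    if dur_free > dur_work:
        # extra sleep on free days partly repays a work-week debt,
        # so subtract half of the excess over the weekly average
        msf -= 0.5 * (dur_free - dur_week)
    return msf

# A "lark": sleeps 23:00-07:00 on work days, 23:30-08:00 on free days
lark = chronotype_msfsc(23.0, 8.0, 23.5, 8.5)
# An "owl": sleeps 01:30-08:00 on work days, 03:00-12:00 on free days
owl = chronotype_msfsc(1.5, 6.5, 3.0, 9.0)
# The owl's corrected mid-sleep falls several hours later than the lark's
print(round(lark, 2), round(owl, 2))
```

Because the measure is continuous, it captures the whole lark-to-owl spectrum rather than forcing participants into two extreme bins.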
Neither the study by Kouchaki and Smith nor the new study by Gunia and colleagues provides us with a definitive answer as to how the external time of the day (the time according to the sun and our social environment) and the internal time (the time according to our internal circadian clock) affect moral decision-making. We need additional studies with larger sample sizes which include a broad range of participants with varying chronotypes as well as studies which assess moral decision-making not just at two time points but at a range of time points (early morning, afternoon, late afternoon, evening, night, etc.). But the two studies have opened up a whole new area of research and their findings are quite relevant for the field of experimental philosophy, which uses psychological methods to study philosophical questions. If empirical studies are conducted with human subjects then researchers need to take into account the time of the day and the internal time and chronotype of the participants, as well as other physiological differences between individuals.
The exchange between Kouchaki & Smith and Gunia & colleagues also demonstrates the strength of rigorous psychological studies. Researcher group 1 makes a highly provocative assertion based on their data, researcher group 2 partially replicates it and qualifies it by introducing one new variable (chronotypes) and researcher group 1 then analyzes strengths and weaknesses of the newer study. This type of constructive criticism and dialogue is essential for high-quality research. Hopefully, future studies will be conducted to provide more insights into this question. By using the Roenneberg approach to assess chronotypes, one could potentially assess a whole continuum of chronotypes – both on working days and weekends – and also relate moral reasoning to the amount of sleep we get. Measurements of body temperature, hormone levels, brain imaging and other biological variables may provide further insight into how the time of day affects our moral reasoning.
Why is this type of research important? I think that realizing how dynamic moral judgment can be is a humbling experience. It is easy to condemn the behavior of others as "immoral", "unethical" or "dishonest" as if these are absolute pronouncements. Realizing that our own judgment of what is considered ethical or acceptable can vary because of our internal clock or the external time of the day reminds us to be less judgmental and more appreciative of the complex neurobiology and physiology which influence moral decision-making. If future studies confirm that the internal time (and possibly sleep deprivation) influences moral decision-making, then we need to carefully rethink whether the status quo of forcing people with diverse chronotypes into a compulsory 9-to-5 workday is acceptable. Few, if any, employers and schools have adapted their work schedules to accommodate chronotype diversity in human society. Understanding that individualized work schedules for people with diverse chronotypes may not only increase their overall performance but also increase their honesty might serve as another incentive for employers and schools to recognize the importance of chronotype diversity among individuals.
Brian C. Gunia, Christopher M. Barnes and Sunita Sah (2014) "The Morality of Larks and Owls: Unethical Behavior Depends on Chronotype as Well as Time of Day", Psychological Science (published online ahead of print on Oct 6, 2014).
Maryam Kouchaki and Isaac H. Smith (2014) "The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior", Psychological Science 25(1) 95–102.
Till Roenneberg, Anna Wirz-Justice and Martha Merrow. (2003) "Life between clocks: daily temporal patterns of human chronotypes." Journal of Biological Rhythms 18:1: 80-90.
Monday, September 15, 2014
A Rank River Ran Through It
It says something about a city, I suppose, when there is heated debate over who first labeled it a dirty place. The phrase “dear dirty Dublin”, used as a badge of defiant honor in Ireland’s capital to this day, is often erroneously attributed to James Joyce. Joyce used the term in Dubliners (1914), a series of linked short stories about that city and its denizens. But the phrase goes back at least to the early nineteenth century and the literary circle surrounding Irish novelist Sydney Owenson (Lady Morgan), who remains best known for her novel The Wild Irish Girl (1806), which extols the virtues of wild Irish landscapes, and the wild, though naturally dignified, princess who lived there. Compared to the fresh wilderness of the Irish West, Dublin would have seemed dirty indeed.
The city into which I was born more than a century later was still a rough and tumble place. It was also heavily polluted. This was Dublin of the 1970s.
My earliest memories of the city center come from trips I took to my father’s office in Marlborough St, just north of the River Liffey which bisects the city. My father would take an eccentric route into the city, the “back ways” as he would call them, which though not getting us to the destination as promptly as he advertised, had the benefit of bringing us on a short tour of the city and its more unkempt quarters.
My father’s cars themselves were masterpieces of dereliction. Purchased when they were already in an advanced stage of decay, he would nurse them aggressively till their often fairly prompt demise. One car that he was especially proud of, a Volkswagen Type III fastback, which had its engine to the rear, developed transmission problems and its clutch failed. His repair consisted of a cord dangling over his shoulder and crossing the back seat into the engine. A tug at a precisely timed moment would shift the gears. A shoe, attached to the end of the cord and resting on my father’s shoulder, aided the convenient operation of this system. That car, like most of the others in those less regulated times, was also a marvel of pollution generation, farting out clouds of blue-black exhaust which added to the billowy haze of leaded fumes issuing from the other disastrously maintained vehicles, all shuddering in and out of the city’s congested center at the beginning and end of each work day.
A route into the city that I especially liked took us west of the city center, and as we approached Christ Church Cathedral I would open the window to smell the roasting of the barley which emanated from the Guinness brewery in the Liberties region of the city, down by the Liffey. Very promptly I would wind up the window again as we crossed over the bridge, since the reek of that river was legendarily bad.
The Irish playwright Brendan Behan wrote in his memoir Confessions of an Irish Rebel (1965), “Somebody once said that ‘Joyce has made of this river the Ganges of the literary world,’ but sometimes the smell of the Ganges of the literary world is not all that literary.”
Historically, the River Liffey received raw sewage from the city and though a medical report from the 1880s concluded that the Liffey was not “directly injurious to the health of the inhabitants” — in the opinion of these doctors crowded living and alcohol consumption were the main culprits — the report concluded nonetheless that the Liffey’s condition “is prejudicial to the interest of the city and the port of Dublin.” It was time to clear up the mess.
The smell of the Liffey, like that of other polluted waterways, came not just from the ingredients that spilled into it, but also from algae that bloomed on the excess nutrients that both accompany the solid waste and seep into the water from the larger landscape. The death and sulfurous decay of those plants contributed to those noisome aromas.
Despite the installation of a sewage system for the city in 1906 and its expansion in the 1940s and 1950s, the smell of the river remained ripe, as Brendan Behan attested. Even in the late 1970s the smell of the river persisted and was remarked upon in popular culture. The song “Summer in Dublin” by the band Bagatelle contains the lines, “I remember that summer in Dublin/And the Liffey it stank like hell.” It was a big hit in the summer of 1978.
So why did the smell persist? Part of the problem with the tenacity of the Liffey’s pollution, and its associated odors, is that the river is a tidal one. It ebbs and flows into polluted Dublin Bay into which raw sewage continued to be dumped long after the creation and expansion of municipal sewage treatment plants. The rancid smells of the River Liffey remained powerful as I was motored over it with my father in the 1970s.
On other occasions, this time with my mother, I would get to observe the streets of Dublin city at a leisurely pedestrian pace. She would take one of her six kids into the city on her Saturday morning shopping rounds and would walk the selected child into the ground. The footpaths of the city were strewn with litter — sweet wrappers, newspapers, paper bags, plastic bags, discarded fast-food, random scraps of paper, cigarette butts — dog feces dappled the curbs, vomit pooled in doorways, the narrow streets were car-congested, and at evening-time, snug on the smoke-belching bus trundling home, I’d watch the sun sinking, gloriously crimson, hazily defined, leaving behind the bituminously smoky atmosphere of Dublin for another day.
It seemed like there was no end in sight to Dublin’s pollution problem, but clearly the situation could not have been left to go on forever. And even if a nineteenth century medical commission was not impressed that Dublin’s environmental pollution, from the river at least, posed a grievous problem, the ubiquitous squalor of the city was not conducive to the good health of its inhabitants. The stench of the river, the garbage in the streets, and the smog of the city had to be remediated. As one Reuters report from the autumn of 1988 put it: “A thick pall of smoke from thousands of coal fires has become trapped over Dublin in freezing, wind-free weather, leaving a million coughing Dubliners to face streets at midday so gloomy it looks as if night had already fallen.” The links between high levels of smog and increased death rates concerned the medical community, and a spokesperson from a major Dublin hospital reported that "Even patients without respiratory complaints have been complaining about throat irritation and coughing." (Toronto Star).
So change eventually came, some of it, admittedly, compelled by European legislation, a reasonable price for Ireland’s economic union with Europe. Acting on the Air Pollution Act, 1987, the capital city was declared a smokeless zone in 1990. It became illegal to sell or distribute bituminous coal, the smokiest kind, in all parts of Dublin city and its suburbs. By the early 1990s the city had lost the aroma of soot and the Dublin sunset lost some of its luster, but, in compensation, its air quality dramatically improved. The smoke in Dublin city dropped from 192 micrograms per cubic meter of air in December 1989 to a mere 48 micrograms the following December.
The River Liffey is generally less aromatic these days, though it is still very much a polluted urban river. Massive improvements, including the building of a new treatment plant near the harbor about ten years ago, have reduced raw sewage both in the river and in Dublin Bay. That being said, the levels of faecal coliform (that is, E. coli associated with human waste) remain "disturbingly excessive" in some stretches of the River Liffey. There are heavy odors emanating from the new plant, an expensive problem that will need to be resolved.
I glanced down at the river this past summer while I was visiting home and saw that garbage still bobs up and down in the tidal waters, or clings to the algae at its bricked-up banks, before being inexorably tugged out to sea.
Follow me on Twitter @DublinSoil for 140 character updates on my columns. Links to previous 3QD columns here.
Builders and Blocks - Engineering Blood Vessels with Stem Cells
by Jalees Rehman
Back in 2001, when we first began studying how regenerative cells (stem cells or more mature progenitor cells) enhance blood vessel growth, our group as well as many of our colleagues focused on one specific type of blood vessel: arteries. Arteries are responsible for supplying oxygen to all organs and tissues of the body and arteries are more likely to develop gradual plaque build-up (atherosclerosis) than veins or networks of smaller blood vessels (capillaries). Once the amount of plaque in an artery reaches a critical threshold, the oxygenation of the supplied tissues and organs becomes compromised. In addition to this build-up of plaque and gradual decline of organ function, arterial plaques can rupture and cause severe sudden damage such as a heart attack. The conventional approach to treating arterial blockages in the heart was to either perform an open-heart bypass surgery in which blocked arteries were manually bypassed or to place a tube-like "stent" in the blocked artery to restore the oxygen supply. The hope was that injections of regenerative cells would ultimately replace the invasive procedures because the stem cells would convert into blood vessel cells, form healthy new arteries and naturally bypass the blockages in the existing arteries.
As is often the case in biomedical research, this initial approach turned out to be fraught with difficulties. The early animal studies were quite promising and the injected cells appeared to stimulate the growth of blood vessels, but the first clinical trials were less successful. It was very difficult to retain the injected cells in the desired arteries or tissues, and even harder to track the fate of the cells. Which stem cells should be injected? Where should they be injected? How many? Can one obtain enough stem cells from an individual patient so that one could use his or her own cells for the cell therapy? How does one guide the injected cells to the correct location, and then guide the cells to form functional blood vessel structures? Would the stem cells of a patient with chronic diseases such as diabetes or high blood pressure be suitable for therapies, or would such a patient have to rely on stem cells from healthier individuals and thus risk the complication of immune rejection?
The complexity of blood-vessel generation became increasingly apparent, both when studying the biology of stem cells and when designing and conducting clinical trials. A large clinical study published in 2013 examined the impact of bone marrow cell injections in heart attack patients and concluded that these injections did not result in any sustained benefit for heart function. Other studies using injections of patients' own stem cells into their hearts had led to mild improvements in heart function, but none of these clinical studies came close to fulfilling the expectations of cardiovascular patients, physicians and researchers. The upside to these failed expectations was that they forced researchers in the field of cardiovascular regeneration to rethink their goals and approaches.
One major shift in my own field of interest, the generation of new blood vessels, was to reevaluate the validity of relying on injections of cells. How likely was it that millions of injected cells could organize themselves into functional blood vessels? Injections of cells were convenient for patients because they would not require the surgical implantation of blood vessels, but was this attempt to achieve a convenient therapy undermining its success? An increasing number of laboratories began studying the engineering of blood vessels in the lab by investigating the molecular cues which regulate the assembly of blood vessel networks, identifying molecular scaffolds which retain stem cells and blood vessel cells, and combining various regenerative cell types to build functional blood vessels. This second wave of regenerative vascular medicine is engineering blood vessels which will have to be surgically implanted into patients. Obtaining approval for such invasive implantations will be much harder than it was for the straightforward injections conducted in the first wave of studies, but most of us who have moved towards a blood vessel engineering approach feel that there is a greater likelihood of long-term success, even if it may take a decade or longer until we obtain our first definitive clinical results.
The second conceptual shift which has occurred in this field is the realization that blood vessel engineering is not only important for treating patients with blockages in their arteries. In fact, blood vessel engineering is critical for all forms of tissue and organ engineering. In the US, more than 120,000 people are awaiting an organ transplant but only a quarter of them will receive an organ in any given year. The number of people in need of a transplant will continue to grow but the supply of organs is limited and many patients will unfortunately die while waiting for an organ which they desperately need. The advances in stem cell biology have made it possible to envision creating organs or organoids (functional smaller parts of an organ) which could help alleviate the need for organs. One thing that most organs and tissues need is a network of tiny blood vessels that permeate the whole tissue: small capillary networks. For example, a liver built out of liver cells could never function without a network of tiny blood vessels which supply the liver cells with metabolites and oxygen. From an organ engineering point of view, microvessel engineering is just as important as the building of functional arteries.
In one of our recent projects, we engineered functional human blood vessels by combining bone marrow-derived stem cells with endothelial cells (the cells which coat the inside of all blood vessels). It turns out that the stem cells do not become endothelial cells but instead release a molecular signal, the protein SLIT3, which instructs the endothelial cells to assemble into networks. Using a high resolution microscope, we watched this process in real-time over a course of 72 hours in the laboratory and could observe how the endothelial cells began lining up into tube-like structures in the presence of the bone marrow stem cells. The human endothelial cells were like building blocks, and the human bone marrow stem cells were the builders "overseeing" the construction. When we implanted the assembled blood vessel structures into mice, we could see that they were fully functional, allowing mouse blood to travel through them without leaking or causing any other major problems (see image, taken from reference 3).
I am sure that SLIT3 is just one of many molecular cues released by the stem cells to assemble functional networks, and there are many additional mechanisms which still need to be discovered. We still need to learn much more about which "builders" and which "building blocks" are best suited for each type of blood vessel that we want to construct. The fact that human fat tissue can serve as an important resource for obtaining adult stem cells ("builders") is quite encouraging, but we still know very little about the overall longevity of the engineered vessels, the best way to implant them into patients, and the key molecular and biomechanical mechanisms which will be required to engineer organs with functional blood vessels. It will be quite some time until the first fully engineered organs are implanted in humans, but the dizzying rate of progress suggests that we can be quite optimistic.
References and links:
1. An overview article in "The Scientist" which describes the importance of blood vessel engineering for organ engineering (open access – can be read free of charge):
J Rehman "Building Flesh and Blood", The Scientist (2014), 28(5):48-53
2. An unusual and abundant source of adult stem cells which promote the formation of blood vessels: Fat tissue obtained from individuals who undergo a liposuction! (open access – can be read free of charge)
J Rehman "The Power of Fat" Aeon Magazine (2014)
3. The study which describes how adult stem cells release a protein (SLIT3) which organizes blood vessel cells into functional networks (open access – can be read free of charge):
J.D. Paul et al., "SLIT3-ROBO4 activation promotes vascular network formation in human engineered tissue and angiogenesis in vivo" J Mol Cell Cardiol (2013), 64:124-31.
Monday, August 18, 2014
The Psychology of Procrastination: How We Create Categories of the Future
by Jalees Rehman
"Do not put your work off till tomorrow and the day after; for a sluggish worker does not fill his barn, nor one who puts off his work: industry makes work go well, but a man who puts off work is always at hand-grips with ruin." Hesiod in "The Works and Days"
Paying bills, filling out forms, completing class assignments or submitting grant proposals – we all have the tendency to procrastinate. We may engage in trivial activities such as watching TV shows, playing video games or chatting for an hour and risk missing important deadlines by putting off tasks that are essential for our financial and professional security. Not all humans are equally prone to procrastination, and a recent study suggests that this may be due in part to the fact that the tendency to procrastinate has a genetic underpinning. Yet even an individual with a given genetic make-up can exhibit significant variability in the extent of procrastination. A person may sometimes delay initiating and completing tasks, whereas at other times that same person will immediately tackle the same type of tasks even under the same constraints of time and resources.
A fully rational approach to task completion would involve creating a priority list of tasks based on a composite score of task importance and the remaining time until the deadline. The most important task with the most proximate deadline would have to be tackled first, and the lowest priority task with the furthest deadline last. This sounds great in theory, but it is quite difficult to implement. A substantial amount of research has been conducted to understand how our moods, distractibility and impulsivity can undermine the best-laid plans for timely task initiation and completion. The recent research article "The Categorization of Time and Its Impact on Task Initiation" by the researchers Yanping Tu (University of Chicago) and Dilip Soman (University of Toronto) investigates a rather different and novel angle in the psychology of procrastination: our perception of the future.
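To make the fully rational scheme concrete, here is a minimal sketch in Python. The scoring rule (importance divided by days remaining) and the sample tasks are my own illustrative assumptions; they are not drawn from Tu and Soman's study:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Task:
    name: str
    importance: float  # subjective weight, e.g. 0 (trivial) to 10 (critical)
    deadline: date

def priority(task: Task, today: date) -> float:
    """Composite score: importance divided by days remaining.

    Tasks that are both important and due soon rise to the top.
    The exact weighting is an illustrative assumption."""
    days_left = max((task.deadline - today).days, 1)
    return task.importance / days_left

today = date(2014, 8, 1)
tasks = [
    Task("Pay bills", importance=8, deadline=date(2014, 8, 5)),
    Task("Submit grant proposal", importance=10, deadline=date(2014, 9, 30)),
    Task("Fill out forms", importance=3, deadline=date(2014, 8, 3)),
]

# Highest composite score first: the rational ordering.
for t in sorted(tasks, key=lambda t: priority(t, today), reverse=True):
    print(t.name)
```

Under this rule, an important task due in a few days outranks an even more important task with a distant deadline, which is exactly the ordering a purely rational planner would follow.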
Tu and Soman hypothesized that one reason why we procrastinate is that we do not envision time as a linear, continuous entity but instead sort future deadlines into two categories, the imminent future and the distant future. A spatial analogy to this hypothesized construct is how we categorize distances. A city 400 kilometers away may be perceived as closer to us if it lies within our own state than a city that is physically nearer (e.g. only 300 kilometers away) but located in a different state. The categories "in my state" and "outside of my state" therefore interfere with the perception of the actual physical distance.
In an experiment to test their time category hypothesis, the researchers investigated the initiation of tasks by farmers in a rural community in India as part of a larger project aimed at helping farmers develop financial literacy and skills. The participants (n=295 male farmers) attended a financial literacy lecture. The farmers learned that they would receive a special financial incentive if they opened a bank account, completed the required paperwork and accumulated at least 5,000 rupees in the account within the next 6 months. The farmers were also told they could open an account with zero deposit and complete the paperwork immediately while a bank representative was present at the end of the lecture. Alternatively, they could open the bank account at any point in time later by going to the closest branch of the bank. These lectures were held in June 2010 as well as in July 2010. In both cases, the six-month deadline was explicitly stated as being in December 2010 (for the June lectures) and in January 2011 (for the July lectures). The researchers surmised that even though the farmers were given the same six-month period to open the account and save the money, the December 2010 deadline would be perceived as the imminent future or an extension of the present because it fell in the same calendar year (2010) as the lecture, whereas the January 2011 deadline would be perceived as a far-off date in the distant future because it would fall in the next calendar year.
The results of this experiment were quite astounding: 32% of the farmers with the December 2010 deadline immediately opened the bank account, whereas only 8% of the farmers with the January 2011 deadline followed suit. The contrast was even starker when it came to actually completing the whole task and saving the required money: 28% of the farmers with the December 2010 deadline succeeded, whereas only 4% of the farmers with the January 2011 deadline were successful. Even though both groups were given the same timeframe to complete the task (exactly six months), the same-year group had a sevenfold higher success rate!
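The reported fold differences follow from simple arithmetic; the short sketch below just divides the published success rates (the percentages are the ones quoted from the study above):

```python
# Success rates (in percent) reported for the two groups of farmers.
opened = {"same_year": 32, "next_year": 8}     # opened the account immediately
completed = {"same_year": 28, "next_year": 4}  # saved the 5,000 rupees in time

open_ratio = opened["same_year"] / opened["next_year"]            # 4.0
complete_ratio = completed["same_year"] / completed["next_year"]  # 7.0

print(f"Immediate account opening: {open_ratio:.0f}-fold difference")
print(f"Full task completion: {complete_ratio:.0f}-fold difference")
```

The gap widens from fourfold at the account-opening stage to sevenfold at full completion, suggesting that the "distant future" framing hampered follow-through even more than initiation.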
To test whether their idea of categorizing time into the "like-the-present" future and the distant future could be generalized, the researchers conducted additional studies with students at the University of Toronto and the University of Chicago. These experiments yielded similar results, but also revealed that the distinction between the "like-the-present" and the distant future is not only tied to the end of the calendar year but can also occur at the end of the month. Participants who were asked in April to complete a task with a deadline of April 30th indicated a far greater willingness to initiate the task than those with a deadline of May 1st, presumably because the April group thought of the deadline as an extension of the present (the month of April).
One of the most interesting experiments in their set of studies was the investigation of whether one could tweak the temporal perception of a deadline by providing visual cues which link the future date to the present. Tu and Soman conducted the study on March 9, 2011 (a Wednesday) and told participants that the study was about judging actions. The text provided to the participants read,
"Any action can be described in many ways; however the appropriateness of these descriptions may largely depend on the occasion on which the action occurs. In today's study, we are interested in your judgment of the appropriateness of descriptions of several actions. Please pick the one that you think is most appropriate in the occasion that is given to you in this study."
The researchers then showed the participants a calendar of March 2011 and told them that all the given actions would occur on March 13, 2011 (a Sunday). The participants were divided into two groups: half received a calendar in which the whole week was highlighted in one color, thus emphasizing that the Sunday deadline belonged to the same week (the "like-the-present" group), while the control group received a standard calendar in which the weekends were colored differently from working days. The participants were provided with a list of 25 tasks and given two options for how they would describe each task, reflecting either a hands-on implementation approach or a more abstract approach. For example, for the task of "Caring for houseplants", they could choose between the hands-on option "Watering plants" or the more abstract option "Making the room look nice". Participants who saw the calendar in which the whole week (including Sunday) was depicted in the same color were significantly more likely to choose the implementation options, suggesting that the visual cue was priming their minds to think in terms of already implementing the tasks.
The work by Tu and Soman makes a strong case for the idea that we think of the future in categories and that this has a major impact on our tendency to either procrastinate or take charge and expediently initiate and complete tasks. However, the work does have some limitations, such as the fact that the researchers did not investigate whether the initial categorization is modified over time and whether specific reminders can help change the categorization. For example, if the farmers with the January 2011 deadline were approached again in the beginning of January 2011, would they then re-evaluate the "remote future" deadline and now consider it a "like-the-present" deadline that needs to be addressed immediately? Another limitation of the research article is that it does not explicitly describe the ethical review of the studies, such as whether the farmers in India knew that their data was being used for a behavioral research study and whether they provided informed consent.
This research provides fascinating insights into the science of procrastination and raises a number of important questions about how one should set deadlines. If the deadline is too far in the future, there is a much greater likelihood of thinking of it as a remote entity which may end up being ignored. If we want to ensure that tasks are initiated and completed in a timely manner, it may be important to emphasize the proximity of the deadline using visual cues (such as calendar colors) or by explicitly emphasizing its "like-the-present" nature, for example by stating "the deadline is in 30 days" instead of just mentioning a deadline date. The researchers did not study the impact of a countdown clock, but perhaps a countdown may be one way to help individuals build a cognitive bridge between the present and a looming deadline. Hopefully, government agencies, universities, corporations and other institutions which heavily rely on deadlines will pay attention to this research and re-evaluate how to convey deadlines in a manner which will reduce procrastination.
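The countdown framing could be operationalized with a small helper like the sketch below. The function name and the exact wording are hypothetical illustrations, not taken from the study:

```python
from datetime import date

def frame_deadline(deadline: date, today: date) -> str:
    """Rephrase a calendar deadline as a countdown, the framing that
    may make a deadline feel 'like the present'."""
    days = (deadline - today).days
    if days <= 0:
        return "The deadline has passed."
    return f"The deadline is in {days} day{'s' if days != 1 else ''}."

# A January 31 deadline viewed on January 1 becomes a 30-day countdown.
print(frame_deadline(date(2011, 1, 31), date(2011, 1, 1)))
```

Presenting "in 30 days" rather than "on January 31, 2011" removes the year boundary that, according to Tu and Soman, pushes a deadline into the "distant future" category.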
Yanping Tu and Dilip Soman (2014) "The Categorization of Time and Its Impact on Task Initiation" Journal of Consumer Research (published online on August 13, 2014 ahead of print).
Monday, August 11, 2014
How to say "No" to your doctor: improving your health by decreasing your health care
by Carol A. Westbrook
Has your doctor ever said to you, "You have too many doctors and are taking too many pills. It's time to cut back on both"? No? Well I have. Maybe it's time you brought it up with your doctors, too.
Do you really need a dozen pills a day to keep you alive, feeling well, and happy? Can you even afford them? Is it possible that the combination of meds that you are taking is making you feel worse, not better? Are you using up all of your sick leave and vacation time to attend multiple doctors' visits? Are you paying way too much out of pocket for office visits and pharmacy co-pays, in spite of the fact that you have very good insurance? If this applies to you, then read on.
I am not referring to those of you with serious or chronic medical conditions, such as cancer, diabetes, and heart disease, who really do need those life-saving medicines and frequent clinic visits. I am referring here to the average healthy adult, who has no major medical problems, yet is taking perhaps twice as many prescription drugs and seeing multiple doctors 3 - 4 times as often as he would have done ten or fifteen years ago. Is he any healthier for it?
There is no doubt that modern medical care has made a tremendous impact on keeping us healthy and alive. The average life expectancy has increased dramatically over the last half century, from about 67 years in 1950 to almost 78 years today, and those who live to age 65 can expect to have, on average, almost 18 additional years to live! Some of this is due to lifestyle changes, but most of the gain is due to advances in medical care, especially in two areas: cardiac disease and infectious diseases, notably the treatment of AIDS. Cancer survival is just starting to make an impact as well. But how much additional longevity can we expect to gain by piling even more medical care on healthy individuals?
Too much health care can lower rather than improve your quality of life, and possibly even shorten it. For example, women who are given estrogens to relieve menopause symptoms have a significant risk of breast cancer. Blood pressure medicines can lead to unrecognized fatigue and depression; the same can be seen with sleeping pills, muscle relaxants, and anti-anxiety meds. Unnecessary X-rays or scans can lead to unneeded biopsies, which might result in serious complications. Even yearly PSA screening for prostate cancer can harm more men than it helps. Testosterone supplements can result in dangerously high blood counts. And of course, the money you spend on medications can be substantial, and the extra time you spend going to an office visit cuts into your leisure time and your income--directly impacting your quality of life.
How do you, the patient, break this cycle? First, you have to understand its cause. I'm sure you won't be surprised by my answer, which is "money." The "medical-industrial complex" operates on a fee-for-service business model, and the way to increase profits is to increase services.
In the not-too-distant past, a person would have one General Practitioner (GP) or Primary Care Physician (PCP) who oversaw his health care. The GP would triage emergencies, treat chronic conditions such as hypertension, anemia or diabetes, diagnose new conditions that needed intervention, and, when needed, refer the patient to a specialist for a visit or two. Extremely efficient for the patient, and somewhat time-consuming for the physician who, of course, would be reimbursed for his time. But today, private insurance and the Centers for Medicare and Medicaid Services (CMS), the federal oversight agency, set limits on what can be charged for clinic visits by a GP vs. a specialist, set costs for procedures, limit the allowable length of a clinic visit, and determine what diagnoses will be covered and what won't. From an economic perspective, this payment system incentivizes multiple short doctor visits to specialists rather than one-stop shopping with a GP. The resultant fragmentation of health care leads to more treatment, more medication, and poor coordination of care (see "The Bystander Effect in Medical Care: Why do I have so many doctors not taking care of me?" May 20, 2013).
The paradigm has shifted from "one patient, one doctor, many diagnoses" to "one patient, many diagnoses, and a doctor for each diagnosis." And with each new doctor comes a new set of medications and many more return office visits, many of which are handled by mid-level providers, that is, nurse practitioners or physician assistants. Mid-level providers tend to perpetuate the status quo; they can speed a patient quickly through a routine clinic visit, but may not have the medical expertise to diagnose new problems, further increasing referrals to specialists. The latest innovation in health care, electronic medical records, further perpetuates medical inertia by including no-brainer "check boxes" for return clinic visits, automatic prescription renewals, and referrals to other specialists in the system.
How can you, the patient, ensure that you are getting only the amount of health care you need? It's not a good idea to stop medications on your own, and it can be intimidating to ask your doctor for advice on how to make do with less of him! But if you are serious about cutting back on health care, start with the following steps:
1. Be familiar with each medicine you are taking--its name, what it does, and what condition it is treating.
2. For each medication, do you still have the condition for which it was prescribed? If not, would the condition return if the medication were stopped? (Examples are hypertension, thyroid disease and diabetes). Was it prescribed for a short course of treatment that is completed, but no one bothered to discontinue the prescription? For example, if you were put on arthritis medication for a bad knee, and you subsequently had a knee replacement, the pain med should have been stopped.
3. Are you taking multiple medications for a single condition when perhaps one might suffice? Sometimes all that is needed is a dose adjustment. For example, getting the correct dose of a blood pressure medication might require many re-checks and frequent dose changes, and it is easier for a provider to merely add a second or third pill.
4. Are some of your medications expensive, or have high co-pays? For each class of drug (e.g. antibiotics, sleeping pills, acid-reducers, cholesterol medication) your insurance company has a preferred choice. See if your doctor can switch to that one instead. You might need to ask your pharmacist, or call the insurance company directly, to get their list, and then ask the prescribing doctor if it's appropriate and, if so, to change the prescription (and cancel the other one).
5. How many doctors do you see regularly? In particular, how many specialists are you seeing and how often? Find out the purpose of any return visits they schedule, and whether some of this can be done by phone or electronic messaging. Or better yet, can the follow-up be done by your PCP? Or has the problem been resolved and you are a victim of the "return to clinic" check box? You may have to make an extra visit to the specialist to get this information and end the relationship.
Once you get this information, here are some steps you can then take:
1. Discontinue as many medications as you can, or switch to acceptable, cheaper alternatives, with your doctor's assistance.
2. Review your personal list of prescribed medications, and compare it to the one in the medical record at your doctor's office. Remove all medications from the list that you are not actively taking, or that have already been discontinued, and make sure this is reflected in the medical record. And by all means, confirm that it is not on auto-renewal at your pharmacy.
3. Cut down the number of doctor's visits, once you have determined which specialists you need to see, and which ones don't add anything to your health care.
4. Prioritize and simplify your ongoing medical care. Mid-level practitioners are great for maintenance of existing chronic conditions, but when a condition changes, or there is a new problem, insist on seeing the doctor instead. (Most of my inappropriate referrals come from mid-levels who are trying to solve a problem they don't have the training to solve.)
5. Ask your PCP to interpret and prioritize your visits to specialists, and for the specialist to discuss and coordinate your care with your PCP. If your PCP is not accessible or interested, consider finding another one.
6. Make use of electronic messaging, email, or phone calls when possible, to replace clinic visits.
7. Adopt lifestyle changes suggested by your doctor that might help you avoid taking additional medication, such as weight loss, exercise, smoking cessation, diet modification. If you go through with this, ask for feedback from your doctor, who should be willing to re-evaluate your meds and your health--after all, he suggested it.
Now let's turn the tables and see how difficult this can be for the doctor. When I see someone who is stuck in the web of medical inertia, I may say, "You have too many doctors and are taking too many pills. It's time to cut back on both." I am often met with resistance. Surprisingly, many people prefer to continue on the way they are. They don't want to hear that they don't need all these medications, or that their symptoms are due to depression or anxiety. They would rather take a pill than stop smoking, or lose weight.
For the rest, I do my best to help. I'm reluctant to stop medications started by another doctor; however, I can offer to help review medications and diagnoses. I can contact the doctor and see if the medication is necessary. I'll help to find cheaper alternatives when I can. As a rule, I don't renew medications that I didn't originally prescribe. For patients whose condition I am managing, I'll try to do a lot of my follow up by email or messaging, taking advantage of the electronic record. Every little bit helps.
Cutting back on medical care is a slow process on an individual level, and we physicians are just as frustrated as you are with the excesses in the system. The situation is not going to be improved by more insurance, but by reform of the entire system--which is unlikely to happen in my lifetime unless patients get involved and start demanding a change.
When I brought up this topic with friends, I was amazed to find how many had stories to tell about their personal experience with excessive health care. Do you, too, want to make a change? Please feel free to share your stories here. Maybe we can start to make a difference.
The opinions expressed here are my own, and do not reflect those of my employer, Geisinger Health Systems.
Monday, June 30, 2014
The Road to Bad Science Is Paved with Obedience and Secrecy
by Jalees Rehman
We often laud intellectual diversity of a scientific research group because we hope that the multitude of opinions can help point out flaws and improve the quality of research long before it is finalized and written up as a manuscript. The recent events surrounding the research in one of the world's most famous stem cell research laboratories at Harvard shows us the disastrous effects of suppressing diverse and dissenting opinions.
The infamous "Orlic paper" was a landmark research article published in the prestigious scientific journal Nature in 2001, which showed that stem cells contained in the bone marrow could be converted into functional heart cells. After a heart attack, injections of bone marrow cells reversed much of the heart attack damage by creating new heart cells and restoring heart function. It was called the "Orlic paper" because the first author of the paper was Donald Orlic, but the lead investigator of the study was Piero Anversa, a professor and highly respected scientist at New York Medical College.
Anversa had established himself as one of the world's leading experts on the survival and death of heart muscle cells in the 1980s and 1990s, but with the start of the new millennium, Anversa shifted his laboratory's focus towards the emerging field of stem cell biology and its role in cardiovascular regeneration. The Orlic paper was just one of several highly influential stem cell papers to come out of Anversa's lab at the onset of the new millennium. A 2002 Anversa paper in the New England Journal of Medicine – the world's most highly cited academic journal – investigated the hearts of human organ transplant recipients. This study showed that up to 10% of the cells in the transplanted heart were derived from the recipient's own body. The only conceivable explanation was that after a patient received another person's heart, the recipient's own cells began maintaining the health of the transplanted organ. The Orlic paper had shown the regenerative power of bone marrow cells in mouse hearts, but this new paper now offered the more tantalizing suggestion that even human hearts could be regenerated by circulating stem cells in their blood stream.
A 2003 publication in Cell by the Anversa group described another ground-breaking discovery, identifying a reservoir of stem cells contained within the heart itself. This latest tour de force found that the newly uncovered heart stem cell population resembled the bone marrow stem cells because both groups of cells bore the same stem cell protein, called c-kit, and both were able to make new heart muscle cells. According to Anversa, c-kit cells extracted from a heart could be re-injected back into a heart after a heart attack and regenerate more than half of the damaged heart!
These Anversa papers revolutionized cardiovascular research. Prior to 2001, most cardiovascular researchers believed that the cell turnover in the adult mammalian heart was minimal because soon after birth, heart cells stopped dividing. Some organs or tissues such as the skin contained stem cells which could divide and continuously give rise to new cells as needed. When skin is scraped during a fall from a bike, it only takes a few days for new skin cells to coat the area of injury and heal the wound. Unfortunately, the heart was not one of those self-regenerating organs. The number of heart cells was thought to be more or less fixed in adults. If heart cells were damaged by a heart attack, then the affected area was replaced by rigid scar tissue, not new heart muscle cells. If the area of damage was large, then the heart's pump function was severely compromised and patients developed the chronic and ultimately fatal disease known as "heart failure".
Anversa's work challenged this dogma by putting forward a bold new theory: the adult heart was highly regenerative, and its regeneration was driven by c-kit stem cells which could be isolated and used to treat injured hearts. All one had to do was harness the regenerative potential of c-kit cells in the bone marrow and the heart, and millions of patients all over the world suffering from heart failure might be cured. Not only did Anversa publish a slew of supportive papers in highly prestigious scientific journals to challenge the dogma of the quiescent heart, he also happened to publish them at a unique time in history which maximized their impact.
In the year 2001, there were few innovative treatments available to treat patients with heart failure. The standard approach was to use medications that would delay the progression of heart failure. But even the best medications could not prevent the gradual decline of heart function. Organ transplants were a cure, but transplantable hearts were rare and only a small fraction of heart failure patients would be fortunate enough to receive a new heart. Hopes for a definitive heart failure cure were buoyed when researchers isolated human embryonic stem cells in 1998. This discovery paved the way for using highly pliable embryonic stem cells to create new heart muscle cells, which might one day be used to restore the heart's pump function without resorting to a heart transplant.
The dreams of using embryonic stem cells to regenerate human hearts were soon quashed when the Bush administration, citing ethical concerns, restricted federal funding in 2001 to research on a small number of pre-existing human embryonic stem cell lines. These federal restrictions and the lobbying of religious and political groups against human embryonic stem cells were a major blow to research on cardiovascular regeneration. Amidst this looming hiatus in cardiovascular regeneration, Anversa's papers appeared and showed that one could steer clear of the ethical controversies surrounding embryonic stem cells by using an adult patient's own stem cells. The Anversa group re-energized the field of cardiovascular stem cell research and cleared the path for the first human stem cell treatments in heart disease.
Instead of having to wait for the US government to reverse its restrictive policy on human embryonic stem cells, one could now initiate clinical trials with adult stem cells, treating heart attack patients with their own cells and without having to worry about an ethical quagmire. Heart failure might soon become a disease of the past. The excitement at all major national and international cardiovascular conferences was palpable whenever the Anversa group, their collaborators or other scientists working on bone marrow and cardiac stem cells presented their dizzyingly successful results. Anversa received numerous accolades for his discoveries and research grants from the NIH (National Institutes of Health) to further develop his research program. He was so successful that some researchers believed Anversa might receive the Nobel Prize for his iconoclastic work which had redefined the regenerative potential of the heart. Many of the world's top universities were vying to recruit Anversa and his group, and he decided to relocate his research group to Harvard Medical School and Brigham and Women's Hospital in 2008.
There were naysayers and skeptics who had resisted the adult stem cell euphoria. Some researchers had spent decades studying the heart and found little to no evidence for regeneration in the adult heart. They were having difficulties reconciling their own results with those of the Anversa group. A number of practicing cardiologists who treated heart failure patients were also skeptical because they did not see the near-miraculous regenerative power of the heart in their patients. One Anversa paper went as far as suggesting that the whole heart would completely regenerate itself roughly every 8-9 years, a claim that was at odds with the clinical experience of practicing cardiologists. Other researchers pointed out serious flaws in the Anversa papers. For example, the 2002 paper on stem cells in human heart transplant patients claimed that the hearts were coated with the recipient's regenerative cells, including cells which contained the stem cell marker Sca-1. Within days of the paper's publication, many researchers were puzzled by this finding because Sca-1 was a marker of mouse and rat cells – not human cells! If Anversa's group was finding rat or mouse proteins in human hearts, it was most likely due to an artifact. And if they had mistakenly found rodent cells in human hearts, so these critics surmised, perhaps other aspects of Anversa's research were similarly flawed or riddled with artifacts.
At national and international meetings, one could observe heated debates between members of the Anversa camp and their critics. The critics then decided to change their tactics. Instead of just debating Anversa and commenting about errors in the Anversa papers, they invested substantial funds and efforts to replicate Anversa's findings. One of the most important and rigorous attempts to assess the validity of the Orlic paper was published in 2004 by the research teams of Chuck Murry and Loren Field. Murry and Field found no evidence of bone marrow cells converting into heart muscle cells. This was a major scientific blow to the burgeoning adult stem cell movement, but even this paper could not deter the bone marrow cell champions.
Despite the fact that the refutation of the Orlic paper was published in 2004, the Orlic paper continues to carry the dubious distinction of being one of the most cited papers in the history of stem cell research. At first, Anversa and his colleagues would shrug off their critics' findings or publish refutations of refutations – but over time, an increasing number of research groups all over the world began to realize that many of the central tenets of Anversa's work could not be replicated, and the number of critics and skeptics increased. As the signs of irreplicability and other concerns about Anversa's work mounted, Harvard and Brigham and Women's Hospital were forced to initiate an internal investigation, which resulted in the retraction of one Anversa paper and an expression of concern about another major paper. Finally, in May 2014 a research group published a paper using mice in which c-kit cells were genetically labeled so that their fate could be tracked; it found that c-kit cells make a minimal – if any – contribution to the formation of new heart cells: a fraction of a percent!
The skeptics who had doubted Anversa's claims all along may now feel vindicated, but this is not the time to gloat. Instead, the discipline of cardiovascular stem cell biology is now undergoing a process of soul-searching. How was it possible that some of the most widely read and cited papers were based on heavily flawed observations and assumptions? Why did it take more than a decade after the first refutation was published in 2004 for scientists to finally accept that the near-magical regenerative power of the heart was a pipe dream?
One reason for this lag time is pretty straightforward: It takes a tremendous amount of time to refute papers. Funding to conduct the experiments is difficult to obtain because grant funding agencies are not easily convinced to invest in studies replicating existing research. For a refutation to be accepted by the scientific community, it has to be at least as rigorous as the original, but in practice, refutations are subject to even greater scrutiny. Scientists trying to disprove another group's claim may be asked to develop even better research tools and technologies so that their results can be seen as more definitive than those of the original group. Instead of relying on antibodies to identify c-kit cells, the 2014 refutation developed a transgenic mouse in which all c-kit cells could be genetically traced to yield more definitive results - but developing new models and tools can take years.
The scientific peer review process by external researchers is a central pillar of the quality control process in modern scientific research, but one has to be cognizant of its limitations. Peer review of a scientific manuscript is routinely performed by experts for all the major academic journals which publish original scientific results. However, peer review only involves a "review", i.e. a general evaluation of major strengths and flaws, and peer reviewers do not see the original raw data nor are they provided with the resources to replicate the studies and confirm the veracity of the submitted results. Peer reviewers rely on the honor system, assuming that the scientists are submitting accurate representations of their data and that the data has been thoroughly scrutinized and critiqued by all the involved researchers before it is even submitted to a journal for publication. If peer reviewers were asked to actually wade through all the original data generated by the scientists and even perform confirmatory studies, then the peer review of every single manuscript could take years and one would have to find the money to pay for the replication or confirmation experiments conducted by peer reviewers. Publication of experiments would come to a grinding halt because thousands of manuscripts would be stuck in the purgatory of peer review. Relying on the integrity of the scientists submitting the data and their internal review processes may seem naïve, but it has always been the bedrock of scientific peer review. And it is precisely the internal review process which may have gone awry in the Anversa group.
Just as Pygmalion fell in love with Galatea, researchers fall in love with the hypotheses and theories that they have constructed. To minimize the effects of these personal biases, scientists regularly present their results to colleagues within their own groups at internal lab meetings and seminars, or at external institutions and conferences, long before they submit their data to a peer-reviewed journal. These preliminary presentations are intended to spark discussions, inviting the audience to challenge the veracity of the hypotheses and the data while the work is still in progress. Sometimes fellow group members are truly skeptical of the results; at other times they take on the devil's advocate role to see if they can find holes in their group's own research. The larger a group, the greater the chance that one will find colleagues within it with dissenting views. This type of feedback is a necessary internal review process which provides valuable insights that can steer the direction of the research.
Considering the size of the Anversa group – consisting of 20, 30 or even more PhD students, postdoctoral fellows and senior scientists – it is puzzling why the discussions among the group members did not already internally challenge their hypotheses and findings, especially in light of the fact that they knew extramural scientists were having difficulties replicating the work.
Retraction Watch is one of the most widely read scientific watchdogs which tracks scientific misconduct and retractions of published scientific papers. Recently, Retraction Watch published the account of an anonymous whistleblower who had worked as a research fellow in Anversa's group and provided some unprecedented insights into the inner workings of the group, which explain why the internal review process had failed:
"I think that most scientists, perhaps with the exception of the most lucky or most dishonest, have personal experience with failure in science—experiments that are unreproducible, hypotheses that are fundamentally incorrect. Generally, we sigh, we alter hypotheses, we develop new methods, we move on. It is the data that should guide the science.
In the Anversa group, a model with much less intellectual flexibility was applied. The "Hypothesis" was that c-kit (cd117) positive cells in the heart (or bone marrow if you read their earlier studies) were cardiac progenitors that could: 1) repair a scarred heart post-myocardial infarction, and: 2) supply the cells necessary for cardiomyocyte turnover in the normal heart.
This central theme was that which supplied the lab with upwards of $50 million worth of public funding over a decade, a number which would be much higher if one considers collaborating labs that worked on related subjects.
In theory, this hypothesis would be elegant in its simplicity and amenable to testing in current model systems. In practice, all data that did not point to the "truth" of the hypothesis were considered wrong, and experiments which would definitively show if this hypothesis was incorrect were never performed (lineage tracing e.g.)."
Discarding data that might have challenged the central hypothesis appears to have been a central principle.
According to the whistleblower, Anversa's group did not just discard undesirable data, they actually punished group members who would question the group's hypotheses:
"In essence, to Dr. Anversa all investigators who questioned the hypothesis were "morons," a word he used frequently at lab meetings. For one within the group to dare question the central hypothesis, or the methods used to support it, was a quick ticket to dismissal from your position."
The group also created an environment of strict information hierarchy and secrecy which is antithetical to the spirit of science:
"The day to day operation of the lab was conducted under a severe information embargo. The lab had Piero Anversa at the head with group leaders Annarosa Leri, Jan Kajstura and Marcello Rota immediately supervising experimentation. Below that was a group of around 25 instructors, research fellows, graduate students and technicians. Information flowed one way, which was up, and conversation between working groups was generally discouraged and often forbidden.
Raw data left one's hands, went to the immediate superior (one of the three named above) and the next time it was seen would be in a manuscript or grant. What happened to that data in the intervening period is unclear.
A side effect of this information embargo was the limitation of the average worker to determine what was really going on in a research project. It would also effectively limit the ability of an average worker to make allegations regarding specific data/experiments, a requirement for a formal investigation."
This segregation of information is a powerful method to maintain authoritarian rule and is more typical of terrorist cells or intelligence agencies than of a scientific lab, but it would certainly explain how the Anversa group was able to mass-produce numerous irreproducible papers without any major dissent from within the group.
In addition to the secrecy and segregation of information, the group also created an atmosphere of fear to ensure obedience:
"Although individually-tailored stated and unstated threats were present for lab members, the plight of many of us who were international fellows was especially harrowing. Many were technically and educationally underqualified compared to what might be considered average research fellows in the United States. Many also originated in Italy where Dr. Anversa continues to wield considerable influence over biomedical research.
This combination of being undesirable to many other labs should they leave their position due to lack of experience/training, dependent upon employment for U.S. visa status, and under constant threat of career suicide in your home country should you leave, was enough to make many people play along.
Even so, I witnessed several people question the findings during their time in the lab. These people and working groups were subsequently fired or resigned. I would like to note that this lab is not unique in this type of exploitative practice, but that does not make it ethically sound and certainly does not create an environment for creative, collaborative, or honest science."
Foreign researchers are particularly dependent on their employment to maintain their visa status and the prospect of being fired from one's job can be terrifying for anyone.
This is an anonymous account of a whistleblower and as such, it is problematic. The use of anonymous sources in science journalism could open the doors for all sorts of unfounded and malicious accusations, which is why the ethics of using anonymous sources was heavily debated at the recent ScienceOnline conference. But the claims of the whistleblower are not made in a vacuum – they have to be evaluated in the context of known facts. The whistleblower's claim that the Anversa group and their collaborators received more than $50 million to study bone marrow cell and c-kit cell regeneration of the heart can be easily verified at the NIH's public RePORTER grant-funding website. The whistleblower's claim that many of the Anversa group's findings could not be replicated is also a verifiable fact. It may seem unfair to condemn Anversa and his group for creating an atmosphere of secrecy and obedience which undermined the scientific enterprise, caused torment among trainees and wasted millions of dollars of taxpayer money simply based on one whistleblower's account. However, if one looks at the entire picture of the amazing rise and decline of the Anversa group's foray into cardiac regeneration, then the whistleblower's description of the atmosphere of secrecy and hierarchy seems very plausible.
The investigation of Harvard into the Anversa group is not open to the public and therefore it is difficult to know whether the university is primarily investigating scientific errors or whether it is also looking into such claims of egregious scientific misconduct and abuse of scientific trainees. It is unlikely that Anversa's group is the only group that might have engaged in such forms of misconduct. Threatening dissenting junior researchers with a loss of employment or visa status may be far more common than we think. The gravity of the problem requires that the NIH – the major funding agency for biomedical research in the US – should look into the prevalence of such practices in research labs and develop safeguards to prevent the abuse of science and scientists.
Monday, May 12, 2014
When are you past your prime?
by Emrys Westacott
Recently I had a discussion with a couple of old friends–all of us middle-aged guys–about when one's powers start to decline. God only knows why this topic came up, but it seems to have become a hardy perennial of late. My friends argued that in just about all areas, physical and mental, we basically peak in our twenties, and by the time we turn forty we're clearly on the rocky road to decrepitude.
I disagreed. I concede immediately that this is true of most, perhaps all, physical abilities: speed, strength, stamina, agility, hearing, eyesight, the ability to recover from injury, and so on. The decline after forty may be slight and slow, but it's a universal phenomenon. Of course, we can become fitter through exercise and the eschewing of bad habits, but any improvement here is made possible by our being out of shape in the first place.
What about mental abilities? Again, it's pretty obvious that some of these typically decline after forty: memory, processing speed, the ability to think laterally, perhaps. Here too, the decline may be very gradual, but these capacities clearly do not seem to improve in middle age. Still, I think my friends focus too much on certain kinds of ability and generalize too readily from these across the rest of what we do with our minds. More specifically, I suspect they view the cognitive capabilities that figure prominently in and are especially associated with mathematics and science as somehow the core of thinking in general. Because of this, and because these capacities are more abstract and can be exercised before a person has acquired a great deal of experience or knowledge, certain abilities have come to be identified with sharpness as such, and one's performance at tasks involving quick mental agility or analytic problem solving is taken as a measure of one's raw intellectual horsepower.
A belief in pure ability, disentangled from experiential knowledge, underlies notions like IQ. It has had a rather inglorious history, and it has been used at times to justify a distribution of educational resources favouring those who are already advantaged. Today it continues to interest those who prefer to see any assessments or evaluations expressed quantitatively wherever possible, a preference that also reflects the current cultural hegemony of science. Yet what matters to us, really, shouldn't be abilities in the abstract (how quickly we can calculate, or how successfully we can recall information) but what we actually do with these or any other abilities we possess. Is there any reason to suppose that we make better use of what we've got before we're forty?
The prevailing view has long been that in the sciences people do their most important, original and creative work early. Einstein reportedly said that "a person who has not made his great contribution to science by the age of thirty will never do so." But he would say that, wouldn't he? After all, he worked out the theory of special relativity when he was twenty-six. But Einstein was perhaps generalizing hastily from his own case. A recent study entitled "Age and Scientific Genius," published by the National Bureau of Economic Research, casts doubt on the prevailing view. After reviewing an extensive literature on the topic, the authors conclude:
In contrast to common perceptions, most great scientific contributions are not the product of precocious youngsters but rather come disproportionately in middle age. Moreover, perceptions that some fields, such as physics, feature systematically younger contributions than others do not stand up to empirical scrutiny.
Interestingly, the average age at which scientists produce their most important work is now several years older than it was in the early twentieth century when Einstein, Bohr, Heisenberg and co. were revolutionizing physics. One possible explanation of this is that at that time, because of the great paradigm shifts that had just taken place, young scientists didn't have to spend so much time learning about earlier theories that had been superseded. Today, however, the "burden of knowledge" that has to be assumed before one can expect to make an original contribution is greater.
But my main objection to my friends' claims about cognitive decline is not that they are wrong about the abilities central to scientific thinking, even if they are unduly pessimistic. After all, honesty obliges me to note that the same study of age and scientific genius cited above also makes this observation:
one of the salient features of Nobel Prize winners and great technological innovators over the 20th century is that, while contributions at young ages have become increasingly rare, the rate of decline in innovation potential later in life remains steep.
Sobering stuff if one happens to be, as the French say, d'un certain âge. No, in my view, the strongest objection to the claim that our mental powers peak in our twenties, or even in our thirties, is that in fields like literature, musical composition, and the visual arts, so many masterpieces are produced by people who are well past forty.
Now, as a philosopher I don't usually like to dirty my hands by doing empirical research, but in this case data is undeniably relevant. It's also interesting in its own right. Let's start with the visual arts. Since I don't claim any sort of expertise here, I took a shortcut and used as my representative sample the ten works that Guardian art critic Jonathan Jones considers "the greatest works of art ever." In two cases, the Chauvet cave paintings and the Parthenon sculptures, we can't say how old the artist was. But here are the other eight works, with the age of the artist when the work was completed given in brackets.
· Leonardo da Vinci, The Foetus in the Womb (c 58-61)
· Rembrandt, Self-Portrait with Two Circles (c 59-63)
· Jackson Pollock, One: Number 31 (38)
· Velázquez, Las Meninas (c 58)
· Picasso, Guernica (55)
· Caravaggio, The Beheading of St John the Baptist (c 36)
· Michelangelo, Prisoners (c 44-57)
· Cézanne, Mont Sainte-Victoire (painted 1902-4) (63-65)
Only two of these works were produced by artists under forty. And if Caravaggio and Pollock didn't produce too many more masterpieces after the ones mentioned here, it wasn't necessarily due to declining powers: Caravaggio died at thirty-nine, Pollock at forty-four.
How about classical composers? Here, I didn't find a convenient list of "ten greatest compositions ever," so I simply made my own list of ten celebrated works by composers who had lived well beyond forty (which excludes the likes of Mozart, Mendelssohn, Schubert, and Chopin) and would figure high up on anyone's list of "greatest classical composers." The selection isn't random; it's made with a point to prove in mind. But I think it does that rather effectively since there is widespread agreement that the works mentioned are among the greatest produced by the composer in question. Again, the age of the composer when the work was completed is given in brackets.
· Bach, Mass in B minor (64)
· Handel, Messiah (57)
· Haydn, The Creation (66)
· Beethoven, Ninth Symphony (54)
· Verdi, Otello (74)
· Wagner, Götterdämmerung (61)
· Tchaikovsky, Sixth Symphony (53)
· Dvorak, New World Symphony (52)
· Mahler, Das Lied von der Erde (48)
We might note in passing that several of these composers produced acclaimed masterpieces at an even later date (Verdi's Falstaff, for instance, was completed when he was seventy-nine), and in some cases, the only thing preventing them from doing this was that they dropped dead not long after finishing the work mentioned. Tchaikovsky, for instance, died nine days after conducting the first performance of his sixth symphony.
Literature tells a similar story. Many writers have produced what is widely regarded as their finest work long past the age of forty. Feeding, as Wittgenstein says we shouldn't, on a diet of one-sided examples, drawn exclusively, I admit, from the Western canon, I offer the following fifteen instances to support my general point. The number in brackets is the age of the author when the work was published or finished.
· Sophocles, Oedipus at Colonus (c. 90)
· Dante, The Divine Comedy (49-53)
· Chaucer, The Canterbury Tales (55)
· Cervantes, Don Quixote Part I (57), Part II (67)
· Defoe, Robinson Crusoe (59)
· Swift, Gulliver's Travels (59)
· Eliot, Daniel Deronda (57)
· Hugo, Les Miserables (60)
· Tolstoy, Anna Karenina (49)
· Dostoyevsky, The Brothers Karamazov (59)
· Hardy, Tess of the D'Urbervilles (51)
· James, The Wings of a Dove (59)
· Wharton, The Age of Innocence (58)
· Morrison, Beloved (56)
One could extend this list pretty much indefinitely, but there is no need to, given the status of the works mentioned, many of which represent their creator's most acclaimed artistic achievement. Of course, there are many literary masterpieces written by authors younger than forty, but it is remarkable how often, in such cases, the writer died young, quite possibly with their best works still to come. Jane Austen died at forty-one; Emily Brontë at thirty; Anton Chekhov at forty-four; Franz Kafka at thirty-nine. To be sure, there are some who produce their best work in their twenties or thirties and never produce much of comparable quality afterwards despite a long life. Melville published Moby Dick when he was thirty-two; Wordsworth had written nearly all his best poetry by the time he was forty. But such cases, while not exceptional, are certainly not typical. Anyway, my point is not to deny that great art can be produced by young people; it is to argue that the many great works of art produced by people in middle age and beyond support the idea that some of our important cognitive abilities can continue to grow rather than decline during those years.
On the face of it, I would say the evidence presented here falsifies the thesis that we are cognitively declining once we're past thirty, or even forty. But how might someone who wishes to defend this claim respond? Well, they might argue that after forty all our basic cognitive functions are indeed declining, but we are good at finding ways to compensate for this, rather as a soccer player in his mid-thirties masks his lack of pace with more astute positional awareness. But then the question arises: why not count this sort of ability as an important function that improves as one ages? Or they might argue that what makes the great achievements of the mature years possible is the greater knowledge base—both of skills (know how) and subject matter (know that)—which long experience brings. To this one could respond in a similar manner, that making good use of one's experience is another cognitive function that often improves with age. And if that seems a little abstract, even casuistic, one could point to other, more specific abilities that it is plausible to believe can continue to develop in middle age and that help to explain mature achievements like Paradise Lost or The Brothers Karamazov: for instance, the capacity for empathy, objectivity, self-awareness, and a synthetic grasp of complex wholes—all of them elements of what we call wisdom.
Another objection to my argument could be that the geniuses I cite are not representative of humanity in general. Perhaps one of the things that differentiates them from us ordinary mortals is precisely the fact that their cognitive decline kicks in unusually late, which enables them to put their growing wealth of experience to exceptionally good use. Against this idea, though, I would argue that the evidence against a general deterioration of all one's basic faculties could be culled just as well from people working in many fields: sports coaches, politicians, lawyers, musicians, film-makers…
Finally, anyone who thinks I've been criticizing a straw man can respond appropriately with a cheap ad hominem, pointing out that my thesis is patently self-serving, coming as it does from one who is much closer to sixty than to forty. In response, I would first remind the critic that the so-called straw men in question are good friends of mine and should not be treated so dismissively. And second, I will appeal to the authority of William James, who, in his famous essay "The Will to Believe," affirms that there are circumstances where "the desire for a certain kind of truth . . . brings about that special truth's existence."
Monday, April 28, 2014
Does Literary Fiction Challenge Racial Stereotypes?
by Jalees Rehman
A book is a mirror: if a fool looks in, do not expect an apostle to look out.
Georg Christoph Lichtenberg (1742-1799)
Reading literary fiction can be highly pleasurable, but does it also make you a better person? Conventional wisdom and intuition lead us to believe that reading can indeed improve us. However, as the philosopher Emrys Westacott has recently pointed out in his essay for 3Quarksdaily, we may overestimate the capacity of literary fiction to foster moral improvement. A slew of scientific studies have taken on the task of studying the impact of literary fiction on our emotions and thoughts. Some of the recent research has centered on the question of whether literary fiction can increase empathy. In 2013, Bal and Veltkamp published a paper in the journal PLOS One showing that subjects who read excerpts from literary texts scored higher on an empathy scale than those who had read a nonfiction text. This increase in empathy was predominantly found in the participants who felt "transported" (emotionally and cognitively involved) into the literary narrative. Another 2013 study published in the journal Science by Kidd and Castano suggested that reading literary fiction texts increased the ability to understand and relate to the thoughts and emotions of other humans when compared to reading either non-fiction or popular fiction texts.
Scientific assessments of how fiction affects empathy are fraught with difficulties and critics raise many legitimate questions. Do "empathy scales" used in psychology studies truly capture the psychological phenomenon of "empathy"? How long does the effect of reading literary fiction last and does it translate into meaningful shifts in behavior? How does one select appropriate literary fiction texts and control texts, and conduct such studies in a heterogeneous group of participants who probably have very diverse literary tastes? Kidd and Castano, for example, used an excerpt of The Tiger's Wife by Téa Obreht as a literary fiction text because the book was a finalist for the National Book Award, whereas an excerpt of Gone Girl by Gillian Flynn was used as a ‘popular fiction' text even though it was long-listed for the prestigious Women's Prize for Fiction.
The recent study "Changing Race Boundary Perception by Reading Narrative Fiction" led by the psychology researcher Dan Johnson from Washington and Lee University took a somewhat different approach. Instead of assessing global changes in empathy, Johnson and colleagues focused on a more specific question. Could the reading of a fictional narrative change the perception of racial stereotypes?
Johnson and his colleagues chose an excerpt from the novel "Saffron Dreams" by the Pakistani-American author Shaila Abdullah. In this novel, the protagonist is a recently widowed pregnant Muslim woman, Arissa, whose husband Faizan was working in the World Trade Center on September 11, 2001 and was killed when the building collapsed. The excerpt from the novel provided to the participants in Johnson's research study describes a scene in which Arissa is traveling alone late at night and is attacked by a group of male teenagers. The teenagers mock her and threaten her with a knife because of her Muslim head-scarf (hijab), use racial and ethnic slurs, and make references to the 9/11 attacks. The narrative excerpt does not specifically mention the word Caucasian, but one of the attackers is identified as blond and another one has a swastika tattoo. They do not believe her when she tries to explain that she was also a victim of the 9/11 attacks and instead refer to her as belonging to a "race of murderers".
The researchers used a second text in their experiment, a synopsis of the literary excerpt from Saffron Dreams. This allowed Johnson and colleagues to distinguish the effects of the literary narrative style, with its inner monologue and description of emotions, from the effects of the content alone. Samples of the literary text and the synopsis used by the researchers can be found at the end of this article (scroll down) for readers who would like to compare their own reactions to the two texts.
The researchers recruited 68 U.S. participants (mean age 36 years, roughly half female, 81% Caucasian, reporting seven different religious affiliations, none of them Muslim) and randomly assigned them to the full literary narrative group (33 participants) or the synopsis group (35 participants). After the participants read the texts, they were asked to complete a number of questions about the text and its impact on them. They were also presented with 18 male faces that the researchers had designed with special software so that they appeared ambiguous in terms of Caucasian or Arab characteristics. For example, the faces combined blue eyes with darker skin tones. The participants were asked to grade the faces as being:
1) Arab
2) mixed, more Arab than Caucasian
3) mixed, more Caucasian than Arab
4) Caucasian
The participants were also asked to estimate the genetic overlap between Caucasians and Arabs on a scale from 0% to 100%.
Participants in the narrative fiction group were more likely to choose one of the ambiguous options (mixed, more Arab than Caucasian or mixed, more Caucasian than Arab) and less likely to choose the categorical options (Arab or Caucasian) than those who read the synopsis. Even more interesting is the finding that the average percentage of genetic overlap between Caucasians and Arabs estimated by the synopsis group was 33%, whereas it was 57% in the narrative fiction group.
Both of these estimates are way off. The genetic overlap between any one human being and another human being on our planet is approximately 99.9%. Even much of the 0.1% variation in the human genome sequences is not due to 'racial' differences. As pointed out in a Nature Genetics article by Lynn Jorde and Stephen Wooding, approximately 90% of total genetic variation between humans would be present in a collection of individuals from any one continent (Asia, Europe or Africa). Only an additional 10% genetic variation would be found if the collection consisted of a mixture of Europeans, Asians and Africans.
It is surprising that both groups of study participants heavily underestimated the genetic overlap between Arabs and Caucasians, and that simply reading a fictional text changed their views of the human genome. The latter finding is also a red flag: it points to a poor state of general knowledge about genetics, so fragile that views can be swayed by a nonscientific literary text.
This study is the first to systematically test the impact of reading literary fiction on an individual's assessment of race boundaries and genetic similarity. It suggests that fiction can indeed blur the perception of race boundaries and challenge our stereotypes. The text chosen by the researchers is especially well-suited to defy stereotypical views held by the readers. The protagonist's Muslim husband was killed in the 9/11 attacks and she herself is being harassed by non-Muslim thugs. This may challenge assumptions held by some readers that only non-Muslims were the victims of the 9/11 attacks.
Reading the narrative text seemed to affect the readers far beyond the content matter – the story of a Muslim woman who shows significant courage while being threatened. The faces shown to the study participants were those of men, and the question of genetic overlap between Caucasians and Arabs was a rather abstract one that had little to do with Arissa's story. Perhaps Arissa's story had a broader effect on the readers. The study did not measure the impact of the narrative on additional stereotypes or assumptions held by the readers, such as those regarding other races or sexual orientations, but this is a question that ought to be investigated.
One of the limitations of the study is that it assessed the impact of the story only at a single time-point, immediately after reading the text. Without measuring the effect a few days or weeks later, it is difficult to ascertain whether this was a lasting effect. Another limitation of this study is that it purposefully chose an anti-stereotypical text, but did not test the opposite hypothesis, that some fictional narratives may potentially foster negative stereotypes.
One of my earliest memories of an English-language novel about Muslim characters is the spy novel "The Mahdi" by the British author A.J. Quinnell (pen name for Philip Nicholson), written in 1981. The basic plot is that (spoiler alert) US and British intelligence agencies want to manipulate and control the Muslim world by installing a 'Mahdi', the long-awaited spiritual and political leader of Muslims foretold by Muslim tradition. The ridiculous part of the plan is that the puppet leader is accepted by the Muslim world as the true incarnation of the Mahdi because of a green laser beam emanating from a satellite. The beam incinerates a sacrificial animal in front of a crowd of millions of Muslims at the Hajj pilgrimage and convinces them (and the rest of the Muslim world) that God sent this green laser beam as a sign. This novel portrayed Muslims as gullible idiots who would simply accept the divine nature of a green laser beam. One can only wonder what impact reading an excerpt from that novel would have had on the perception of race boundaries by study participants.
The study by Johnson and colleagues is an important contribution to the research of how reading can change our perceptions of race and possibly stereotypes in general. It shows that reading fiction can blur the perception of race boundaries, but it also raises a number of additional questions about how long this effect lasts, how pervasive it is and whether fiction might also have the opposite effect. Hopefully, these questions will be addressed in future research studies.
Image Credit: Saffron Woman by N.M. Rehman (generated from an attribution-free, public domain photograph)
Dan R. Johnson, Brandie L. Huffman & Danny M. Jasper (2014). Changing Race Boundary Perception by Reading Narrative Fiction. Basic and Applied Social Psychology, 36(1), 83-90. DOI: 10.1080/01973533.2013.856791
Excerpt of the literary fiction sample from "Saffron Dreams" by Shaila Abdullah
This is just an excerpt from the narrative sample used by the researchers, which was 3,108 words in length (pages 57-64 from the book):
"I got off the northbound No. 2 IRT and found out almost immediately that I was not alone. The late October evening inside the station felt unusually weighty on my senses.
I heard heavy breathing behind me. Angry, smoky, scared. I could tell there were several of them, probably four. Not pros, perhaps in their teens. They walked closer sometimes, and other times the heavy thud of spiked boots on concrete and clanking chains receded into the distance. They walked like boys wanting to be men. They fell short. Why was there no fear in my heart? Probably because there was no more room in my heart for terror. When horror comes face-to-face with you and causes a loved one's death, fear leaves your heart. In its place, merciful God places pain. Throbbing, pulsating, oozing pus, a wound that stays fresh and raw no matter how carefully you treat it. How can you be afraid when you have no one to be fearful for? The safety of your loved ones is what breeds fear in your heart. They are the weak links in your life. Unraveled from them, you are fearless. You can dangle by a thread, hang from the rooftop, bungee jump, skydive, walk a pole, hold your hand over the flame of a candle. Burnt, scalded, crashed, lost, dead, the only loss would be to your own self. Certain things you are not allowed to say or do. Defiant as I am, I say and do them anyway.
And so I traveled with a purse that I held protectively on one side. My hijab covered my head and body as the cool breeze threatened to unveil me. I laughed inwardly as I realized I was more afraid of losing the veil than of being mugged. The funny part of it is, I desperately wanted to lose my hijab when I came to America, but Faizan had stood in my way. For generations, women in his household had worn the veil, although none of them seemed particularly devout. It's just something that was done, no questions asked, no explanations needed. My argument was that we should try to assimilate into the new culture as much as possible, not stand out. Now that he was gone, losing the hijab meant losing a portion of our time together.
It had been just 41 days. My iddat, bereavement period, was over. Technically I was a free woman, not tied to anyone, but what could I do about the skeletons in my closet that wouldn't leave me alone?"
Excerpt of the Synopsis used by the researchers as a comparator:
This is the corresponding excerpt from the synopsis used by the researchers. The full-length synopsis was 491 words long:
"The scene starts with Arissa getting off the subway train. She is being followed. Most commuters have already returned home, so it is not the safest time to be traveling alone. Four people are walking behind her. Initially confused by the lack of fear in her heart, she realizes that it is the consequence of losing someone so close to her. It is ironic that she is wearing her hijab, a Muslim veil. She wanted to get rid of it when she came to America, but her husband, Faizon, insisted she keep it. Following his death, keeping the hijab was a way of keeping some of their time together. It has been 41 days since the attack, and Arissa's iddat, bereavement period, is over. She is a free woman, but cannot put aside her grave feelings of loss."
Monday, April 21, 2014
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art
by Yohan J. John
No one knows exactly how life began, but a pivotal chapter in the story was the formation of the first single-celled organism -- the common ancestor to every living thing on the planet. I like to think of the birth of life as the creation of the first boundary -- the cell membrane. That first cell membrane enclosed a drop of the primordial soup, creating a separation between inside and outside, and between life and non-life. Through this act of individuation the cell could become a controlled environment: a chemical safe zone for the sensitive molecular machinery needed to maintain integrity and facilitate replication. The game of life consists in large part of perpetuating the difference between inside and outside for as long as possible. Death, then, is the dissolution of difference. But the paradox at the heart of life is that the inside cannot survive without the outside. The cell requires raw materials -- nutrients and energy -- to sustain itself and to reproduce, and these must be sought outside the safe zone, in the wild and unpredictable outside world.
The cell membrane has a dichotomous role. It must preserve the cell’s identity as an entity that is distinct from everything outside it, but it must not be an impenetrable wall. It must be a gateway through which the cell can absorb raw material and eject waste, but it cannot allow the inside to become inundated by the outside. It meets this challenge by being selectively permeable, carefully overseeing the traffic between the inside and the outside. The cell membrane must also be flexible, because it serves in locomotion and consumption. In a single-celled organism, the cell membrane is therefore a primitive sense organ, a transportation system and a digestive system, all rolled into one.
The birth of life was a moment of cleaving: when the first cell membrane enveloped its drop of primordial ooze, it cleaved the inside from the outside, but it also became the conduit through which the inside could cleave to the outside. Like Janus, the two-faced Roman god of beginnings and endings, of doors and passageways, the cell membrane is a sentry looking in two directions simultaneously. Given its role in cellular transaction, transition and transformation, the cell membrane’s function might even be described as a precursor to intelligence.
The connection between boundaries and intelligence may run quite deep. In multicellular organisms like humans, the skin is the boundary between inside and outside. Skin cells, as it turns out, are related to neurons. During embryonic development, cells in the ectoderm, which is the outermost layer of the embryo, gradually differentiate to become the cells of the skin and the nervous system. (Researchers have recently found ways of turning skin cells into neurons, suggesting that the line between these two kindred cells may be somewhat permeable.) The skin of a multicellular organism is much like the cell membrane of a single cell: it separates inside from outside, providing a physical boundary for the organism. But the inkling of intelligence in that first semipermeable membrane finds its full expression in the nervous system, which patrols a very different sort of boundary: the line between predictable and unpredictable, between known and unknown.
Life is an obstacle course full of things an organism needs or desires, like food and shelter, and things it would prefer to avoid, like predators or foul weather. Maximizing the good while minimizing the bad requires being able to use patterns in the environment to anticipate what is going to happen. Plants must be sensitive to the rhythmic pattern of the seasons. Animals in turn must predict the patterns of plants and other animals. The evolution of the central nervous system -- the brain and the spinal cord -- was a great leap forward in the pattern-recognition capabilities of living things. The ability to recognize and categorize the patterns in nature and use them to survive and thrive is central to intelligence. It allows living things to find (and create) islands of order and stability in a swirling sea of change and uncertainty.
But it’s dangerous to just stay put once you’ve found an island of order. Resources are limited and change is the only constant -- the boundary between the solid ground of reliable knowledge and the encircling sea of unpredictability is in a state of flux. Nature seems to always find a way of casting us out of the gardens of Eden we create or discover. A pattern-seeker must be vigilant, staying on the lookout for unforeseen dangers and new opportunities. This vigilance takes the form of exploration, and even very simple animals do it. Insect colonies have specialized scouts that search for fresh sources of food. Introduce a new object into the cage of a lab rat, and the first thing it does is investigate it thoroughly.
We tend to describe the behavior of animals in purely utilitarian terms. The exploratory behavior of rats, or birds, or bees, is just a combination of foraging for food, looking for mates, and keeping an eye out for predators. When it comes to human culture, however, utilitarianism can often seem like a bit of a stretch. Is it fear or hunger that drives people to investigate the depths of the ocean, or the far reaches of space?
We humans get bored on our islands of order, even though we need them for our survival and sanity. We also like to sail off into the unknown from time to time. What constitutes the unknown varies from person to person -- it’s not just scientists or philosophers that contend with it. Only a fraction of the world’s population has the inclination and the good fortune to experience first hand the outer limits of scientific knowledge, but a far larger number of people can contend with the boundaries of their worldviews in the domains of art and culture. The edge is where the action is -- on the shoreline where the chaotic sea meets the tranquil beach. But what is it that drives us to the experiential edge in the first place? And does it have anything in common with the forces that drive living things out of their comfort zones in search of sustenance?
The difference between a desire and a drive is that a desire subsides when the goal is reached, whereas a drive is independent of the attainment of the goal -- the act of striving becomes pleasurable in itself. Living beings have a variety of desires that can be temporarily satiated, but the lust for life is a drive, not a desire. In the long run life appears to revel in the very attempt to perpetuate itself. Intelligent beings, meanwhile, seem to revel in the attempt to expand their islands of order, fighting back the lapping waves of the unknown.
We have a name for the drive towards the unknown -- it’s called curiosity. Jürgen Schmidhuber, an artificial intelligence researcher, has a theory of “computational aesthetics” that offers us a vivid mathematical analogy for curiosity. The theory can be summed up in one bold assertion: that interestingness is the “first derivative” of beauty. Readers who detect a whiff of scientific imperialism will hopefully bear with me as I unpack this idea, which need not be taken as anything more than playful speculation. I admit, colloquial and intuitive concepts like “beauty” or “interestingness” often get bent out of shape a bit when scientists examine them, but this is not necessarily a bad thing. Sometimes we need to distance ourselves from our intuitions to discern their outlines more clearly.
According to Schmidhuber’s computational theory of aesthetics, the subjective beauty of a thing is defined as the minimum number of bits required to describe it. Since descriptions vary from person to person, beauty is in the eye of the beholder. A definition of beauty based on bits of information is not in itself particularly alluring, but it can be improved if we see it as an attempt to capture subjective simplicity or elegance. It is perhaps unsurprising that a scientist’s definition of beauty has much in common with Occam’s Razor. 
However, beauty is not necessarily interesting. We also seek the shock of the new, the excitement of the unusual. So Schmidhuber goes on to define interestingness as the rate of change of beauty -- the time-derivative of the subjective description length. A derivative measures the rate of change of one thing with respect to something else. The time-derivative of distance is speed (the rate at which your distance from some point changes), and the time-derivative of speed is acceleration (the rate at which your speed changes). For something to be interesting then, the observer’s ability to describe it must change with time. So interestingness is a dynamic quality, whereas a thing can be beautiful even if it never changes.
Some examples will help us understand what this means. Most people will agree that staring at a blank screen is quite a boring experience. A blank screen is extremely simple from an information-theoretic perspective, and so its description length will be very short. The description might be something like “Every pixel is black”. There is clearly a pattern, but it’s trivially simple. The information on a blank screen can be easily compressed. White noise sits at the other extreme. Somewhat counter-intuitively, information theory tells us that random noise is rich in information, so its description length is extremely long. Totally random information cannot be compressed. An accurate description of white noise on a screen would require specifying what is happening in each and every pixel. If a pattern is something that has structure and internal coherence, then randomness is the absence of pattern. Most people find random white noise boring too. What people find interesting lies somewhere in the middle -- between what is too easily compressed, like a blank screen, and what is totally incompressible, like white noise. We like patterns that are simple, but not too simple; complex, but not incomprehensibly so.
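The contrast between the blank screen and white noise is easy to see with any general-purpose compressor. The sketch below is only an illustration of the information-theoretic point (Python's zlib stands in for "description length"; this is my toy demonstration, not part of Schmidhuber's theory): a run of identical pixel values shrinks to almost nothing, while random noise barely compresses at all.

```python
import random
import zlib

random.seed(0)

# A "blank screen": 10,000 identical pixel values.
blank = bytes([0]) * 10_000

# "White noise": 10,000 random pixel values.
noise = bytes(random.randrange(256) for _ in range(10_000))

for name, data in [("blank screen", blank), ("white noise", noise)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} bytes -> {len(compressed)} bytes compressed")
```

On a typical run the blank screen compresses to a few dozen bytes, while the noise stays close to its original size -- the two extremes the paragraph above calls boring.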
Schmidhuber’s theory is couched in the language of computer science and artificial intelligence, which is why the concept of data compression plays such a prominent role. We don’t really know if the brains of humans and animals compress experience in the same sense that a computer algorithm does. But we do know that living things use pattern-recognition to make useful predictions about their environments. We compare the patterns we’ve encountered in the past with our present experience, and try to anticipate the future. We categorize the patterns we encounter -- poisonous or edible, sweet or bitter, friend or foe -- so that if we encounter them again, we know how to react. Rather than compressibility per se, perhaps what we find interesting is the possibility of enhancing our categories so they encompass more of our experiences. Knowledge consists of having comprehensive categories for as many experiences as possible, and knowing how to respond to each category.
What might interestingness look like? Let me describe a toy system that is confronted by something unexpected, and shows a spurt of interest. Let’s say we have a system that is experiencing something beautiful. The subjective beauty “B” can change over time. In the diagram above, beauty is the blue line, and it stays boringly constant for a while, but at the halfway point it suddenly changes. Imagine a pleasant but predictable movie that suddenly becomes unpredictable in the middle. The beauty increases! The system has an expectation “E” which in our toy system is a memory of the past value of B. The red line in the diagram is the expectation. The green line represents the interest level “I”, which depends on the difference between the beauty and the expectation. When expectation and reality don’t line up, the value of E is different from B, so the system’s interest level shoots up. But eventually E gets accustomed to the new value of B, and the interest level goes back to zero. If the system had perfect expectations and could perfectly predict the change to the value of B, then there would be no increase in the interest level. A curious system is addicted to these bursts of interest, and actively seeks them out. 
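For readers who want to tinker with this toy system, here is a minimal reconstruction in Python (the author's own simulation is not published here, so the adaptation rule and constants below are my assumptions): beauty B jumps at the halfway point, the expectation E drifts toward it, and interest I spikes at the mismatch and then decays back toward zero.

```python
# Minimal sketch of the toy system: B = beauty, E = expectation, I = interest.
# This is a reconstruction of the idea, not the author's original code.
steps = 100
alpha = 0.2  # how quickly the expectation adapts to reality

# Beauty stays constant, then suddenly increases at the halfway point.
B = [1.0] * (steps // 2) + [2.0] * (steps // 2)

E, I = [], []
expectation = B[0]
for b in B:
    I.append(abs(b - expectation))            # interest = prediction error
    expectation += alpha * (b - expectation)  # expectation drifts toward B
    E.append(expectation)

print(f"interest before the jump: {I[0]:.2f}")   # 0.00
print(f"interest at the jump:     {max(I):.2f}") # 1.00
print(f"interest at the end:      {I[-1]:.5f}")  # back near zero
```

A perfectly predictive system (one that foresaw the jump) would show no spike at all, which is the point of the reward-prediction-error idea discussed next.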
As it turns out, the brain’s dopamine neurons fire in bursts of this sort when something unexpectedly good happens. Researchers call this a “reward prediction error” signal, and it is one of the reasons many people think of dopamine as the “pleasure chemical”. But this misses a subtlety -- if the pleasure is completely predictable, the dopamine cells don’t fire. This dopamine cell pattern is more of a novelty signal than a pleasure signal. (There seem to be several other things that dopamine does, so even calling it a novelty chemical is an oversimplification.) Neural network theorists often employ the dopamine burst as a “reinforcement signal” that allows a network to learn from experience and improve its ability to categorize and predict. 
As we simplify, expand and refine our categories we push forward the boundary between what we understand and what we still don’t quite have a handle on. We expand our islands of order, reclaiming land from the sea of unpredictability. Many of the categories humans obsess about have little or nothing to do with the struggle to survive. Curiosity pushes us to proliferate our aesthetic categories -- and in extreme cases it leads to the infinitesimal parcellations of genre and sub-genre that the internet so effectively reveals and encourages. (I invite the reader who does not know what I am talking about to examine the various sub-genres of heavy metal music.)
Curiosity is the drive towards interestingness, and it brings us to the boundaries of what we understand. A trip to a modern art museum should adequately establish that we don’t just find any baffling experience interesting. We seek experiences that are in the sweet spot -- not totally predictable and monotonous, but not random and formless either. During an interesting experience we don’t know exactly what is going on, but we get the feeling that meaningful resolution is but a few moments away. So a Hollywood blockbuster that is too formulaic and predictable is not very interesting, but an experimental art film with no formula at all can bore us to tears too. We like movies with a few twists -- but in order to recognize them as twists we have to have some expectation of what normally happens. A really interesting movie flirts with the boundary between what we know well enough to anticipate, and what surprises and confounds us.
So how does curiosity help us “compress” or improve our categories? Think of the concept of genre. In order to get a subjective sense of what a genre is, you need to experience many examples. Curiosity is what draws you towards this experience. Even if you go to Wikipedia or tvtropes.com and read up on the conventions of a given genre, you still need first-hand experience to understand how those conventions manifest themselves. You need to listen to several blues songs before you can be sure you know what the basic blueprint is. And the more you listen, the more musical structure you can perceive and predict. Once you understand the conventions -- once you know what to expect -- you can experience a burst of interestingness when someone subverts those conventions and confounds your expectation. A blues aficionado is well placed to appreciate the way a band like Led Zeppelin reinterprets the genre’s conventions. In the experience of such aesthetic subversion, you are once again confronted by what is strange and unpredictable, and the curiosity engine becomes fired up once more.
What drives people to police their subjective aesthetic boundaries so zealously? What makes people so concerned with questions of authenticity or originality in art and music? I think going back to the cell membrane might give us some ways to think about such questions. The cell membrane separates inside from outside, mediating interactions between the two. In maintaining a chemical difference between the inside and the outside, it preserves the identity of the cell as an entity that is distinct from the environment. Perhaps aesthetic boundaries -- and mental boundaries more generally -- are central to our notions of identity. To carve out a distinct identity is to maintain a difference between an in-group (which could be just one person) and an out-group. Just as the cell membrane defines the contours of the cell, artistic and intellectual boundaries may define the contours of a personality, or of a community. For people whose identities are wrapped up in difference, to merge with the mainstream might seem a kind of cultural death: a dissolution of the boundary that sustains individuality and identity.
Staying on the boundaries of what is familiar in order to find sweet spots of interestingness allows us to expand our experiential horizons and reaffirm our existences as distinct individuals. But this can also be quite a tiring experience. What is true for a cell is true for an individual, and perhaps even for a culture -- maintaining a boundary takes energy! Most of us aren’t critics -- we can’t spend all our time refining our categories of experience, or sustaining idiosyncratic differences of taste and opinion. Sometimes we need to return to our comfort zones and replenish our supplies. Visiting a museum, for instance, is an experience that can be simultaneously interesting and mind-numbing. (In this age of endless online novelty, I can’t be the only one who seeks out tried and tested experiences -- comfort food, old familiar songs, trashy television -- as an antidote to too much interestingness!) Perhaps merging with the mainstream from time to time is not such a bad thing.
Individualism is taken as a self-evident virtue in modern liberal societies. But given all the effort involved in maintaining the boundary between inside and outside, between the Self and the Other, the opposite movement can be an act of liberation: dissolving the Self by forgoing, for a time, the maintenance of difference. Consider those moments during a sporting event (like a Wave) or a musical gathering (like a Rave) when everyone is moving in unison. It seems as if there is a kind of ecstasy in this voluntary surrender of individuality and difference.
Aesthetic experience, then, is a twofold process. On the one hand, it leads us to curiosity and wonder, which draw us away from our islands of certainty, transforming the contours of our selves. On the other hand, it offers us dissolution and union, which pull us back from the margins, towards community and commonality. Perhaps the dance of aesthetic experience is a microcosm of the great dance of life -- a dance that began with the undulations of that first cell membrane. We sway in the direction of the unknown, and then drift back to the comfort of the known.
Notes and References
 The Genesis story of the fall from grace tells of how man and woman were cast out from the Garden of Eden. In The Power of Myth, Joseph Campbell interprets the story as follows: “Whenever one moves out of the transcendent, one comes into a field of opposites. One has eaten of the tree of knowledge, not only of good and evil, but of male and female, of right and wrong, of this and that, and of light and dark.” Campbell’s “field of opposites” is where pattern-recognition and categorization happen -- it is the field of boundaries and differences, and also of self-consciousness. And this field is no paradise, because it is constantly threatened by the unfamiliar and the unpredictable.
 Jürgen Schmidhuber summarises his theory of aesthetics in a paper entitled “Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes”.
 The diagram shows the results of a little simulation I coded up in Python. It’s a rudimentary “differentiator” that compares the present reality (B) with the recent past (E), and constantly updates its expectations (E). The burst of interest (I) happens during the transient period when reality exceeds expectation (when B > E). Many simple models of dopamine cells use a similar principle. Similar mechanisms can also be employed for edge-detection in a visual image, a crucial stage in object recognition. The system I demonstrate is pretty rudimentary -- it just expects the present to resemble the recent past. You could say that a major goal of artificial intelligence and computational neuroscience is to create systems that have refined, flexible expectations with which to anticipate reality.
 Perhaps the hype cycle represents a burst of curiosity at the societal level. And perhaps social media frenzies are the dopamine bursts of the internet’s hive mind?
My Genome Report Card
by Carol A. Westbrook
Fewer than 100,000 people in the entire world have had their genomes sequenced. I am now one of them. As I wrote in 3QuarksDaily in December, I went into this with some trepidation--you never know what bad news lurks in your genome! I promised to give a report of my results, and here it is.
To get my genome sequenced, I enrolled in Illumina's "Understand Your Genome" program. Illumina is one of the few companies licensed by the FDA to perform whole genome sequencing (WGS) for medical diagnosis--other consumer products, such as Ancestry.com, National Geographic's Geno 2.0, and 23andMe, provide only a limited analysis. I sent in a blood sample in November, and in February received a detailed analysis from Illumina's genetic counselors. In March I attended the "Understand Your Genome" conference, where I received an iPad with my WGS uploaded into the "MyGenome" app, training on the use of the app, and a fascinating daylong seminar exploring the interpretation and medical uses of genome sequences. My daughter, a medical student, attended the program with me.
Viewed on the iPad, my genome sequence consists of two similar, but not identical, parallel lines of letters, one from each chromosome. There are only 4 letters, A, C, G, and T, representing the four DNA nucleotides that are aligned to make the sequence. A human sequence is about 6 billion nucleotides long, with half inherited from one parent and half from the other, plus a few new mutations that arose on their own, probably fewer than 100. Thus, from a family perspective, a person's DNA sequence is 50% identical to that of each of his parents, children or siblings, 25% identical to that of grandparents and grandchildren, and so on down to distant relatives. My genome is very similar to every other person's, but it is not identical to anyone's. No one has ever had the same DNA as me, and no one ever will -- it is what makes me uniquely me.
How different am I from everyone else? My genetic analysis showed that I have 3,524,186 individual nucleotide differences from the "average" genome to which it was compared, the reference genome hg19 (NCBI build 37). This is about 0.05% variation, which is typical for most people. To put this in perspective, if you were to compare my DNA to that of our two most closely related primate species, bonobos and chimpanzees, the differences would be over 4%; comparing me to Neanderthal man, however, you would find only 0.3% variation. So 0.05% is small enough to make me human, but large enough to make me a unique individual.
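The quoted percentage is simple arithmetic on the article's own figures, and can be checked in a few lines (a back-of-the-envelope sketch, using the ~6 billion nucleotide diploid genome length mentioned earlier):

```python
# Back-of-the-envelope check of the variation figure quoted above,
# using the article's own numbers.
variants = 3_524_186           # single-nucleotide differences reported
genome_length = 6_000_000_000  # diploid genome, ~6 billion nucleotides

percent_variation = 100 * variants / genome_length
print(f"{percent_variation:.2f}%")  # roughly 0.06%, close to the ~0.05% quoted
```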
Of the roughly 3.5 million variants in my genome, only about 13,000 produce changes in the protein-coding sequences of genes, impacting 1,222 "conditions" (diseases or traits). The great majority of these changes are considered "benign," meaning they have been validated not to cause disease, or they are "variants of unknown significance," or VUS. A VUS has not been linked to disease, but disease has not been excluded, either; the significance of many of these VUSs will become clear as more genomes are sequenced and the database expands. We are not sure what to make of the other 3,511,186 variants that occur outside of genes--some may be significant, but most are probably silent passengers that were picked up during evolution. Again, we'll learn more as the database expands.
Of the 1,222 conditions for which I have variants, only 4 are significant. Three are genes for recessive diseases, which makes me only a carrier, since you need two copies to have a recessive disease. Two of these genes are for galactosemia and Bardet-Biedl syndrome, very rare, debilitating diseases of children. My own children each have a 50% risk of being carriers, though it is very unlikely that their partners are carriers too, so there is little risk that their future children will have the disease. They could be tested prior to having my grandchildren. The third recessive gene is for hemochromatosis, a disease of iron overload, which is easily treated in its early, silent stages but can cause liver cirrhosis if it is not. The hemochromatosis gene is quite common, as one in 200 people of European background are carriers. In fact, it is possible that some of my relatives may actually have the disease; fortunately for them, hemochromatosis is easily diagnosed with a blood test for ferritin, or an inexpensive DNA test.
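The "little risk" reasoning above can be made concrete with classic recessive-inheritance probabilities. This is an illustration only; the partner carrier frequency used here is a hypothetical placeholder, not a figure from my report.

```python
# Illustrative sketch of the recessive-disease risk reasoning above.
# The partner carrier frequency is a hypothetical placeholder, not a
# figure from the article.
p_child_is_carrier = 0.5            # each child of one carrier parent
partner_carrier_freq = 0.005        # hypothetical population carrier frequency
p_affected_if_both_carriers = 0.25  # classic recessive inheritance

p_grandchild_affected = (p_child_is_carrier
                         * partner_carrier_freq
                         * p_affected_if_both_carriers)
print(p_grandchild_affected)  # 0.000625, i.e. well under 0.1%
```

Even with a generous assumed carrier frequency, the chance of an affected grandchild stays tiny, which is the point of the paragraph above.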
My surprising result was that I have both recessive genes for TPMT deficiency, which, strictly speaking, is not a disease but a variation in drug metabolism. A deficiency in TPMT, or "thiopurine S-methyltransferase," makes me unable to metabolize three medications: 6-mercaptopurine, 6-thioguanine, and azathioprine. If I took one of these medications I would get deathly ill; fortunately, these drugs are used only for leukemia treatment or transplant. I will keep this in mind should I ever need them. About 0.3% of the population also has TPMT deficiency.
Now on to the diseases which develop later in life, what I call the "AARP diseases." Many participants opted out of learning whether they have one of these scary genes, but I had already decided that I wanted everything revealed. For cancer risk, I was pleased to find that I don't carry any of the known genes. I was also relieved to find that I don't carry any of the known genes for neurologic conditions, in particular the genes for Parkinson's disease, which affected my late mother when she was in her 80's. I also do not carry the genes for early-onset Alzheimer's dementia. Illumina does not analyze for late-onset Alzheimer's dementia, which is the more common form that attacks older adults, though we were given the coordinates if we wanted to check on our own. To do this I used the MyGenome app and punched in the WGS location. I found that I have one copy of APOE-4 -- increased risk -- and one copy of APOE-2-- protective. My risk, then, is neutral. Whew! Looks like I lucked out in the AARP diseases.
That, in a nutshell, is my genome report. Was it valuable? Absolutely. The value to me was not in learning what I have, but what I don't have. I was reassured that I am reasonably healthy and likely to be so for a few more years. I don't have an increased cancer risk, and I don't have a tendency to blood clots. Except for TPMT deficiency, I don't have any drug metabolic variants, which means my risk of unexpected side effects from medication is low. My health care costs are likely to remain lower than average, and I will probably go on being healthy for a long time. These conclusions will influence both my health insurance choices and my financial planning for retirement.
You can begin to see the impact that WGS might have on your own health, as well as on your health care costs. Today there are only two medical uses for WGS that are accepted and reimbursed by insurance: the identification of unknown diseases of children, and cancer genome analysis for chemotherapy targets--and the cancer use is still not widely accepted. But there are many more ways we could improve medical care with WGS. Imagine the complications and deaths that would be avoided, and the wasted health dollars that would be saved, if your pharmacy had a list of your drug metabolism variants so they could identify--in advance--if you are likely to have serious side effects, or if a particular drug won't be effective for you. We could actually do this today! And if a person knew in advance he had a tendency to some diseases and not others, he could focus his health care dollars on screening and prevention strategies where they will have the most impact. This will be even more relevant as our knowledge base expands.
I cannot recommend WGS to everyone -- yet--but it's in our future, especially as the price is expected to drop below the $1000 mark, less than the cost of a single CAT scan. At present, too few genomes have been sequenced and correlated with medical information to be able to interpret much of what is present in a WGS. This will change over the next few years. There are projects throughout the globe that are doing just this, such as the 100,000 Genomes Project in the UK and The Million Human Genomes Project in China. In the US, the Personal Genome Project is collecting sequences such as mine to do these studies. The potential impact of WGS technology is enormous, as it will lead to more effective, personalized treatment of disease and, more importantly, to better health.
At some time in the not-too-distant future, everyone will have his or her own WGS. I'm pleased to be an early adopter.
Monday, March 31, 2014
Sharing Our Sorrow Via Facebook
by Jalees Rehman
Geteiltes Leid ist halbes Leid ("Shared sorrow is half the sorrow") is a popular German proverb which refers to the importance of sharing bad news and troubling experiences with others. The therapeutic process of sharing takes on many different forms: we may take comfort in the fact that others have experienced similar forms of sorrow, we are often reassured by the empathy and encouragement we receive from friends, and even the mere process of narrating the details of what is troubling us can be beneficial. Finding an attentive audience that is willing to listen to our troubles is not always easy. In a highly mobile, globalized world, some of our best friends may be located thousands of kilometers away, unable to meet face-to-face. The omnipresence of social media networks may provide a solution. We are now able to stay in touch with hundreds of friends and family members, and commiserate with them. But are people as receptive to sorrow shared via Facebook as they are in face-to-face contacts?
A team of researchers headed by Dr. Andrew High at the University of Iowa recently investigated this question and published their findings in the article "Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook". The researchers created three distinct Facebook profiles of a fictitious person named Sara Thomas who had just experienced a break-up. The three profiles were identical in all respects except for how much information was conveyed about the recent (fictitious) break-up. In their article, High and colleagues use the expression "emotional bandwidth" to describe the extent of emotions conveyed in the Facebook profile.
In the low bandwidth scenario, the profile contained the following status update:
"sad and depressed:("
The medium bandwidth profile included a change in relationship status to "single" in the timeline, in addition to the low bandwidth profile update "sad and depressed:(".
Finally, the high emotional bandwidth profile not only contained the updates of the low and medium bandwidth profiles, but also included a picture of a crying woman (the other two profiles had no photo, just the standard Facebook shadow image).
The researchers then surveyed 84 undergraduate students (enrolled in communications courses, average age 20, 53% female) and presented them with screenshots of one of the three profiles.
They asked the students to imagine that the person in the profile was a member of their Facebook network. After reviewing the assigned profile, each student completed a questionnaire asking about their willingness to provide support for Sara Thomas using a 9-point scale (1 = strongly disagree; 9 = strongly agree). The survey contained questions that evaluated the willingness to provide emotional support (e.g. "Express sorrow or regret for her situation") and network support (e.g. "Connect her with people whom she may turn to for help''). In addition to being queried about their willingness to provide distinct forms of support, the students were also asked about their sense of community engendered by Facebook (e.g., "Facebook makes me feel I am a part of a community'') and their preference for online interactions over face-to-face interactions (e.g., "I prefer communicating with other people online rather than face-to-face'').
High and colleagues hypothesized that the high emotional bandwidth profiles would elicit greater support from the students. In face-to-face interactions, it is quite common for us to provide greater support to a person – friend or stranger – if we see them overtly crying, so the researchers' hypothesis was quite reasonable. To their surprise, the researchers found the opposite: the willingness to provide emotional or network support was significantly lower among students who viewed the high emotional bandwidth profile! For example, the average emotional support score was 7.8 among students who saw only Sara's "sad and depressed:(" update (low bandwidth), but only 6.5 among students who also saw the image of Sara crying and her relationship status changed to single (high bandwidth). Interestingly, students who preferred online interactions over face-to-face interactions, or those who felt that Facebook created a strong sense of community, did respond positively to the high bandwidth profile.
There are some important limitations of the study. The students were asked to evaluate whether they would provide support to a fictitious person by imagining that she was part of their Facebook friends network. This is a rather artificial situation because actual supportive Facebook interactions occur among people who know each other. It is not easy to envision support for a fictitious person whose profile one sees for the first time. Furthermore, "emotional bandwidth" is a broad concept and it is difficult to draw general conclusions about "emotional bandwidth" from the limited differences between the three profiles. Increasing the sample size of the study subjects as well as creating a broader continuum of emotional bandwidth differences (e.g. including profiles which include pictures of a fictitious Sara Thomas who is not crying, using other status updates, etc.), and also considering scenarios that are not just related to break-ups (e.g. creating profiles of a fictitious grieving person who has lost a loved one) would be useful for an in-depth analysis of "emotional bandwidth".
The study by High and colleagues is an intriguing and important foray into the cyberpsychology of emotional self-disclosure and supportive communication on Facebook. This study raises important questions about how cyberbehavior differs from real world face-to-face behavior, and the even more interesting question of why these behaviors are different. Online interactions omit the dynamic gestures, nuanced intonations and other cues which play a critical role in determining our face-to-face behavior. When we share emotions via Facebook, our communication partners are often spatially and temporally displaced. This allows us to carefully "edit" what we disclose about ourselves, but it also allows our audience to edit their responses, unlike the comparatively spontaneous responses of a person sitting next to us. Facebook invites us to use the "Share" button, but we need to remember that online "sharing" is a sharing between heavily edited and crafted selves that is very different from traditional forms of "sharing".
Acknowledgments: The images from the study profiles were provided by Dr. Andrew High, copyright of the images - Dr. Andrew High.
Reference: High AC, Oeldorf-Hirsch A, Bellur S. Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook. Computers in Human Behavior (2014) 34:79-88.
Monday, March 03, 2014
Pale Terraqueous Globes
by Alexander Bastidas Fry
Imagine that the closest star beyond the Sun has a planet orbiting it about the size of Earth. Visualize what your sunset would look like on this distant planet. Perhaps there would be two stars at the center of this solar system, and your sunset would be breathtaking. You could even visualize what the Sun would look like from this planet – just another unassuming star in the sky. You don't have to merely imagine that such a planet might exist: a planet like this really does exist – of course, you'd still have to imagine the part where you are on the surface of this world. The Alpha Centauri star system, which is essentially a triple star system of Alpha Centauri A, Alpha Centauri B, and Proxima Centauri, has just such a planet. There is a planet in the sky waiting for us at a distance just two hundred and seventy thousand times farther than the Earth is from the Sun. Its surface temperature is near 1,500 degrees, so we wouldn't want to be there, but astronomers are now finding similar planets routinely. There may be a planet just the size of Earth, at a comfortable temperature, quite near us, galactically speaking. We are searching.
Most planets don't seem to be much like Earth. In fact, so far we haven't found a single planet with a temperature and size similar to Earth's, but part of the problem is that finding big giant planets like Jupiter is easy, while small rocky planets like Earth are elusive. Still, we are on the edge of discovery, and Earth-like planets likely abound. In fact, with 95% confidence there is an Earth-size planet in the habitable zone of a small star within 23 light-years of us. The habitable zone is the place where a planet would be neither too hot nor too cold – a place where a planet wouldn't see its oceans boiled off or frozen into desolate ice tundra. Habitable planets are common in our galaxy and, by galactic standards, not very far apart: on average, Earth-like planets are only 13 light-years apart.
Just a few years ago we knew very little about the characteristics or numbers of planets beyond our solar system – the unknown extrasolar planets. Today we know that most stars host at least one planet. This is revolutionary: not only do other stars with other planets exist, they are downright common. This new information was harvested by the Kepler space telescope, which systematically surveyed 145,000 stars in the direction of the constellation Cygnus over the past four years. This careful survey allows us to statistically extrapolate the occurrence of planets to each of the hundreds of billions of other stars in the Milky Way: on average, each star in the Milky Way has more than one planet. There are several ways to detect or infer the presence of extrasolar planets. The most common and useful methods to date are radial velocity detection and transit detection.
The radial velocity method of detecting planets relies upon Newton's third law of motion: every force has an equal and opposite force. So as the Earth, or any planet, swings around a star, the star also swings around their common center of mass. If that movement is radial (parallel to our line of sight), then we can observe the precise variation in the wavelength of light emitted by the star (the Doppler shift) over time to infer the existence of a massive object, like a planet, orbiting that star. Stars moving towards or away from us at speeds of as little as 1 meter per second can be detected with the radial velocity technique.
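To get a feel for how demanding a 1 meter-per-second measurement is, the non-relativistic Doppler relation (shift in wavelength over wavelength equals velocity over the speed of light) gives a sense of scale. A sketch, with an assumed visible-light wavelength of 550 nm:

```python
# How small is the Doppler shift from a 1 m/s stellar wobble?
# Non-relativistic approximation: delta_lambda / lambda = v / c.
c = 299_792_458.0      # speed of light, m/s
v = 1.0                # stellar radial velocity, m/s
wavelength_nm = 550.0  # an assumed visible-light wavelength, nm

shift_nm = wavelength_nm * v / c
print(f"{shift_nm:.2e} nm")  # ~1.8e-06 nm: a millionth of a nanometer
```

Spectrographs cannot resolve a single line that finely; the trick is averaging the shift over thousands of spectral lines at once.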
The transit method of planet detection looks for minor eclipses: we detect planets by watching them transit across the face of their host star. Such eclipses are far from total and are exceptionally hard to notice. This is the method that the Kepler telescope utilizes. If a star's light dims or brightens we may take notice, but the cause could be a stellar flare, a binary star companion, noise in the data, or a myriad of other effects. If, however, the star dims by the same amount over a repeated period, then we can take this as evidence that a planet may be orbiting that star.
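Just how far from total these eclipses are can be estimated from the standard approximation that the fractional dimming equals the square of the ratio of the planet's radius to the star's radius. For an Earth-size planet crossing a Sun-like star:

```python
# Fractional dimming during a transit is roughly (R_planet / R_star)**2.
r_earth_km = 6_371.0   # Earth's mean radius
r_sun_km = 695_700.0   # Sun's radius

depth = (r_earth_km / r_sun_km) ** 2
print(f"{depth:.1e}")  # ~8.4e-05: less than 0.01% of the star's light
```

Blocking less than one part in ten thousand of the starlight is why detecting small rocky planets took a purpose-built space telescope.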
On February 26, 2014, the Kepler science team revealed observations of 715 new planets based on data taken over the last four years. Currently the Kepler satellite is in a bit of a bind: the reaction wheels that allow it to precisely orient itself in space have failed, so these new planets come from analysis of data already on hand, and there is no fix for Kepler in sight. The next frontier in extrasolar planet detection relies upon a technique that has not been possible before: astrometry, the precise measurement of the positions of stars. Stars move when tugged upon by planets that may be orbiting them (as in the radial velocity method, but here the movement is in the plane of the sky). Less than a month ago the Gaia space telescope settled into its orbit, where it will soon begin to observe and pinpoint the positions of nearly a billion stars. We don't expect that most of these stars will have a detectable planet (they may have planets, but not detectably so), but we expect to find some strange worlds for sure.
When we think about other planets we might have to change our expectations somewhat. Most stars are slightly less massive and cooler than our Sun; for a planet to have the same surface temperature as Earth, it would need to orbit closer, and its year would be shorter. And, unlike in science fiction, it isn't likely that a planet would have an atmosphere comfortable to Earth-accustomed organisms; the oxygen in Earth's atmosphere is generated by the collective activity of trees and green algae in the ocean. Ultimately, researchers have urged us to consider that planets are so unique that habitability should be evaluated on a case-by-case basis. In fact, even the Earth's long-term habitability is in question. The Earth sits precariously close to the inner edge of the habitable zone, where it might one day be too warm for comfort; in billions of years the Sun will most certainly heat up and expand to the point that Earth will be a poor place to look for life.
Philosophically, it feels as if the existence of other Earth-like planets is monumental. Yet given the distances to these objects, it is hard to fathom any tangible consequence for generations to come. François-Marie Arouet, better known by his pen name Voltaire, was a natural philosopher who was one of the first people to consider with objective reason what other planets might be like. Voltaire's style was a mix of story and inquiry, as was common at the time. One particular short story Voltaire wrote alluded to the myriad of planets he speculated may exist. The story is Memnon, the Philosopher of Human Wisdom. It tells of Memnon, who decides to become a philosopher one day, and upon that same day loses his eye, his health, his fortune, and his reason. He passes into sleep in despair at the end of the day and is visited by a celestial spirit in a dream. The spirit says that things could be worse; in fact, the spirit states that there are a hundred thousand million worlds, each with its degrees of philosophy and enjoyment, every world having less than the one before it. There is a world of perfect philosophy and enjoyment somewhere, the spirit implies: "There is a world indeed where all [perfection] is possible; but, in the hundred thousand millions of worlds dispersed over the regions of space, everything goes on by degrees. There is less philosophy, and less enjoyment on the second than in the first, less in the third than in the second, and so forth till the last in the scale, where all are completely fools." Memnon is afraid that the Earth must be on the low end of the list and replies that "our little terraqueous globe here is the madhouse of those hundred thousand millions of worlds," a statement which predates and echoes Carl Sagan's sentiments on our pale blue dot.
Voltaire seems to have identified something fundamental about the existence of other planets: their presence is not enough. Most rocky planets we find lack any atmosphere we would find acceptable, and there is no reason to think that other planets have atmospheres that humans could survive in; in fact, free oxygen on most planets is in a non-equilibrium state, consumed by surface geological activity unless something replenishes it. Even if a planet has an atmosphere, it may quickly fade unless the right conditions on the planet are maintained. Our oxygen-rich atmosphere was primordially generated by the collective action of cyanobacteria in the oceans some 2.4 billion years ago. This great oxygenation event effectively poisoned previous incarnations of life on the planet, but gave rise to the rapidly respiring, and thinking, creatures we know today. Perhaps cyanobacteria could be seeded into the oceans of barren planets in the habitable zone, and in time the atmosphere of such a planet would have enough oxygen for us to breathe easily. Or maybe we could find a planet with oxygen already in its atmosphere. If astronomers ever detect the spectral signature of oxygen on an exoplanet, we could optimistically infer that there are respiring plants, and maybe even creatures, on the planet. Such a detection would be carried out in a way similar to how we detect the spectral signatures of elements in distant stars, but because planets are so dim it may take a truly monumental telescope, perhaps one hundred meters in diameter, to achieve it. Even with an atmosphere, there are still issues of temperature, seasonal variation, geological activity, natural resources, weather, and perhaps even conflict with the natural residents of the planet. The moral implications of visiting a thriving planet invite comparisons to colonization.
We no longer have to pretend there are planets beyond Earth to visit. We live in a universe, or at least a galaxy, that has given us an embarrassment of riches in planetary diversity; however, there is no guarantee that any of the planets beyond Earth are better than Earth.
Monday, February 24, 2014
Does Beer Cause Cancer?
by Carol A. Westbrook
I have been taken to task by several of my readers for promoting beer drinking. "How can you, a cancer doctor, advocate drinking beer," I was asked, "when it is KNOWN to cause cancer?" I realized that it was time to set the facts straight. Is moderate beer drinking good for your health, as I have always maintained, or does it cause cancer?
Recently there has been some discussion in the popular press about studies showing a possible link between alcohol and cancer. As a matter of fact, reports linking foods to cancer causation (or prevention) are relatively common. I generally ignore these press releases because they generate a lot of hype but are usually based on single studies that, on follow-up, turn out to have flaws or cannot be confirmed; the negative follow-up study rarely receives any publicity. Moreover, there are often other studies published at other times showing completely contradictory results; for example, that red wine both prevents and causes cancer.
Furthermore, there is a great deal of self-righteousness about certain foods, and this attitude can cloud objectivity and lead to bias in interpreting the results; often these feelings have strong political implications as well. Some politically charged dietary issues include: vegetarianism; genetically modified crops; artificial sweeteners; sugared soft drinks. Alcohol fits right into this category--remember, we are the country that adopted prohibition for 13 years. There is no doubt the United States has significant public health issues related to alcohol use, including alcohol-related auto accidents, underage drinking, and alcoholism, and the consequent problems of unemployment, cirrhosis of the liver, brain and neurologic problems, and fetal alcohol syndrome. Wouldn't it be great if the government could mandate a label on every beer can stating, "consumption of alcohol can cause cancer and should be avoided"? Wouldn't that be a wonderful "I told you so!" for the alcohol nay-sayers?
Before going further, I will acknowledge that there are alcohol-related cancers. As a specialist, I am well aware that cancers of the head and neck area, the larynx (voice box) and the esophagus are frequently seen in heavy drinkers, almost always in association with cigarette smoking. Liver cancer is seen primarily in people with cirrhosis--also a result of heavy drinking. In both instances, the more alcohol consumed, the greater the risk of developing one of these cancers--and I have rarely seen these cancers in non-smokers or non-drinkers. But assuming that my readers are not alcoholics, the question they are really asking is whether or not they are going to get cancer from low to moderate beer drinking.
So what, then, are the facts? Does beer cause cancer? This is a much more difficult question to answer than most people realize, and can easily be the subject of years of study for a PhD dissertation (and probably has been). Researchers will be quick to admit how difficult it is to do scientifically rigorous studies on the health effects of individual dietary components. You can't just take a group of thirty year-olds, split them into two groups, give beer to one group and make the other abstain, watch them for 20 years and see who gets more cancer. So we have to rely on population studies, estimating alcohol consumption based on purchasing statistics, self-reporting of drinking (which is often unreliable), surveys, and death certificates for cancer. Incidentally, beer is not considered separately from other alcoholic beverages in any of these studies.
For example, an interesting study by Holahan and colleagues, published in 2010 in the journal Alcoholism: Clinical and Experimental Research, followed 1,824 middle-aged men and women (ages 55–65) over 20 years and found that moderate drinkers lived longer than did both heavy drinkers and teetotalers. In particular, their data suggested that non-drinkers had a 50% higher death rate than moderate drinkers (1 - 2 drinks per day). Others have criticized this conclusion because the no-alcohol group included people who didn't drink because they were already at a higher risk of death for other reasons such as serious medical conditions, previous cancers, or they were former alcoholics on the wagon. The authors claimed that they controlled for these variables but that is almost impossible to do, and that is one of the reasons that it is difficult to get accurate data from this kind of study. So it may be hard to conclude that moderate drinking significantly increases your lifespan, but it certainly doesn't shorten it.
What about cancer? The publication that started the most recent hype about cancer and alcohol appeared in the April 2013 issue of The American Journal of Public Health, and was written by David Nelson, MD, MPH, and his colleagues. They combined information from others' publications with epidemiologic surveys to determine the number of cancer deaths attributable to alcohol, as well as the types of cancer that were associated. They found that about 3% of all cancer deaths in the US were related to alcohol consumption, most of it in cancers of the head and neck, larynx and esophagus. There was still a slight increased risk at low alcohol use (greater than 0 but less than 1 1/2 drinks per day), which led them to conclude that "regular alcohol use at low consumption levels is also associated with increased cancer risk." I looked at their study and couldn't argue with their conclusion, but I don't think the risk is significant enough to recommend becoming a teetotaler.
Neither does the US National Cancer Institute (NCI). Heavy drinking aside, the NCI does not recommend that people discontinue low or moderate drinking, since doing so would have only a minimal impact on their chance of developing cancer. Some caution is indicated for specific cancers: there is a 1.5 times increased risk of breast cancer in women who have more than 3 drinks per day compared to non-drinkers; similarly, the risk of colon cancer is increased 1.5 times in people who have more than 3.5 drinks per day. Incidentally, 3.5 drinks per day is well above the level considered "low to moderate" drinking, which is usually defined as no more than 1 drink per day for a woman and 2 per day for a man. That being said, lowering your alcohol consumption deserves some consideration if you are anxious to change your odds for these two specific cancers. Nonetheless, the risks from alcohol are still low when compared to the impact of other lifestyle factors. Addressing these factors will have a much greater impact than giving up that beer or wine with your dinner: don't smoke; lose weight if you are overweight; exercise; eat a high-fiber diet; increase your vegetable and fruit consumption while limiting red meat; avoid processed food; and follow up on your doctor's cancer screening recommendations for colonoscopy, pap smears, mammography and prostate screening.
Do the positive effects of drinking beer outweigh the negative effects? Moderate alcohol consumption has been reported to lower the risks of heart disease, stroke, hypertension and Type 2 diabetes; for men, it may lower the risk of kidney stones and of prostate cancer; it may improve bone health; it may prevent brain function decline. Alcohol consumption actually lowers the risk of kidney cancer and of lymphoma. Overall, in most studies, the positive effect was very small, and the beneficial effects of beer apply only to moderate drinking, not to those who drink to excess. And of course, there are social and psychological benefits to sharing a beer with friends.
So, is beer drinking good for you? Or bad? Are you healthier if you drink, say, a beer or two per day, or are you worse off? My conclusion as a medical specialist is: it depends. On average, for the general population, drinking a little alcohol is better than abstaining completely. But on an individual basis, it depends on your current health conditions and your risk factors. Are you more likely to die of heart disease or of colon cancer? And if you want to cut down your risk of either condition you must be sure to avoid cigarettes, keep your weight down, exercise, eat a high-fiber diet that is low in red meat and processed foods, and increase your fruit and vegetable intake. The impact of alcohol consumption is likely to be small compared to these lifestyle changes.
What does the Beer Doctor do? As a cancer specialist, my lifestyle includes all of the above recommendations on exercise, weight and diet. I continue to enjoy my beer, but I keep my consumption within the low to moderate range, that is on average about 0.5 to 1 per day, and not every day. For me, the health benefits of drinking beer outweigh the negatives. To your health!
© 2014, Carol Westbrook. This article is from my forthcoming book, To Your Health! The opinions expressed here are my own, and do not reflect those of my employer, Geisinger Health Systems.
Monday, January 06, 2014
Synthetic Biology: Engineering Life To Examine It
by Jalees Rehman
Two scientific papers that were published in the journal Nature in the year 2000 marked the beginning of engineering biological circuits in cells. The paper "Construction of a genetic toggle switch in Escherichia coli" by Timothy Gardner, Charles Cantor and James Collins created a genetic toggle switch by introducing an artificial DNA plasmid into bacterial cells. This DNA plasmid contained two promoters (DNA sequences which regulate the expression of genes) and two repressors (genes that encode for proteins which suppress the expression of genes) as well as a gene encoding green fluorescent protein that served as a read-out for the system. The repressors used were sensitive to either selected chemicals or temperature. In one of the experiments, the system was turned ON by adding the chemical IPTG (a modified sugar) and nearly all the cells became green fluorescent within five to six hours. Upon raising the temperature to activate the temperature-sensitive repressor, the cells began losing their green fluorescence within an hour and returned to the OFF state. Many labs had used chemical or temperature switches to turn on gene expression in the past, but this paper was the first to assemble multiple genes together and construct a switch which allowed toggling cells back and forth between stable ON and OFF states.
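The bistable logic of mutual repression can be sketched with a generic textbook model: two repressors, u and v, each inhibiting the other's synthesis. The equations and parameter values below are an illustrative caricature, not the published construct's actual kinetics.

```python
# A minimal sketch of a two-repressor toggle switch (generic textbook model,
# not the parameters of Gardner, Cantor & Collins 2000): u represses v and
# v represses u, producing two stable states.

def simulate_toggle(u0, v0, alpha=10.0, n=2.0, dt=0.01, steps=5000):
    """Euler-integrate du/dt = alpha/(1+v^n) - u, dv/dt = alpha/(1+u^n) - v."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v ** n) - u
        dv = alpha / (1.0 + u ** n) - v
        u, v = u + du * dt, v + dv * dt
    return u, v

# Starting with more u, the switch latches with u high and v low...
u, v = simulate_toggle(u0=5.0, v0=0.1)
print(f"state A: u={u:.2f}, v={v:.2f}")
# ...and starting with more v, it latches the other way: bistability.
u, v = simulate_toggle(u0=0.1, v0=5.0)
print(f"state B: u={u:.2f}, v={v:.2f}")
```

In the real circuit, "flipping" the switch with IPTG or a temperature shift corresponds to transiently disabling one of the two repression arms so the system falls into the other stable state.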
The same issue of Nature contained a second landmark paper which also described the engineering of gene circuits. The researchers Michael Elowitz and Stanislas Leibler described the generation of an engineered gene oscillator in their article "A synthetic oscillatory network of transcriptional regulators". By introducing three repressor genes which constituted a negative feedback loop, along with a green fluorescent protein as a marker of the oscillation, the researchers created a molecular clock in bacteria with an oscillation period of roughly 150 minutes. The genes and the proteins they encoded were not part of any natural biological clock, and none of them would have oscillated if they had been introduced into the bacteria on their own. The beauty of the design lay in the combination of three serially repressing genes; the periodicity of this engineered clock reflected the half-life of the protein encoded by each gene as well as the time it took for each protein to act on the next member of the gene loop.
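The ring-of-three architecture can be sketched with a protein-only caricature of the repressilator: each protein represses the next one around the loop, and with a sufficiently steep repression curve the symmetric steady state becomes unstable and the system oscillates. The parameters below (alpha=20, Hill coefficient n=4) are illustrative choices made steep enough to oscillate; they are not the mRNA-and-protein model or the parameter values of the original paper.

```python
# A protein-only caricature of the Elowitz-Leibler repressilator: three
# repressors in a ring, each inhibiting the next. Parameters are illustrative,
# not those of the published model.

def simulate_repressilator(p0, alpha=20.0, n=4.0, dt=0.01, steps=30000):
    """Euler-integrate dp_i/dt = alpha/(1 + p_{i-1}^n) - p_i around the ring.

    Returns the time course of protein 0, which in the real circuit would
    correspond to the fluorescent read-out."""
    p = list(p0)
    trace = []
    for _ in range(steps):
        # p[i-1] wraps around (Python negative indexing), closing the loop
        dp = [alpha / (1.0 + p[i - 1] ** n) - p[i] for i in range(3)]
        p = [p[i] + dp[i] * dt for i in range(3)]
        trace.append(p[0])
    return trace

trace = simulate_repressilator([1.0, 2.0, 3.0])  # asymmetric start
late = trace[len(trace) // 2:]                    # discard the transient
print(f"late-time swing of protein 0: {min(late):.2f} .. {max(late):.2f}")
```

Note that an even-numbered ring of repressors would settle into a stable state like the toggle switch; the odd number of inversions is what turns mutual repression into a clock.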
Both papers described the introduction of plasmids encoding multiple genes into bacteria, but this in itself was not novel; it has been routine practice in molecular biology laboratories since the 1970s. The novelty of the work lay in the construction of functional biological modules consisting of multiple genes which interacted with each other in a controlled and predictable manner. Since the publication of these two articles, hundreds of scientific papers have been published describing even more intricate engineered gene circuits. These newer studies take advantage of the large number of molecular tools that have become available to query the genome, as well as newer DNA plasmids which encode novel biosensors and regulators.
Synthetic biology is an area of science devoted to engineering novel biological circuits, devices, systems, genomes or even whole organisms. This rather broad description of what "synthetic biology" encompasses reflects the multidisciplinary nature of this field which integrates ideas derived from biology, engineering, chemistry and mathematical modeling as well as a vast arsenal of experimental tools developed in each of these disciplines. Specific examples of "synthetic biology" include the engineering of microbial organisms that are able to mass produce fuels or other valuable raw materials, synthesizing large chunks of DNA to replace whole chromosomes or even the complete genome in certain cells, assembling synthetic cells or introducing groups of genes into cells so that these genes can form functional circuits by interacting with each other. Synthesis in the context of synthetic biology can signify the engineering of artificial genes or biological systems that do not exist in nature (i.e. synthetic = artificial or unnatural), but synthesis can also stand for integration and composition, a meaning which is closer to the Greek origin of the word. It is this latter aspect of synthetic biology which makes it an attractive area for basic scientists who are trying to understand the complexity of biological organisms. Instead of the traditional molecular biology focus on studying just one single gene and its function, synthetic biology is engineering biological composites that consist of multiple genes and regulatory elements of each gene. This enables scientists to interrogate the interactions of these genes, their regulatory elements and the proteins encoded by the genes with each other. Synthesis serves as a path to analysis.
One goal of synthetic biologists is to create complex circuits in cells to facilitate biocomputing: building biological computers that are as powerful as, or even more powerful than, traditional computers. While engineered gene circuits and cells have some degree of memory and computing power, they are no match for the comparatively gigantic computing power of even small digital computers. Nevertheless, we have to keep in mind that the field is very young and advancing at a rapid pace.
One of the major recent advances in synthetic biology occurred in 2013 when an MIT research team led by Rahul Sarpeshkar and Timothy Lu created analog computing circuits in cells. Most synthetic biology groups that engineer gene circuits in cells to create biological computers have taken their cues from contemporary computer technology. Nearly all of the computers we use are digital computers, which process data using discrete values such as 0's and 1's. Analog data processing on the other hand uses a continuous range of values instead of 0's and 1's. Digital computers have supplanted analog computing in nearly all areas of life because they are easy to program, highly efficient and process analog signals by converting them into digital data. Nature, on the other hand, processes data and information using both analog and digital approaches. Some biological states are indeed discrete, such as heart cells which are electrically depolarized and then repolarized in periodical intervals in order to keep the heart beating. Such discrete states of cells (polarized / depolarized) can be modeled using the ON and OFF states in the biological circuit described earlier. However, many biological processes, such as inflammation, occur on a continuous scale. Cells do not just exist in uninflamed and inflamed states; instead there is a continuum of inflammation from minimal inflammatory activation of cells to massive inflammation. Environmental signals that are critical for cell behavior such as temperature, tension or shear stress occur on a continuous scale and there is little evidence to indicate that cells convert these analog signals into digital data.
Most of the attempts to create synthetic gene circuits and study information processing in cells have been based on a digital computing paradigm. Sarpeshkar and Lu instead wondered whether one could construct analog computation circuits and take advantage of the analog information processing systems that may be intrinsic to cells. The researchers created an analog synthetic gene circuit using only three proteins that regulate gene expression and the fluorescent protein mCherry as a read-out. This synthetic circuit was able to perform additions or ratiometric calculations in which the cumulative fluorescence of the mCherry was either the sum or the ratio of selected chemical input concentrations. Constructing a digital circuit with similar computational power would have required a much larger number of components.
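A toy mathematical sketch shows why an analog adder needs so few parts. If two inducible promoters drive the same fluorescent reporter, the total fluorescence is simply the sum of the two promoter responses; in the sub-saturating regime each response is roughly proportional to its input. This is loosely inspired by, but does not reproduce, the published Daniel et al. circuit, and all parameter values are illustrative.

```python
# Toy sketch of analog addition with gene expression: two promoters drive the
# same reporter, so their contributions sum. Illustrative parameters only;
# this is not the circuit design of Daniel et al. 2013.

def hill_activation(inducer, vmax=100.0, k=1.0, n=1.0):
    """Steady-state expression from one inducible promoter (Hill kinetics)."""
    return vmax * inducer ** n / (k ** n + inducer ** n)

def adder(input_a, input_b):
    """Total reporter output: the two promoter contributions simply add."""
    return hill_activation(input_a) + hill_activation(input_b)

# Well below saturation (inducer << k), output tracks input_a + input_b:
print(adder(0.05, 0.03))   # roughly proportional to 0.08
print(adder(0.04, 0.04))   # similar total from a different split of inputs
```

A digital circuit computing the same sum would need to encode each input as a multi-bit value and wire up binary adder logic out of gene gates, which is why the analog approach gets away with a handful of regulatory proteins.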
The design of analog gene circuits represents a major turning point in synthetic biology and will likely spark a wave of new research which combines analog and digital computing when trying to engineer biological computers. In our day-to-day lives, analog computers have become more-or-less obsolete. However, the recent call for unconventional computing research by the US Defense Advanced Research Projects Agency (DARPA) is seen by some as one indicator of a possible paradigm shift towards re-examining the value of analog computing. If other synthetic biology groups can replicate the work of Sarpeshkar and Lu and construct even more powerful analog or analog-digital hybrid circuits, then the renaissance of analog computing could be driven by biology. It is difficult to make any predictions regarding the construction of biological computing machines which rival or surpass the computing power of contemporary digital computers. What we can say is that synthetic biology is becoming one of the most exciting areas of research that will provide amazing insights into the complexity of biological systems and may provide a path to revolutionize biotechnology.

Daniel R, Rubens JR, Sarpeshkar R, & Lu TK (2013). Synthetic analog computation in living cells. Nature, 497 (7451), 619-23. PMID: 23676681
Monday, December 30, 2013
My New Year's Resolution: Getting to Know my Genome Sequence
by Carol A. Westbrook
On November 12, 2013, I placed a package containing a small sample of my blood into a UPS drop box. It is a fait accompli. I'm going to get my Genome Sequenced! I was thrilled!
No doubt you are wondering why I wanted to do this. The short answer -- because I can.
When I started my research career in the early 1980's, scientists such as myself understood how valuable the human DNA sequence would be to medical research, but it seemed an unattainable dream. Yet in 1988 the Human Genome Program was begun, proposing to obtain this sequence within 20 years. I was hooked. I was active in the Program, on advisory panels, on grant reviews, and in my own research, mapping cancer genes. Obtaining DNA sequence was painstakingly difficult, and interpreting and searching the resulting sequence was almost beyond the capability of the computers of the time. Nonetheless, in 2003, a composite DNA sequence of the human genome was completed, 5 years ahead of schedule. Shortly thereafter, two of the leading genome researchers, J. Craig Venter and James Watson, volunteered to have their own genomes sequenced in their research labs, and Steve Jobs purportedly had his sequenced for $100,000.
I never imagined that in 2013, only 10 years later, sequencing and computational technology would improve so much that an individual's genome could be sequenced quickly and (relatively) affordably. I could have my own genome sequenced! For a genomic scientist like myself, this was the equivalent of going to the moon.
I found a company, Illumina, which offered whole genome sequencing for medical diagnosis. I wrote to Illumina, "I have had over 25 years of experience in the Human Genome Program, and at this time would like to truly explore what I contributed to, these many years. I think the time is right to do this. I am able to interpret the results based on my previous experience in this field, and am comfortable with any results that might be found. So is my family. Realistically, I am 63 years old and healthy, so my risk of discovering a dangerous genetic condition is minimal."
Illumina invited me to participate in their "Understand Your Genome Program," in which I and about 50 other "sequencees" would have our DNA sequenced and attend a daylong seminar on the interpretation and significance of our individual results. We would receive our personal sequence on an iPad at the seminar. This program is a combination of education, publicity, and "getting the message out," and the sequencing is offered at half the commercial cost--and within my budget. So I submitted my credit card info and sent in my sample on November 12, 2013, 10 years and 7 months after the completion of the first human genome was announced.
I hadn't really thought much about the implications of knowing my personal genome sequence until that morning, when I filled out the required paperwork to accompany my sample. A doctor's signature was required to order the test -- no problem, I'm an MD -- and there was an optional signature for genetic counseling -- I signed that, too, since I have clinical experience in that area. Next, my personal medical history: a checklist of common conditions that might have a genetic link (e.g. asthma, blood clots), and whether or not I was adopted. That was easy, I'm pretty healthy and I'm not adopted.
The family history took longer because my father and mother came from large families, 12 and 5 siblings, respectively, and I have 3 sibs of my own. Heart disease, high cholesterol and strokes run rampant in my dad's family. But I had never really noted that there was cancer on my mother's side, and that I might carry a predisposition, too. And Mom did develop Parkinson's disease, and eventually non-Alzheimer's dementia. Hmm. That was something to think about. Did I want to know?
Finally, the informed consent. I signed a statement agreeing to go ahead with the test, and acknowledging that I understand the implications and/or will discuss them with my doctor. I agreed to let them keep my leftover specimen for research. I was also asked to indicate whether there were any categories of genetic diseases that I might find that I did not want to know about, such as those that can't be treated, or progressive neurologic conditions like Huntington's disease, or genes that put me at risk for cancer. I decided that I wanted everything revealed. I signed the forms and sent in the sample.
The next step was to talk to my children and siblings (2 brothers and a sister) about my pending genome sequence, reminding them that they each have a 50-50 chance of carrying any gene that I have. I offered to let them know my results, or to opt out of some or all of the genes, as I had been asked to do. Everyone was okay with this because they knew I was healthy, I was past the age for many genetic conditions, and I didn't have cancer. My son jokingly said, "Sure, but don't tell me if I have Huntington's disease."
Although I'm certain I don't have Huntington's disease, I might still carry a gene that puts me at risk of a disease, such as cancer or diabetes, but never develop the disease. Geneticists call this "low penetrance." My children may get the gene and the disease. I might also carry a single gene for a recessive condition, such as hemochromatosis, which causes disease only if you inherit two abnormal genes. Who knows what is in the half of my parents' genomes that I didn't inherit but my siblings may have? Or in my children's father's DNA? Finally, there are X-linked genes, in which women carry the gene and pass it on to their daughters, but only sons and grandsons develop the disease. Some examples are color blindness and hemophilia. Clearly there are results of my genome sequence that may impact my relatives. I decided to bring my daughter along with me to the March reveal, and to bring her iPad along.
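The Mendelian arithmetic behind the 50-50 chance and the recessive case can be written out in a few lines. These are textbook probabilities, not anything specific to the author's genome.

```python
# Textbook Mendelian probabilities for the inheritance patterns described
# above; illustrative only.

def p_child_inherits(parent_carries: bool = True) -> float:
    """A heterozygous parent transmits a given autosomal allele with P = 1/2."""
    return 0.5 if parent_carries else 0.0

def p_affected_recessive(p_mother_carrier: float, p_father_carrier: float) -> float:
    """Child affected by a recessive condition: both parents must be carriers,
    and each must transmit the abnormal allele (1/4 when both are carriers)."""
    return p_mother_carrier * p_father_carrier * 0.25

# Each child has a 50-50 chance of carrying any single gene a parent carries:
print(p_child_inherits())              # 0.5
# If both parents are known carriers of, say, hemochromatosis:
print(p_affected_recessive(1.0, 1.0))  # 0.25
```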
At this time, my DNA is going through the sequencer and the results are being uploaded to the iPad. I am curious to know what I will find. There may be some data on ethnic origins, which may be helpful in understanding my heritage, as my father's father was illegitimately conceived shortly before his mother emigrated from Poland. Was my great grandfather Polish, or will we find genes from some far-away place? Of course, it will also be fun to know what percent of my genome is Neanderthal. And from the health perspective, I will be screened for the "known" genetic conditions, such as those revealed by the less expensive, more-limited chip-based DNA tests, such as 23andMe. This is valuable information, particularly as it might identify risk factors (cardiac, cancer, diabetes, etc.) or unexpected interactions with medications.
Learning about the known genes is useful but, in most cases, it is not going to make a major impact on a person's health. And, considering the expense, it is certainly not going to justify implementing whole genome sequencing as a standard part of our medical care -- at least not for now. But most of the current discussion on the benefits of genome sequencing has been one-dimensional, focusing on the significance of identifying these known genes and risk factors. Yet what excites me about this project is not the known genes, but the incredible potential of a person's DNA sequence having a major impact on his health and longevity in the future, in ways that we cannot even predict.
Consider, for example, common conditions that clearly have a genetic component but can't be pinned to a single gene. These include asthma, rheumatoid arthritis, lupus, cancer, hypertension, diabetes, obesity, metabolic syndrome, kidney failure, anemia, depression, schizophrenia, heart attacks, osteoarthritis and many others. In fact, conditions like these probably account for the majority of all doctors' visits (excluding infections and accidents). These are the "unknown unknowns," where combinations of genes and environmental factors come into play. Perhaps we will be able to use our genome sequence to prevent these diseases by targeting the mutation, or lessen the severity of the condition, or modify outside factors that impact them.
-What if you knew that you would get diabetes if you were overweight, but you also knew that you could prevent this obesity by modifying a gene in your liver?
-What if you knew that your daughter had the potential to be a math genius? Would you help her develop her potential?
-What if your doctor could treat your hypertension with an individualized combination of drugs that had no side effects for you?
-What if you learned you have a risk of schizophrenia, but could prevent it by a treatment designed to target the DNA sequence and stop its progression?
-What if you knew what biochemical subtype of depression you had, so you could treat it with the correct drug?
Impossible dreams? Sure, but so was obtaining the complete human genome sequence in 1988. There is no question that genome research is moving so rapidly that we don't even have a vision of where it will be in 10 years. But I'm confident that the medical implications will strengthen as research continues and more complete genomes are compiled. I am pleased to be an early contributor. I will have my iPad at the ready when some of these new discoveries are made.
Monday, November 25, 2013
Through A Printer Darkly
by James McGirk
James McGirk works as a literary journalist and is a contributing analyst to an online think tank. The following is an imagined itinerary for a tourist vacation twenty years in the future.
Seven days in the PRINTERZONE
June 20, 2033-June 28, 2033
A quick suborbital hop to Iceland courtesy of Virgin Galactic and then it’s all aboard the ScholarShip, a luxurious three-mast schooner powered by that most ecologically palatable of sources: the wind.
Weather-permitting you and twenty of your fellow alumni will set sail for the Printerzone. (The North and Norwegian Seas can be temperamental: in the event of heavy weather we revert to backup biodiesel power.) Our destination has been recognized by UNESCO as a World Heritage Site: it is both a glimpse at what our future might become should government regulation of printers come to an end, and a fantasy of life free from credit and ubiquitous surveillance. Together we’ll spend a week immersed in this unique community, on board an oilrig in international waters, using three-dimensional additive printing to meet our every need.
Joining us on this adventure will be Prof. Orianna Braum, an associate professor of Maker Culture at Stanford University; Alan Reasor, a forty-year veteran of the additive printing industry; and a young man who prefers to refer to himself by displaying a small silver plastic snowflake in his palm.
ITINERARY - DAY ONE
A colorful day spent traversing the Norwegian and North Seas… sublime marine grays and blues stirred by the bracing sea breeze. Keep your eyes peeled for pods of chirping Minke whales! Many are 100 percent natural.
Breakfast and lunch will be served onboard The ScholarShip by our chef Matthias Spork. Selections include: printed cereals and pastas, catch-of-the-day and a refreshing sorbet spatter-printed by his wife, renowned pastry chef Rebecca Spork.
Prof. Braum and Mr. Reasor will debate: Has Three-Dimensional Printing failed its Promise? Reasor will argue that in most instances economies of scale and the cost of raw materials make conventional manufacturing a more cost-effective solution than 3D printing. Prof. Braum will counter, describing industries that have been radically reshaped by printing—prosthetics and dentistry, bespoke suiting and fashion, at-home robotics and auto-repair—and suggest instead that government safety regulation and restrictive intellectual property licenses have done more to stifle innovation than costs. There will be time for questions afterwards. And then a brief demonstration of piezoelectric substrates: printed materials that respond to the human touch.
Following a hearty and delicious dinner prepared by the Sporks, we invite you for hot toddy and outdoor stargazing with our First Mate. The Arctic winds can be fierce at night, so you have the option of lighting the hearth in your cabin, and viewing a very special Skype broadcast—The Pink Printer’s Naughty Apprentice—which outlines in a most whimsical and titillating way some of the more adult uses of the three-dimensional printer.
(Please note that cabins containing occupants below the age of consent in their country of residence will not receive this broadcast.)
Drop Anchor in the Printerzone
After a hot breakfast ladled out by the Sporks, join your shipmates on deck for an approach unlike anywhere else on earth: a faint glimmer on the horizon gathers in size and sprouts shapes and colors, until the magnificent muddle that is the Printerzone fills our entire field of vision. Crumpled wrapping paper on stilts, a wag once said. Squint at this glorious mass, and beneath the colorful sprays of plastic and the pieces of flotsam and jetsam the residents have creatively incorporated into their homes, you just might make out the original concrete and steel beneath.
Your daily allowance of printer substrate will be issued to you in bulk so that you may trade it for trinkets. A rope ladder will be lowered from above. One at a time you will be hoisted to the Zone. There, our guide, the man who identifies himself with the silver snowflake (henceforth referred to as [*]) shall greet us. He is an interesting specimen. Ask of him what you will. The tour begins at The Workshop, a vast, enclosed “maker space” where P’Zoners (as they call themselves) exchange goods, plans for new designs and information. Barter your substrate for unique souvenirs. Take a class in creation. Then enjoy a sandwich lunch carefully selected by the Sporks. Food may also be bartered with the natives.
After lunch you may explore the Zone at your leisure or enjoy another spirited debate between Reasor and Braum. Printerzone: Model City or Goofy Aberration? Dinner shall be served in the Workshop, which at night transforms into The Wild Rumpus. Guests in peak physical condition may want to join the carousing. (N.B. Beware of custom-printed entheogens and other libations, which, while they may be legal in the Printerzone, are not necessarily safe.)
Fresh croissants and a mug of coffee are the perfect way to begin a crisp Printerzone morning! Daring types may wish to join [*] and don a protective suit printed from the city’s custom printers, and sink beneath the waves for a romp on the seafloor and a look at how the city has evolved below the waterline. Printerzone’s silver suits are said to work as well in orbit as they do submerged beneath the waves. You may examine copies of a Vogue pictorial featuring the suits.
For those who prefer a more relaxed pace in the morning, there will be a bicycle tour of the Zone’s famous hydroponic orchid nursery, its orphanage and its medical clinics (notable, for, among other things, performing the first artificial face transplant). There will also be a chance to examine the city’s recycling system up close as it transforms unwanted printer output and even sewage and brine into the raw materials for printing. No stinky smells we promise!
(All printed foods served aboard the ScholarShip are guaranteed to be free from precursor materials that were made from human waste or potential allergens.)
For lunch, if you’re ready for it, be prepared to break some taboos. Guided by [*], the Sporks, rabbis, halal butchers, vegan chefs, and a number of other experts, you will be given a unique opportunity to eat—among otherwise offensive offerings—a perfect facsimile of human flesh, pork, dolphin steak, non-toxic fugu flesh, endangered sea turtle, and even taste the world’s most potent toxins in perfect moral comfort and safety. Less adventurous offerings will also be available for the squeamish.
During lunch, Braum and Reasor will sound off on the subject of: Whether Full Employment is Possible in a post-3DP World. Braum says printing in three dimensions will kill off the middlemen who camp out in many employment categories (the warehouse managers, the marketing men…); Reasor agrees, but thinks the unfettered labor will be absorbed by innovative new industries. There will be time for questions. Coffee too.
After lunch there will be a demonstration of one of the most potent technologies to emerge from three-dimensional printing: the cheap invisibility cloak. Then you will be joined by some of the city’s most outrageous tailors, haberdashers, wig makers, and costume outfitters. Design a more colorful, eccentric version of yourself and then top off your creation with a freshly printed invisibility cloak, so that you might attend the night’s festivities in absolute comfort. You need only reveal yourself to those you want to. Buffet dinner. Brandy against the chill.
(N.B. Printerzone security forces are equipped with night-vision goggles, so rest assured that you will be safe, but don’t get any antisocial ideas. There are some rules to abide by!)
Pondering the Printerzone
On our fourth day, after a healthy, all-natural breakfast lovingly prepared by the Sporks on the ScholarShip, we delve into the Printerzone’s more pensive side. [*] will lead us on a tour of the Million Memorials, the serene necropolis where the city’s mourners print chalky likenesses of friends and family they’ve lost, and missing objects and abstractions too. A quiet, haunting place. After a pleasing serenade by the P’Zone wailers, we picnic among the monuments, and hear [*]’s own story of loss—his young bride who slipped over the railing during a photo session and drowned in the ocean— and gaze at the spun plastic residue of a brief but happy relationship and afterwards, stroll back to The Workshop for a chance to barter for more amusements.
The subject of the day’s lecture (delivered, of course by Braum and Reasor) will be: Three Dimensional Printing in the Developing World. Printing won’t be the panacea we think it will because the developing world lacks the infrastructure to sustain itself; but surely the availability of items that would otherwise have been unavailable is valuable—but what about the cottage industries that would be eradicated by printing, wouldn’t that snuff out any printing-related development? Drink during the lecture if you like. Gaze longingly at potential mates if you wish to. This is a pleasure cruise.
After a brief question and answer session, a fittingly austere supper will be served, and [*] will introduce us to a non-profit initiative sponsored by the Printerzone: a crisis response team that will race to trouble spots and, without the needless hassle of lines of communication and supply, be able to provide surgical equipment, medicines and shelter at a fraction of the cost… cost? Yes, even this barter-driven economy is soliciting funds. Contribute what you will. The city’s orphans hand out orchids.
Snack before the Wild Rumpus. Serenade. Custom sex surrogates printed for an additional fee. (Please: No printing of lecturers, crewmembers, fellow travelers without their expressed permission, no skin prints using DNA within a 15 percent match of your own.)
At home in the Printerzone
Many travelers wake on their fifth day beside a grim memory, manifest in the form of slightly abused piezoelectric plastic. You may find it cathartic to batter your unwanted surrogate to pieces, or, if you are the showy sort—enter the surrogate into the ring for gladiatorial combat. The festivities begin with a squabble between Braum and Reasor’s creations (one wonders at the tension between them), followed by a battle royal, and a moving speech by [*] about whether or not a surrogate has a soul. Each participant will be allowed to download a copy of Do Androids Dream of Electric Sheep? for later review.
By now you’ve spent nearly a week looking up at the frills wrapped around the upper decks of the rig. Perhaps you’ve wondered what the lives of the residents are like beyond the Wild Rumpus or the Workshop floor. Today you’ll enjoy an intimate glance at their living quarters.
Some might find this disturbing. There are children here, you might say, how could one live like this? But they’re hardly cut off; well, maybe they are cut off from nature and history and dry land but not the ‘net. See the data goggles they wear? The tykes and pubers who strut about the Zone have come to see the boundary between what is virtual and what is not as a thing much more permeable than you or I.
Here the Internet is inside out. People print virtual things. Shudder at the home robots with their suction cup attachments. Are they vacuum cleaners or sexual abominations or both? Much of the home décor won’t make sense unless you’re jacked into the ’net. Too prone to data dropsy to peer through a lens? Ask yourself why this trip appealed to you in the first place, but fear not—there are gentle entheogens that replicate the experience of data being blazed onto your eyeballs.
Nighttime. Rumpus again. Dance and flail until you feel yourself dissolve into the communal flesh. The Sporks have taken the day off. Truth be told, they’re disgusted with three-dimensional printing and what it means for their profession. Can you blame them? Who cares, you aren’t hungry. From up high, the Zone looks terraced and circular, like a medieval etching of the Inferno. The Rumpus looks like the writhing of the damned. You think you see Braum and Reasor embrace. [*] sits beside you and tells you his given name was Virgil. Has he been drugging you?
Beyond the Printerzone
Someone wakes you up by firing a pistol in the air. That’s right, there are a lot of weapons here. This is a polite society. Ugh, the sunlight streaming into your eyes is sheer agony. Your neurons are crying out. Caffeine! Dopamine! Serotonin! You wobble out on deck. The Sporks are back. Thank God the Sporks are back. They pour you a mug of coffee. They cut you a grapefruit. Crackling bacon, the smell of bread baking.
[*] won’t look you in the eye, the sweaty creep.
Above you the colorful plastic printed houses look chintzy in the light. They hoist you up. Peek below. The ScholarShip is an oasis of sanity and earth tones. Everything else is Technicolor Burp. Can you really face another day of this? The medic gives you something for your throbbing head. A party assembles. Wrapped sandwiches for lunch and shot glasses of Astronaut Ice Cream. A hardhat. That silver protective garb you’ll have to peel off afterwards. The place stinks of kerosene (that’s jet fuel, someone will say). There are men from NASA, and men from the Air Force, and men with helmets that look like they’re made entirely from mirrorshades. Cyclopses. You want to leave. There’s a faint but unmistakable rumble.
Reasor and Braum waddle to the front of your party. Another debate: Space Exploration is Three-Dimensional Printing’s Killer App. This time they both agree. Reasor thinks the way to reach for the stars is to print a massive cable and haul ourselves up. Braum says that’s great, but what’s better is that you can go anywhere in space and print anything you could possibly need. You can beam plans to the spaceship, plans for things that weren’t invented when the ship took off. Applause. Time for questions. Cups of coffee. Cookies.
Wonder: what if printers were used to print more printers, ad infinitum?
Clutch your mug. Look around. The top level is cold and metallic. Limp suits hang waiting; rows of silver helmets that look like Belgian glass globes wink in the setting sun. Rockets lie in pieces: fins, nose caps, nozzles, streamlined bellies, all being assembled from spools of plastic. Dinner is splendid and sober. You remember little of it. There were candles. An ant walked across the table.
Tonight there is no Wild Rumpus. You sleep on the rig, beneath the stars but protected by an infinitesimal layer of plastic. A storm blows in. Electricity rips the Arctic sky. Rain pounds plastic but never touches you. You are woken by a helmeted Cyclops: “Some visitors decide never to leave,” he says, extending a gloved hand. It’s silver. “We’ll nourish you.” Behind the smooth surface you can just make out the blurry face of [*].
Wake to the smell of Sporks’ cooking. A printed snowflake has been placed beside you. Visitors may opt to extend their stay. Or leave and never, ever come back.
Monday, November 18, 2013
Homo Erectus, or I Married a Ham
by Carol A. Westbrook
My husband loves big erections. Don't get me wrong, I'm not speaking here about Viagra, I'm talking about tall towers made of metal, long wires strung high in the sky, and tall antennas protruding from car roofs. He loves anything that broadcasts or receives those elusive radio waves, the bigger the better. That is because he is a ham, also known as an amateur radio enthusiast, and all hams love antennas.
Amateur radio has been around since the early 1900s, shortly after Marconi's first transatlantic wireless transmission in 1901. Initially, radio amateurs communicated using Morse code, as did commercial radiotelegraphy, but voice transmission quickly gained in popularity. In order to broadcast on the ham radio frequencies, hams must obtain an amateur radio license from the FCC, along with a unique call sign, their ham "name." Proficiency in Morse code was once required in order to obtain an amateur radio license, but this requirement was finally dropped in 2003, which opened up the field to many more interested radio amateurs, my husband being one of them. As a result, the hobby is becoming popular again. There are local clubs to join, as well as national get-togethers called "hamfests" where there are lectures, demonstrations, equipment swap-meets, and licensing exams.
What do hams do? They communicate by radio. They use everything from a battery-powered hand-held transmitter to a massive collection of specialized radio equipment located in a corner of their home or garage, which they call their "ham shack." (See picture of my husband's ham shack, above, in his library). They talk to other ham radio operators, and participate in conversations that may be local or span the globe, depending on the radio wavelength, the power of their transmitter, and their antenna. And they erect large antennas, perhaps on an outside tower or the roof of their home.
Like Marconi, hams learn early on that it's relatively easy to send out a radio signal, but the distance it travels depends as much on the size and configuration of the antenna as it does on the signal strength. There is an art to constructing an antenna, and hams spend a great deal of effort on it. That is why hams are fascinated by antennas. They are the quintessential "homo erectus."
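To make the connection between antenna size and radio wavelength concrete, here is a minimal sketch of the classic ham rule of thumb for a half-wave dipole: total length in feet is approximately 468 divided by the frequency in megahertz (the example band frequencies below are my own illustrative choices, not from the essay).

```python
def dipole_length_feet(freq_mhz):
    """Approximate total length (feet) of a half-wave dipole.

    Uses the traditional rule of thumb: 468 / f(MHz). The constant is
    roughly 5% shorter than a free-space half wavelength, which accounts
    for end effects in real wire antennas.
    """
    return 468.0 / freq_mhz

# Rough lengths for a few popular amateur bands (frequencies assumed
# for illustration: 80 m -> 3.6 MHz, 40 m -> 7.1 MHz, 20 m -> 14.2 MHz).
for band, freq in [("80 m", 3.6), ("40 m", 7.1), ("20 m", 14.2)]:
    print(f"{band}: {dipole_length_feet(freq):.1f} ft")
```

The numbers show why the lower-frequency bands demand the long wires strung from roof to garage: an 80-meter dipole is around 130 feet of wire, while a 20-meter dipole fits in a modest backyard.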
My husband's fascination was fueled by his boyhood days. In the 1950s he felt isolated from the outside world because his family's radio and TV could only receive a few stations, living as they did in a valley surrounded by the Pocono Mountains. He learned that he could receive more stations by stringing long wires throughout the house, or on the roof -- creating his own makeshift antennas. This led to an engineering degree, an interest in telecommunications, and a ham radio license.
Our houses are festooned with antennas. We have long wires strung from roof to garage, a small tower on the hillside, and four large parabolic dishes, from 6 to 11 feet in diameter, that receive signals from transmitting satellites... but that's another story. We even have a stealth antenna in our garden which, to the casual observer, appears to be just another garden ornament, nestled among the roses. (See picture) Unlike other "ham widows," I don't mind these antennas -- they are certainly conversation pieces. I do not have a ham license -- I didn't pass the exam, but then again I didn't study for it. But I often go along with my husband to hamfests, including the famous Dayton Hamvention, which takes place every May.
What is so appealing about ham radio? Why spend your time and money to buy archaic equipment, erect antennas, and mess up your house -- when you can just call on your cell or Skype your friend? The answer is simple -- because you can. As a hobbyist, you cannot easily make a microchip, or build a cell phone, or create your own internet, but you can assemble your own equipment and broadcast your own voice around the world. Just like Marconi! What a high! What a sense of empowerment! And ham radio is a great hobby for youngsters who want to learn about the electrical and mechanical world, and enjoy the challenge of "getting out of the valley" using their own ingenuity and design. If you would like to learn more, contact the national association for amateur radio, the American Radio Relay League, to learn how to get involved, or visit their headquarters and museum at 225 Main Street, Newington, CT 06111-1494 USA. You might get hooked, too.