Monday, July 20, 2015
"We are at home with situations of legal ambiguity.
And we create flexibility, in situations where it is required."
Consider a few hastily conceived scenarios from the near future. An android charged with performing elder care must deal with an uncooperative patient. A driverless car carrying passengers must decide between suddenly stopping and causing a pile-up behind it. A robot responding to a collapsed building must choose between two people to save. The question that unifies these scenarios is not just how to make the correct decision but, more fundamentally, how to treat the entities involved. Is it possible for a machine to be treated as an ethical subject – and, by extension, for an artificial entity to possess "robot rights"?
Of course, "robot rights" is a crude phrase that shoots us straight into a brambly thicket of anthropomorphisms; let's not quite go there yet. Perhaps it's more accurate to ask if a machine – something that people have designed, manufactured and deployed into the world – can have some sort of moral or ethical standing, whether as an agent or as a recipient of some action. What's really at stake here is the contention that a machine can act sufficiently independently in the world that it can be held responsible for its actions and, conversely, if a machine has any sort of standing such that, if it were harmed in any way, this standing would serve to protect its ongoing place and function in society.
You could, of course, dismiss all this as a bunch of nonsense: that machines are made by us exclusively for our use, and anything a robot or computer or AI does or does not do is the responsibility of its human owners. You don't sue the scalpel, rather you sue the surgeon. You don't take a database to court, but the corporation that built it – and in any case you are probably not concerned with the database itself, but with the consequence of how it was used, or maintained, or what have you. As far as the technology goes, if it's behaving badly you shut it off, wipe the drive, or throw it in the garbage, and that's the end of the story.
This is not an unreasonable point of departure, and is rooted in what's known as the instrumentalist view of technology. For an instrumentalist, technology is only ever an extension of ourselves and does not possess any autonomy. But how do you maintain that control given the sort of complexity we are now designing into our machines? Our instrumentalist proclivities whisper to us that there must be an elegant way of doing so. So let's begin with a famous first attempt: Isaac Asimov's Three Laws of Robotics.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Some time later, Asimov added a fourth, which was intended to precede all the others, so it's really the ‘Zeroth' Law:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The Laws, which made their first appearance in a 1942 story that is, fittingly enough, set in 2015, are what is known as a deontology: an ethics expressed as a system of axioms. Basically, deontology provides the ethical ground for all further belief and action: the Ten Commandments are a classic example. But the difficulties with deontology become apparent when one examines the assumptions inherent in each axiom. For example, the First Commandment states, "Thou shalt have no other gods before me". Clearly, Yahweh is not saying that there are no other gods, but rather that any other gods must take a back seat to him, at least as far as the Israelites are concerned. The corollary is that non-Israelites can have whatever gods they like. Nevertheless, most adherents to Judeo-Christian theology would be loath to admit the possibility of polytheism. It takes a lot of effort to keep all those other gods at bay, especially if you're not an Israelite – it's much easier if there is only one. But you can't make that claim without fundamentally reinterpreting that crucial first axiom.
Asimov's axioms can be similarly poked and prodded. Most obviously, we have the presumption of perfect knowledge. How would a robot (or AI or whatever) know if an action was harmful or not? A human might scheme to split a harmful plan into actions that are by themselves harmless, distribute them across several artificial entities, and then combine the results to produce harmful consequences. Sometimes knowledge is impossible for both humans and robots: in the case of a stock-trading AI, there is uncertainty about whether a given trade harms another human being. If the AI makes a profitable trade, does the other side lose money, and if so, does this constitute harm? How can the machine know if the entity on the other side is in fact losing money? Would it matter if that other entity were another machine and not a human? But don't machines ultimately represent humans in any case?
Better yet, consider a real-life example:
A commercial toy robot called Nao was programmed to remind people to take medicine.
"On the face of it, this sounds simple," says Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford who did the work with her husband, computer scientist Michael Anderson of the University of Hartford in Connecticut. "But even in this kind of limited task, there are nontrivial ethics questions involved." For example, how should Nao proceed if a patient refuses her medication? Allowing her to skip a dose could cause harm. But insisting that she take it would impinge on her autonomy.
In this case, the Hippocratic ‘do no harm' has to be balanced against a more utilitarian ‘do some good'. Assuming it could, does the robot force the patient to take the medicine? Wouldn't that constitute potential harm (ie, the possibility that the robot hurts the patient in the act)? Would that harm be greater than not taking the medicine, just this once? What about tomorrow? If we are designing machines to interact with us in such profound and nuanced ways, those machines are already ethical subjects. Our recognition of them as such is already playing catch-up with the facts on the ground.
As implied by the stock-trading example, another deontological shortcoming lies in the definitions themselves: what's a robot, and what's a human? As robots become more human-like, and humans become more engineered, the line will become blurry. And in many cases, a robot will have to make a snap judgment. What's binary for "quo vadis", and what do you do with a lying human? Because humans lie for the strangest reasons.
Finally, the kind of world that Asimov's laws presuppose is one where robots run around among humans. It's a very specific sort of embodiment. In fact, it is a sort of Slavery 2.0, where robots clearly function for the benefit and in the service of humanity. The Laws are meant to facilitate a very material cohabitation, whereas the kind of broadly distributed, virtually placeless machine intelligence that we are currently developing by leveraging the Internet is much more slippery, and resembles the AI of Spike Jonze's ‘Her'. How do you tell things apart in such a dematerialized world?
The final nail in Asimov's deontological coffin is the assumption of ‘hard-wiring'. That is, Asimov claims that the Laws would be a non-negotiable part of the basic architecture of all robots. But it is wiser to prepare for the exact opposite: the idea that any machine of sufficient intelligence will be able to reprogram itself. The reasons why are pretty irrelevant – it doesn't have to be some variant of SkyNet suddenly deciding to destroy humanity. It may just sit there and not do anything. It may disappear, as the AIs did in ‘Her'. Or, as in William Gibson's Neuromancer, it may just want to become more of itself, and decide what to do with that later on. Gibson never really tells us why the two AIs – which function as the true protagonists of the novel – even wanted to do what they did.
This last thought indicates a fundamental marker in the machine ethics debate. A real difference is emerging here, and that is the notion of inscrutability. In order for the stance of instrumentality to hold up, you need a fairly straight line of causality. I saw this guy on the beach, I pulled the trigger, and now the guy is dead. It may be perplexing, I may not be sure why I pulled the trigger at that moment, but the chain of events is clear, and there is a system in place to handle it, however problematic. On the other hand, how or why a machine comes to a conclusion or engages in a course of action may be beyond our scope to determine. I know this sounds a bit odd, since after all we built the things. But a record of a machine's internal decision-making would have to be a deliberate part of its architecture, and this is expensive and perhaps not commensurate with the agenda of its designers: for example, Diebold made both ATMs and voting machines. Only the former provided receipts, making it, in theory, fairly easy to steal an election.
If Congress is willing to condone digitally supervised elections without paper trails, imagine how far away we are from the possibility of regulating the Wild West of machine intelligence. And in fact AIs are being designed to produce results without any regard for how they get to a particular conclusion. One such deliberately opaque AI is Rita, mentioned in a previous essay. Rita's remit is to deliver state-of-the-art video compression technology, but how it arrives at its conclusions is immaterial to the fact that it manages to get there. In the comments to that piece, a friend added that "it is a regular occurrence here at Google where we try to figure out what our machine learning systems are doing and why. We provide them input and study the outputs, but the internals are now an inscrutable black box. Hard to tell if that's a sign of the future or an intermediate point along the way."
Nevertheless, we can try to hold on to the instrumentalist posture and maintain that a machine's black box nature still does not merit the treatment accorded to an ethical subject; that it is still the results or consequences that count, and that the owners of the machine retain ultimate responsibility for it, whether or not they understand it. Well, who are the owners, then?
Of course, ethics truly manifests itself in society via the law. And the law is a generally reactive entity. In the Anglo-American case law tradition, laws, codes and statutes are passed or modified (and less often, repealed) only after bad things happen, and usually only in response to those specific bad things. More importantly for the present discussion, recent history shows that the law (or to be more precise, the people who draft, pass and enforce it) has not been nearly as eager to punish the actions of collectives and institutions as it has been to pursue individuals. Exhibit A in this regard is the number of banks found guilty of vast criminality following the 2008 financial crisis and, by corollary, the number of bankers thrown in jail for same. Part of the reason for this is the way that the law already treats non-human entities. I am reminded of Mitt Romney on the Presidential campaign trail a few years ago, benignly musing that "corporations are people, my friend".
Corporate personhood is a complex topic but at its most essential it is a great way to offload risk. Sometimes this makes sense – entrepreneurs can try new ideas and go bankrupt but not lose their homes and possessions. Other times, as with the Citizens United decision, the results can be grotesque and impactful in equal measure. But we ought to look to the legal history of corporate personhood as a possible test case for how machines may become ethical subjects in the eyes of the law. Not only that, but corporations will likely be the owners of these ethical subjects – from a legal point of view, they will look to craft the legal representation of machines as much to their advantage as possible. To not be too cynical about it, I would imagine this would involve minimal liability and maximum profit. This is something I have not yet seen discussed in machine ethics circles, where the concern seems to be more about the instantiation of ethics within the machines themselves, or in highly localized human-machine interactions. Nevertheless, the transformation of the ethical machine-subject into the legislated machine-subject – put differently, the machines as subjects of a legislative gaze – will be of incredibly far-reaching consequence. It will all be in the fine print, and I daresay deliberately difficult to parse. When that day comes, I will be sure to hire an AI to help me make sense of it all.
How Viruses Feign Death to Survive and Thrive
by Jalees Rehman
Billions of cells die each day in the human body in a process called "apoptosis" or "programmed cell death". When cells encounter stress such as inflammation, toxins or pollutants, they initiate an internal repair program which gets rid of the damaged proteins and DNA molecules. But if the damage exceeds their capacity for repair, then cells are forced to activate the apoptosis program. Apoptotic cells do not suddenly die and vanish; instead, they execute a well-coordinated series of molecular and cellular signals which result in a gradual disintegration of the cell over a period of several hours.
What happens to the cellular debris that is generated when a cell dies via apoptosis? It consists of fragmented cellular compartments, proteins and fat molecules that are released from the cellular corpse. This "trash" could cause even more damage to neighboring cells because it exposes them to molecules that normally reside inside a cell and could trigger harmful reactions on the outside. Other cells therefore have to clean up the mess as soon as possible. Macrophages are cells which act as professional garbage collectors and patrol our tissues, on the look-out for dead cells and cellular debris. The remains of the apoptotic cell act as an "Eat me!" signal to which macrophages respond by engulfing and gobbling up the debris ("phagocytosis") before it can cause any further harm. Macrophages aren't always around to clean up the debris, which is why other cells such as fibroblasts or epithelial cells can act as non-professional phagocytes and also ingest the dead cell's remains. Nobody likes to be surrounded by trash.
Clearance of apoptotic cells and their remains is thus crucial to maintain the health and function of a tissue. Conversely, if phagocytosis is inhibited or prevented, then the lingering debris can activate inflammatory signals and cause disease. Multiple autoimmune diseases, lung diseases and even neurologic diseases such as Alzheimer's disease are associated with reduced clearance. The cause and effect relationship is not always clear because these diseases can promote cell death. Are the diseases just killing so many cells that the phagocytosis capacity is overwhelmed, does the debris actually promote the diseased state, or is it a bit of both, resulting in a vicious cycle of apoptotic debris resulting in more cell death and more trash buildup? Researchers are currently investigating whether specifically tweaking phagocytosis could be used as a novel way to treat diseases with impaired clearance of debris.
During the past decade, multiple groups of researchers have come across a fascinating phenomenon by which viruses hijack the phagocytosis process in order to thrive. One of the "Eat Me!" signals for phagocytes is that debris derived from an apoptotic cell is coated by a membrane enriched with phosphatidylserines, which are negatively charged molecules. Phosphatidylserines are present in all cells but they are usually tucked away on the inside of cells and are not seen by other cells. When a cell undergoes apoptosis, phosphatidylserines are flipped from the inner to the outer surface of the membrane. When particles or cell fragments present high levels of phosphatidylserines on their outer membranes, a phagocyte knows that it is encountering the remains of a formerly functioning cell that needs to be cleared by phagocytosis.
However, it turns out that not all membranes rich in phosphatidylserines are remains of apoptotic cells. Recent research studies suggest that certain viruses invade cells, replicate within the cell and when they exit their diseased host cell, they cloak themselves in membranes rich in phosphatidylserines. How the viruses precisely appropriate the phosphatidylserines of a cell that is not yet apoptotic and then adorn their viral membranes with the cell's "Eat Me!" signal is not yet fully understood and a very exciting area of research at the interface of virology, immunology and the biology of cell death.
What happens when the newly synthesized viral particles leave the infected cell? Because these viral particles are coated in phosphatidylserine, professional phagocytes such as macrophages or non-professional phagocytes such as fibroblasts or epithelial cells will assume they are encountering phosphatidylserine-rich dead cell debris and ingest it in their roles as diligent garbage collectors. This ingestion of the viral particles has at least two great benefits for the virus. First and foremost, it allows the virus entry into a new host cell, which it can then convert into another virus-producing factory. Entering cells usually requires specific receptors by which viruses gain access to selected cell types; this is why many viruses can infect only certain cell types – not all cells carry the receptors that allow for viral entry. However, when viruses hijack the apoptotic debris phagocytosis mechanism, the phagocytic cell is "inviting" the viral particle inside, assuming that it is just dead debris. But there is perhaps an even more insidious advantage for the virus. During clearance of apoptotic cells, certain immune pathways are suppressed by the phagocytes in order to pre-emptively dampen excessive inflammation that might be caused by the debris. It is therefore possible that by pretending to be fragments of dead cells, viruses coated with phosphatidylserines may also suppress the immune response of the infected host, thus evading detection and destruction by the immune system.
Viruses for which this process of apoptotic mimicry has been described include the deadly Ebola virus and the Dengue virus, each using its own mechanism to create its fake mask of death. The Ebola virus buds directly from the fat-rich outer membrane of the infected host cell in the form of elongated, thread-like particles coated with the cell's phosphatidylserines. The Dengue virus, on the other hand, is synthesized and packaged inside the cell and appears to purloin the cell's phosphatidylserines during its synthesis, long before it even reaches the cell's outer membrane. As of now, it appears that viruses from at least nine distinct families use the apoptotic mimicry strategy, but the research on apoptotic mimicry is still fairly new and it is likely that scientists will discover many more viruses which rely on this and similar evolutionary strategies to evade the infected host's immune response and spread throughout the body.
Uncovering the phenomenon of apoptotic mimicry gives new hope in the battle against viruses for which we have few targeted treatments. In order to develop feasible therapies, it is important to precisely understand the molecular mechanisms by which the hijacking occurs. One cannot block all apoptotic clearance in the body because that would have disastrous consequences due to the buildup of legitimate apoptotic debris that needs to be cleared. However, once scientists understand how viruses concentrate phosphatidylserines or other "Eat Me!" signals in their membranes, it may be possible to specifically uncloak these renegade viruses without compromising the much needed clearance of conventional cell debris.
Elliott, M. R. and Ravichandran, K.S. "Clearance of apoptotic cells: implications in health and disease" The Journal of Cell Biology 189.7 (2010): 1059-1070.
Amara, A. and Mercer, J. "Viral apoptotic mimicry." Nature Reviews Microbiology (2015).
Monday, June 22, 2015
The Long Shadow of Nazi Indoctrination: Persistence of Anti-Semitism in Germany
by Jalees Rehman
Anti-Semitism and the Holocaust are among the central themes in the modern German secondary school curriculum. During history lessons in middle school, we learned about anti-Semitism and the persecution of Jews in Europe during the Middle Ages and early modernity. Our history curriculum in the ninth and tenth grades focused on the virulent growth of anti-Semitism in 20th century Europe, how Hitler and the Nazi party used anti-Semitism as a means to rally support and gain power, and how the Nazi apparatus implemented the systematic genocide of millions of Jews.
In grades 11 to 13, the educational focus shifts to a discussion of the broader moral and political context of anti-Semitism and Nazism. How could the Nazis enlist the active and passive help of millions of "upstanding" citizens to participate in this devastating genocide? Were all Germans who did not actively resist the Nazis morally culpable or at least morally responsible for the Nazi horrors? Did Germans born after the Second World War inherit some degree of moral responsibility for the crimes committed by the Nazis? How can German society ever redeem itself after being party to the atrocities of the Nazis? Anti-Semitism and Nazism were also important topics in our German literature and art classes because the Nazis persecuted and murdered German Jewish intellectuals and artists, and because the shame and guilt experienced by Germans after 1945 featured so prominently in German art and literature.
One purpose of extensively educating German schoolchildren about this dark and shameful period of German history is the hope that if they are ever faced with the reemergence of prejudice directed against Jews or any other ethnic or religious group, they will have the courage to stand up for those who are being persecuted and make the right moral choices. As such, it is part of the broader Vergangenheitsbewältigung (wrestling with one's past) in post-war German society, which takes place not only in schools but in various public venues. The good news, according to recent research published in the Proceedings of the National Academy of Sciences by Nico Voigtländer and Hans-Joachim Voth, is that Germans who attended school after the Second World War have shown a steady decline in anti-Semitism. The bad news: Vergangenheitsbewältigung is a bigger challenge for Germans who attended school under the Nazis, because a significant proportion of them continue to exhibit high levels of anti-Semitic attitudes more than half a century after the defeat of Nazi Germany.
Voigtländer and Voth examined the results of the large General Social Survey for Germany (ALLBUS), in which several thousand Germans were asked about their values and beliefs. The survey took place in 1996 and 2006, and the researchers combined the results of both waves, for a total of 5,300 participants from 264 German towns and cities. The researchers were specifically interested in anti-Semitic attitudes and focused on three survey questions related to anti-Semitism. Participants were asked to respond on a scale of 1 to 7 and indicate whether they thought Jews had too much influence in the world, whether Jews were responsible for their own persecution, and whether Jews should have equal rights. The researchers categorized participants as "committed anti-Semites" if they revealed anti-Semitic attitudes in their answers to all three questions. The overall rate of committed anti-Semites was 4% in Germany, but there was significant variation depending on the geographical region and the age of the participants.
Among Germans born in the 1970s and 1980s, only 2%-3% were committed anti-Semites, whereas the rate was roughly double (6%) for Germans born in the 1920s. However, the researchers noted one exception: Germans born in the 1930s. Those citizens had the highest fraction of committed anti-Semites: 10%. The surveys were conducted in 1996 and 2006, when the participants born in the 1930s were 60-75 years old. In other words, one out of ten Germans of that generation did not think that Jews deserved equal rights!
The researchers attributed this to the fact that people born in the 1930s were exposed to the full force of systematic Nazi indoctrination with anti-Semitic views, which started as early as elementary school and also took place during extracurricular activities such as the Hitler Youth programs. The Nazis came to power in 1933 and immediately began implementing a wholesale propaganda program in all schools. A child born in 1932, for example, would have attended elementary school and middle school as well as Hitler Youth programs from age six onwards until the end of the war in 1945, becoming inculcated with anti-Semitic propaganda throughout.
The researchers also found that the large geographic variation in anti-Semitic prejudices today was in part due to the pre-Nazi history of anti-Semitism in any given town. The Nazis were neither the only nor the first openly anti-Semitic political movement in Germany. There were German political parties with primarily anti-Jewish agendas which ran for election in the late 19th and early 20th centuries. Voigtländer and Voth analyzed the votes that these anti-Semitic parties received more than a century ago, from 1890 to 1912. Towns and cities with the highest support for anti-Semitic parties in this pre-Nazi era are also the ones with the highest levels of anti-Semitic prejudice today. When children were exposed to anti-Semitic indoctrination in schools under the Nazis, the success of these hateful messages depended on how "fertile" the ground was. If the children were growing up in towns and cities where family members or public figures had supported anti-Jewish agendas during prior decades, then there was a much greater likelihood that the children would internalize the Nazi propaganda. The researchers cite the memoir of the former Hitler Youth member Alfons Heck:
"We who were born into Nazism never had a chance unless our parents were brave enough to resist the tide and transmit their opposition to their children. There were few of those."
- Alfons Heck in "The Burden of Hitler's Legacy"
The researchers then address the puzzlingly low levels of anti-Semitic prejudice among Germans born in the 1920s. If the researchers' theory were correct that anti-Semitic prejudices persist today because of Nazi school indoctrination, then why aren't Germans born in the 1920s more anti-Semitic? A child born in 1925 would have been exposed to Nazi propaganda throughout secondary school. Oddly enough, women born in the 1920s did show high levels of anti-Semitism when surveyed in 1996 and 2006, but men did not. Voigtländer and Voth solve this mystery by reviewing wartime fatality rates. The most zealous male Nazi supporters with strong anti-Semitic prejudices were more likely to volunteer for the Waffen-SS, the military wing of the Nazi party. Some SS divisions had an average age of 18, and these divisions had some of the highest fatality rates. This means that German men born in the 1920s weren't somehow immune to Nazi propaganda. Instead, those who bought into it most fervently were the most likely to perish, and this is why we now see lower-than-expected levels of anti-Semitism among Germans born during that decade.
A major limitation of this study is its correlational nature and the lack of data on individual exposure to Nazism. The researchers base their conclusions on birth years and towns' historical votes for anti-Semitic parties, but they did not track how much individuals were exposed to anti-Semitic propaganda in their schools or their families. Such a correlational study cannot establish a cause-effect relationship between propaganda and the persistence of prejudice today. One factor not considered by the researchers, for example, is that Germans born in the 1930s are also among those who grew up as children in post-war Germany, often under conditions of extreme poverty and even starvation.
Even without being able to establish a clear cause-effect relationship, the findings of the study raise important questions about the long-term effects of racial propaganda. It appears that a decade of indoctrination may give rise to a lifetime of hatred. Our world continues to be plagued by prejudice against fellow humans based on their race or ethnicity, religion, political views, gender or sexual orientation. Children today are not subject to the systematic indoctrination implemented by the Nazis but they are probably still exposed to more subtle forms of prejudice and we do not know much about its long-term effects. We need to recognize the important role of public education in shaping the moral character of individuals and ensure that our schools help our children become critical thinkers with intact moral reasoning, citizens who can resist indoctrination and prejudice.
Voigtländer N and Voth HJ. "Nazi indoctrination and anti-Semitic beliefs in Germany" Proceedings of the National Academy of Sciences (2015), doi: 10.1073/pnas.1414822112
Artificially Flavored Intelligence
"I see your infinite form in every direction,
with countless arms, stomachs, faces, and eyes."
~ Bhagavad-Gītā 11.16
About ten days ago, someone posted an image on Reddit, a sprawling site that is the Internet's version of a clown car that's just crashed into a junk shop. The image, appropriately uploaded to the 'Creepy' corner of the website, is kind of hard to describe, so, assuming that you are not at the moment on any strong psychotropic substances, or are not experiencing a flashback, please have a good, long look before reading on.
What the hell is that thing? Our sensemaking gear immediately kicks into overdrive. If Cthulhu had had a pet slug, this might be what it looked like. But as you look deeper into the picture, all sorts of other things begin to emerge. In the lower left-hand corner there are buildings and people, and people sitting on buildings which might themselves be on wheels. The bottom center of the picture seems to be occupied by some sort of a lurid, lime-colored fish. In the upper right-hand corner, half-formed faces peer out of chalices. The background wallpaper evokes an unholy copulation of brain coral and astrakhan fur. And still there are more faces, or at least eyes. There are indeed more eyes than an Alex Grey painting, and they hew to none of the neat symmetries that make for a safe world. In fact, the deeper you go into the picture, the less perspective seems to matter, as solid surfaces dissolve into further cascades of phantasmagoria. The same effect applies to the principal thing, which has an indeterminate number not just of eyes, ears and noses, but even of heads.
The title of the thread wasn't very helpful, either: "This image was generated by a computer on its own (from a friend working on AI)". For a few days, that was all anyone knew, but it was enough to incite another minor-scale freakout about the nature and impending arrival of Our Computer Overlords. Just as we are helpless to not over-interpret the initial picture, so we are all too willing to titillate ourselves with alarmist speculations concerning its provenance. This was presented as a glimpse into the psychedelic abyss of artificial intelligence; an unspeakable, inscrutable intellect briefly showed us its cards, and it was disquieting, to put it mildly. Is that what AI thinks life looks like? Or stated even more anxiously, is that what AI thinks life should look like?
Alas, our giddy Lovecraftian fantasies weren't allowed to run amok for more than a few days, since the boffins at Google tipped their hand with a blog post describing what was going on. The image, along with many others, was the result of a few engineers playing around with neural networks and seeing how far they could push them. In this case, a neural network is ‘trained' to recognize something when it is fed thousands of instances of that thing. So if the engineers want to train a neural network to recognize the image of a dog, they will keep feeding it images of dogs until it acquires the ability to identify dogs in pictures it hasn't seen before. For the purposes of this essay, I'll just leave it at that, but here is a good explanation of how neural networks ‘learn'.
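To make ‘training' slightly less abstract, here is a minimal sketch of such a loop, written in PyTorch purely for illustration. The tiny linear model, the random tensors standing in for thousands of labeled dog pictures, and all the settings are my own assumptions, not anything Google has described:

```python
import torch
import torch.nn as nn

# Stand-in for "thousands of images of dogs": random pixels with random labels.
fake_loader = [(torch.rand(8, 3, 64, 64), torch.randint(0, 2, (8,)))
               for _ in range(10)]

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # dog / not-dog
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for images, labels in fake_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong were the guesses?
    loss.backward()                        # trace the blame backwards
    optimizer.step()                       # nudge the weights accordingly
```

Repeat that loop over enough real photographs and the weights settle into something that labels unseen pictures tolerably well; that, and nothing more mysterious, is what ‘learning' means here.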
The networks in question were trained to recognize animals, people and architecture. But things got interesting when the Google engineers took a trained neural net and fed it only one input – over and over again. After each pass, the slightly modified image was re-submitted to the network. If it were possible to imagine the network having a conversation with itself, it might go something like this:
First pass: Ok, I'm pretty good at finding squirrels and dogs and fish. Does this picture have any of these things in it? Hmmm, no, although that little blob looks like it might be the eye of one of those animals. I'll make a note of that. Also that lighter bit looks like fur. Yeah. Fur.
Second pass: Hey, that blob definitely looks like an eye. I'll sharpen it up so that it's more eye-like, since that's obviously what it is. Also, that fur could look furrier.
Third pass: That eye looks like it might go with that other eye that's not that far off. That other dark bit in between might just be the nose that I'd need to make it a dog. Oh wow – it is a dog! Amazing.
The results are essentially thousands of such decisions made across dozens of layers of the network. Each layer of ‘neurons' hands over its interpretation to the next layer up the hierarchy, and a final decision of what to emphasize or de-emphasize is made by the last layer. The fact that half of a squirrel's face may be interpolated within the features of the dog's face is, in the end, irrelevant.
But I also feel very wary about having written this fantasy monologue, since framing the computational process as a narrative is something that makes sense to us, but in fact isn't necessarily true. By way of comparison, the philosopher Jacques Derrida was insanely careful about stating what he could claim in any given act of writing, and did so while he was writing. Much to the consternation of many of his readers, this act of deconstructing the text as he was writing it was nevertheless required for him to be accurate in making his claims. Similarly, while the anthropomorphic cheat is perhaps the most direct way of illustrating how AI ‘works', it is also very seductive and misleading. I offer up the above with the exhortation that there is no thinking going on. There is no goofy conversation. There is iteration, and interpretation, and ultimately but entirely tangentially, weirdness. The neural network doesn't think it's weird, however. The neural network doesn't think anything, at least not in the overly generous way in which we deploy that word.
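To underline the point, here is roughly what the iterate-and-amplify loop looks like once the goofy conversation is stripped away – a hedged sketch in PyTorch, where the choice of network, the layer cutoff, the step size and the step count are all arbitrary assumptions of mine rather than Google's actual recipe:

```python
import torch
from torchvision import models

# A pretrained classifier, truncated at an intermediate convolutional layer.
net = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in net.parameters():
    p.requires_grad_(False)

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from "static"

optimizer = torch.optim.Adam([img], lr=0.05)
for step in range(200):              # a human-chosen stop condition
    optimizer.zero_grad()
    activations = net(img)
    loss = -activations.norm()       # amplify whatever the layer already "sees"
    loss.backward()
    optimizer.step()                 # the nudged image is fed back in
```

There is no deliberation anywhere in that loop: the image is nudged, re-submitted, nudged again, until the counter runs out.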
So, echoing a deconstructionist approach, we would claim that the idea of ‘thinking' is really the problem. It is a sort of absent center, where we jam in all the unexamined assumptions that we need in order to keep the system intact. Once we really ask what we mean by ‘thinking', the whole idea of intelligence – whether our own human one or, even more so, another's – becomes strange and unwhole. So if we then try to avoid the word – and therefore the idea behind the word – ‘thinking' as ascribed to a computer program, then how ought we think about this? Because – sorry – we really don't have a choice but to think about it.
I believe that there are more accurate metaphors to be had, ones that rely on narrower views of our subjectivity, not the AI's. For example, there is the children's game of telephone, where a phrase is whispered from one ear to the next. Given enough iterations, what emerges is a garbled, nonsensical mangling of the original, but one that is hopefully still entertaining. But if it amuses, this is precisely because it remains within the realm of language. The last person does not recite a random string of alphanumeric characters. Rather, our drive to recognize patterns, also known as apophenia, yields something that can still be spoken. It is just weird enough, which is a fine balance indeed.
What did you hear? To me, it sounds obvious that a female voice is repeating "no way" to oblivion. But other listeners have variously reported window, welcome, love me, run away, no brain, rainbow, raincoat, bueno, nombre, when oh when, mango, window pane, Broadway, Reno, melting, or Rogaine.
This illustrates the way that our expectations shape our perception…. We are expecting to hear words, and so our mind morphs the ambiguous input into something more recognisable. The power of expectation might also underlie those embarrassing situations where you mishear a mumbled comment, or even explain the spirit voices that sometimes leap out of the static on ghost hunting programmes.
Even more radical are Steve Reich's tape loop pieces, which explore what happens when a sound gradually goes out of phase with itself. In fact, 2016 will be the 50th anniversary of "Come Out", one of the seminal explorations of this idea. While the initial phrase is easy to understand, as the gap in phase widens we struggle to maintain its legibility. Not long into the piece, the words are effectively erased, and we find ourselves swimming in waves of pure sound. Nevertheless, our mental apparatus still seeks to make some sort of sense of it all; it's just that the patterns don't hold together long enough for a specific interpretation to persist.
Of course, the list of contraptions meant to isolate and provoke our apophenic tendencies is substantial, and oftentimes touted as having therapeutic benefits. We slide into sensory deprivation tanks to gape at the universe within, and assemble mail-order DIY ‘brain machines' to ‘expand our brain's technical skills'. This is mostly bunk, but all are predicated on the idea that the brain will produce its own stimuli when external ones are absent, or if there is only a narrow band of stimulus available. In the end, what we experience here is not so much an epiphany as an apophany.
In effect, what Google's engineers have fabricated is an apophenic doomsday machine. It does one thing – search for patterns in the ways in which it knows how – and it does those things very, very well. A neural network trained to identify animals will not suddenly begin to find architectural features in a given input image. It will, if given the picture of a building façade, find all sorts of animals that, in its judgment, already lurk there. The networks are even capable of teasing out the images with which they are familiar if given a completely random picture – the graphic equivalent of static. These are perhaps the most compelling images of all. It's the equivalent of putting a neural network in an isolation tank. But is it? The slide into anthropomorphism is so effortless.
And although the Google blog post isn't clear on this, I suspect that there is also no clear point at which the network is ‘finished'. An intrinsic part of thinking is knowing when to stop, whereas iteration needs some sort of condition wrapped around the loop; otherwise it will never end. You don't tell a computer to just keep adding numbers, you tell it to add only the first 100 numbers you give it. Otherwise the damned thing won't stop. The engineers ran the iterations up until a certain point, and it doesn't really matter if that point was determined by a pre-existing test condition (eg, ‘10,000 iterations') or a snap aesthetic judgment (eg, ‘This is maximum weirdness!'). The fact is that human judgment is the wrapper around the process that creates these images. So if we consider that a fundamental feature of thinking is knowing when to stop doing so, then we find this trait lacking in this particular application of neural networks.
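In its barest form, that wrapper looks something like the sketch below, where amplify and weird_enough are hypothetical stand-ins for the network pass and the human judgment call; both stopping rules are supplied from outside the loop:

```python
MAX_STEPS = 10_000  # a pre-existing test condition

def dream(image, amplify, weird_enough):
    """Iterate until a human-supplied condition says stop."""
    for _ in range(MAX_STEPS):
        image = amplify(image)
        if weird_enough(image):  # or: a snap aesthetic judgment
            break
    return image
```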
In addition to knowing when to stop, there is another critical aspect of thinking as we know it, and that is forgetting. In ‘Funes el memorioso', Jorge Luis Borges speculated on the crippling consequences of a memory so perfect that nothing was ever lost. Among other things, the protagonist Funes can only live a life immersed in an ocean of detail, "incapable of general, platonic ideas". In order to make patterns, we have to privilege one thing over another, and dismiss vast quantities of sensory information as irrelevant, if not outright distracting or even harmful.
Interestingly enough, this relates to a theory concerning the nature of the schizophrenic mind (in a further nod to the deconstructionist tendency, I concede that the term ‘schizophrenia' is not unproblematic, but allow me the assumption). The ‘hyperlearning hypothesis' claims that schizophrenic symptoms can arise from a surfeit of dopamine in the brain. As a key neurotransmitter, dopamine plays a crucial role in memory formation:
When the brain is rewarded unexpectedly, dopamine surges, prompting the limbic "reward system" to take note in order to remember how to replicate the positive experience. In contrast, negative encounters deplete dopamine as a signal to avoid repeating them. This is a key learning mechanism which also involves memory formation and motivation. Scientists believe the brain establishes a new temporary neural network to process new stimuli. Each repetition of the same experience triggers the identical neural firing sequence along an identical neural journey, with every duplication strengthening the synaptic links among the neurons involved. Neuroscientists say, "Neurons that fire together wire together." If this occurs enough times, a secure neural network is established, as if imprinted, and the brain can reliably access the information over time.
The hyperlearning hypothesis posits that schizophrenics have too much dopamine in their brains, too much of the time. Take the process described above and multiply it by orders of magnitude. The result is a world that a schizophrenic cannot make sense of, because literally everything is important, or no one thing is less important than anything else. There is literally no end to thinking, no conditional wrapper to bring anything to a conclusion.
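A toy calculation makes the analogy concrete. The sketch below (plain NumPy, entirely illustrative and not drawn from any of the studies mentioned here) treats the learning rate as the dopamine stand-in: the same Hebbian step, once with the knob set normally and once cranked up to ‘hyperlearning' levels:

```python
import numpy as np

def hebbian_step(weights, pre, post, lr):
    """Fire together, wire together: strengthen links between co-active neurons."""
    return weights + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
post = np.array([0.0, 1.0])       # postsynaptic activity
weights = np.zeros((2, 3))

w_normal = hebbian_step(weights, pre, post, lr=0.01)  # ordinary consolidation
w_hyper = hebbian_step(weights, pre, post, lr=1.0)    # "too much dopamine":
                                                      # every coincidence is burned in
```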
Unsurprisingly, the artificial neural networks discussed above are modeled on precisely this process of reinforcement, except that the dopamine is replaced by an algorithmic stand-in. In 2011, Uli Grasemann and Risto Miikkulainen took the logical next step: they took a neural network called DISCERN and cranked up its virtual dopamine.
Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information – not as distinct units, but as statistical relationships of words, sentences, scripts and stories.
In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.
After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.
Even though I find this infinitely more terrifying than a neural net's ability to create a picture of a multi-headed dog-slug-squirrel, I still contend that there is no thinking going on, as we would like to imagine it. And we would very much like to imagine it: even the article cited above has as its headline ‘Scientists Afflict Computers with Schizophrenia to Better Understand the Human Brain'. It's almost as if schizophrenia is something you can pack into a syringe, virtual or otherwise, and inject into the neural network of your choice, virtual or otherwise. (The actual peer-reviewed article is more soberly titled ‘Using computational patients to evaluate illness mechanisms in schizophrenia'.) We would be much better off understanding these neural networks as tools that provide us with a snapshot of a particular and narrow process. They are no more anthropomorphic than the shapes that clouds may suggest to us on a summer's afternoon. But we seem incapable of keeping this in mind. If we cannot learn to restrain our relentless pattern-seeking, consider what awaits us on the other end of the spectrum: it is not coincidental that the term ‘apophenia' was coined in 1958 by Klaus Conrad in a monograph on the inception of schizophrenia.
Monday, May 25, 2015
The “Invisible Web” Undermines Health Information Privacy
by Jalees Rehman
"The goal of privacy is not to protect some stable self from erosion but to create boundaries where this self can emerge, mutate, and stabilize. What matters here is the framework— or the procedure— rather than the outcome or the substance. Limits and constraints, in other words, can be productive— even if the entire conceit of "the Internet" suggests otherwise.
Evgeny Morozov in "To Save Everything, Click Here: The Folly of Technological Solutionism"
We cherish privacy in health matters because our health has such a profound impact on how we interact with other humans. If you are diagnosed with an illness, it should be your right to decide when and with whom you share this piece of information. Perhaps you want to hold off on telling your loved ones because you are worried about how it might affect them. Maybe you do not want your employer to know about your diagnosis because it could get you fired. And if your bank finds out, it could deny you a mortgage loan. These and many other concerns have resulted in laws and regulations that protect our personal health information. Family members, employers and insurance companies have no access to your health data unless you specifically authorize it. Even healthcare providers from two different medical institutions cannot share your medical information unless they can document your consent.
The recent study "Privacy Implications of Health Information Seeking on the Web", conducted by Tim Libert at the Annenberg School for Communication (University of Pennsylvania), shows that we have a far more nonchalant attitude regarding health privacy when it comes to personal health information on the internet. Libert analyzed 80,142 health-related webpages that users might come across while performing online searches for common diseases. For example, if a user searches Google for information on HIV, the Centers for Disease Control and Prevention (CDC) webpage on HIV/AIDS (http://www.cdc.gov/hiv/) is one of the top hits, and users will likely click on it. The CDC page likely offers solid advice based on scientific results, but Libert was more interested in investigating whether visits to the CDC website were being tracked. He found that by visiting the CDC website, information about the visit is relayed to third-party corporate entities such as Google, Facebook and Twitter. The webpage contains "Share" and "Like" buttons, which is why the URL of the visited webpage (which contains the word "HIV") is passed on to them – even if the user does not explicitly click on the buttons.
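This leakage is easy to demonstrate for yourself. The sketch below is my own illustration, not Libert's actual methodology: it fetches a health-related page and lists every third-party host that the page tells the browser to contact; when the browser then requests those resources, each host receives the page's URL – disease keyword included – in the HTTP Referer header:

```python
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

page_url = "http://www.cdc.gov/hiv/"  # the disease name sits right in the URL
soup = BeautifulSoup(requests.get(page_url).text, "html.parser")

first_party = urlparse(page_url).hostname
third_parties = set()
for tag in soup.find_all(["script", "img", "iframe"], src=True):
    host = urlparse(urljoin(page_url, tag["src"])).hostname
    if host and host != first_party:
        third_parties.add(host)  # this host learns what page you are reading

print(sorted(third_parties))
```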
Libert found that 91% of health-related pages relay the URL to third parties, often unbeknownst to the user, and in 70% of the cases, the URL contains sensitive information such as "HIV" or "cancer" which is sufficient to tip off these third parties that you have been searching for information related to a specific disease. Most users probably do not know that they are being tracked, which is why Libert refers to this form of tracking as the "Invisible Web", one that can only be unveiled by analyzing the hidden HTTP requests between the servers. Here are some of the most common (invisible) partners which participate in the third-party exchanges:
[Table: the most common third-party entities, with the percentage of health-related pages that initiate requests to each.]
What do the third parties do with your data? We do not really know because the laws and regulations are rather fuzzy here. We do know that Google, Facebook and Twitter primarily make money by advertising so they could potentially use your info and customize the ads you see. Just because you visited a page on breast cancer does not mean that the "Invisible Web" knows your name and address but they do know that you have some interest in breast cancer. It would make financial sense to send breast cancer related ads your way: books about breast cancer, new herbal miracle cures for cancer or even ads by pharmaceutical companies. It would be illegal for your physician to pass on your diagnosis or inquiry about breast cancer to an advertiser without your consent but when it comes to the "Invisible Web" there is a continuous chatter going on in the background about your health interests without your knowledge.
Some users won't mind receiving targeted ads. "If I am interested in web pages related to breast cancer, I could benefit from a few book suggestions by Amazon," you might say. But we do not know what else the information is being used for. The appearance of the data broker Experian on the third-party request list should serve as a red flag. Experian's main source of revenue is not advertising but amassing personal data for reports, such as credit reports, which are then sold to clients. If Experian knows that you are checking out breast cancer pages, then you should not be surprised if this information ends up stored in some personal data file about you.
How do we contain this sharing of personal health information? One obvious approach is to demand accountability from the third parties regarding the fate of your browsing history. We need laws that regulate how the information can be used, whether it can be passed on to advertisers or data brokers, and how long it is stored. Today's privacy policies grant remarkably broad latitude instead; WebMD's, for example, states:
We may use information we collect about you to:
- Administer your account;
- Provide you with access to particular tools and services;
- Respond to your inquiries and send you administrative communications;
- Obtain your feedback on our sites and our offerings;
- Statistically analyze user behavior and activity;
- Provide you and people with similar demographic characteristics and interests with more relevant content and advertisements;
- Conduct research and measurement activities;
- Send you personalized emails or secure electronic messages pertaining to your health interests, including news, announcements, reminders and opportunities from WebMD; or
- Send you relevant offers and informational materials on behalf of our sponsors pertaining to your health interests.
Perhaps one of the most effective solutions would be to make the "Invisible Web" more visible. If health-related pages were mandated to disclose all third-party requests in real time, for example via pop-ups ("Information about your visit to this page is now being sent to Amazon"), and to ask for consent in each case, users would be far more aware of the threat to personal privacy posed by health-related pages. Health privacy and potential threats to it are routinely addressed in the real world, and there is no reason why this awareness should not be extended to online information.
Libert, Tim. "Privacy implications of health information seeking on the Web" Communications of the ACM, Vol. 58 No. 3, Pages 68-77, March 2015, doi: 10.1145/2658983 (PDF)
Monday, April 27, 2015
Murder Your Darling Hypotheses But Do Not Bury Them
by Jalees Rehman
"Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings."
Sir Arthur Quiller-Couch (1863–1944). On the Art of Writing. 1916
Murder your darlings. The British writer Sir Arthur Quiller-Couch shared this piece of writerly wisdom when he gave his inaugural lecture series at Cambridge, asking writers to consider deleting words, phrases or even paragraphs that are especially dear to them. The minute writers fall in love with what they write, they are bound to lose their objectivity and may not be able to judge how their choice of words will be perceived by the reader. But writers aren't the only ones who can fall prey to the Pygmalion syndrome. Scientists often find themselves in a similar situation when they develop "pet" or "darling" hypotheses.
How do scientists decide when it is time to murder their darling hypotheses? The simple answer is that scientists ought to give up scientific hypotheses once the experimental data is unable to support them, no matter how "darling" they are. However, the problem with scientific hypotheses is that they aren't just generated based on subjective whims. A scientific hypothesis is usually put forward after analyzing substantial amounts of experimental data. The better a hypothesis is at explaining the existing data, the more "darling" it becomes. Therefore, scientists are reluctant to discard a hypothesis because of just one piece of experimental data that contradicts it.
In addition to experimental data, a number of additional factors can also play a major role in determining whether scientists will discard or uphold their darling scientific hypotheses. Some scientific careers are built on specific scientific hypotheses which set certain scientists apart from competing rival groups. Research grants, which are essential to the survival of a scientific laboratory because they provide salary funds for the senior researchers as well as the junior trainees and research staff, are written in a hypothesis-focused manner, outlining experiments that will lead to the acceptance or rejection of selected scientific hypotheses. Well-written research grants always consider the possibility that the core hypothesis may be rejected based on the future experimental data. But if the hypothesis has to be rejected, then the scientist has to explain the discrepancies between the preferred hypothesis that is now falling into disrepute and all the preliminary data that had led her to formulate the initial hypothesis. Such discrepancies could endanger the renewal of the grant funding and the future of the laboratory. Last but not least, it is very difficult to publish a scholarly paper describing a rejected scientific hypothesis without providing an in-depth mechanistic explanation for why the hypothesis was wrong and proposing alternate hypotheses.
For example, it is quite reasonable for a cell biologist to formulate the hypothesis that protein A improves the survival of neurons by activating pathway X, based on prior scientific studies which have shown that protein A is an activator of pathway X in neurons and other studies which prove that pathway X improves cell survival in skin cells. If the data supports the hypothesis, publishing the result is fairly straightforward because it conforms to the general expectations. However, if the data does not support this hypothesis, then the scientist has to explain why. Is it because protein A did not activate pathway X in her experiments? Is it because pathway X functions differently in neurons than in skin cells? Is it because neurons and skin cells have different thresholds for survival? Experimental results that do not conform to the predictions have the potential to uncover exciting new scientific mechanisms, but chasing down these alternate explanations requires a lot of time and resources, which are becoming increasingly scarce. Therefore, it shouldn't come as a surprise that some scientists may consciously or subconsciously ignore selected pieces of experimental data which contradict their darling hypotheses.
Let us move from these hypothetical situations to the real world of laboratories. There is surprisingly little data on how and when scientists reject hypotheses, but John Fugelsang and Kevin Dunbar at Dartmouth conducted a rather unique study, "Theory and data interactions of the scientific mind: Evidence from the molecular and the cognitive laboratory" (2004), in which they researched researchers. They sat in on the meetings of three renowned molecular biology laboratories and carefully recorded how scientists presented their laboratory data and how they handled results which contradicted the predictions of their hypotheses and models.
In their final analysis, Fugelsang and Dunbar included 417 scientific results that were presented at the meetings, of which roughly half (223 out of 417) were not consistent with the predictions. Only 12% of these inconsistencies led to a change of the scientific model (and thus a revision of hypotheses). In the vast majority of cases, the laboratories decided to follow up by repeating and modifying the experimental protocols, thinking that the fault lay not with the hypotheses but with the manner in which the experiments were conducted. In the follow-up experiments, 84 of the inconsistent findings could be replicated, and this in turn resulted in a gradual modification of the underlying models and hypotheses in the majority of cases. However, even when the inconsistent results were replicated, only 61% of the models were revised, which means that 39% of the cases did not lead to any significant changes.
The study did not provide much information on the long-term fate of the hypotheses and models, and we obviously cannot generalize the results of three molecular biology laboratory meetings at one university to the whole scientific enterprise. Also, Fugelsang and Dunbar's study did not have a large enough sample size to clearly identify why some scientists were willing to revise their models and others weren't. Was it because of the varying complexity of the experiments and models? Was it because of the approach of the individuals who conducted the experiments or of the laboratory heads? I wish there were more studies like this, because they would help us understand the scientific process better and perhaps improve the quality of scientific research if we learned how different scientists handle inconsistent results.
In my own experience, I have also struggled with results which defied my scientific hypotheses. In 2002, we found that stem cells in human fat tissue could help grow new blood vessels. Yes, you could obtain fat from a liposuction performed by a plastic surgeon and inject these fat-derived stem cells into animal models of low blood flow in the legs. Within a week or two, the injected cells helped restore the blood flow to near normal levels! The simplest hypothesis was that the stem cells converted into endothelial cells, the cell type which forms the lining of blood vessels. However, after several months of experiments, I found no consistent evidence of fat-derived stem cells transforming into endothelial cells. We ended up publishing a paper which proposed an alternative explanation that the stem cells were releasing growth factors that helped grow blood vessels. But this explanation was not as satisfying as I had hoped. It did not account for the fact that the stem cells had aligned themselves alongside blood vessel structures and behaved like blood vessel cells.
Even though I "murdered" my darling hypothesis of fat –derived stem cells converting into blood vessel endothelial cells at the time, I did not "bury" the hypothesis. It kept ruminating in the back of my mind until roughly one decade later when we were again studying how stem cells were improving blood vessel growth. The difference was that this time, I had access to a live-imaging confocal laser microscope which allowed us to take images of cells labeled with red and green fluorescent dyes over long periods of time. Below, you can see a video of human bone marrow mesenchymal stem cells (labeled green) and human endothelial cells (labeled red) observed with the microscope overnight. The short movie compresses images obtained throughout the night and shows that the stem cells indeed do not convert into endothelial cells. Instead, they form a scaffold and guide the endothelial cells (red) by allowing them to move alongside the green scaffold and thus construct their network. This work was published in 2013 in the Journal of Molecular and Cellular Cardiology, roughly a decade after I had been forced to give up on the initial hypothesis. Back in 2002, I had assumed that the stem cells were turning into blood vessel endothelial cells because they aligned themselves in blood vessel like structures. I had never considered the possibility that they were scaffold for the endothelial cells.
This and other similar experiences have led me to reformulate the "murder your darlings" commandment to "murder your darling hypotheses but do not bury them". Instead of repeatedly trying to defend scientific hypotheses that cannot be supported by emerging experimental data, it is better to give up on them. But this does not mean that we should forget and bury those initial hypotheses. With newer technologies, resources or collaborations, we may find ways to explain inconsistent results that were not available to us years earlier. This is why I regularly peruse the cemetery of dead hypotheses on my hard drive, to see if there are ways of resurrecting them, not in their original form but in a modified form that I am now able to test.
Fugelsang, Jonathan A.; Stein, Courtney B.; Green, Adam E.; Dunbar, Kevin N. (2004) "Theory and data interactions of the scientific mind: Evidence from the molecular and the cognitive laboratory" Canadian Journal of Experimental Psychology 58(2): 86-95. http://dx.doi.org/10.1037/h0085799
Monday, March 30, 2015
STEM Education Promotes Critical Thinking and Creativity: A Response to Fareed Zakaria
by Jalees Rehman
All obsessions can be dangerous. When I read the title "Why America's obsession with STEM education is dangerous" of Fareed Zakaria's article in the Washington Post, I assumed that he would call for more balance in education. An exclusive focus on STEM (science, technology, engineering and mathematics) is unhealthy because students miss out on the valuable knowledge that the arts and humanities teach us. I would wholeheartedly agree with such a call for balance because I believe that a comprehensive education makes us better human beings. This is the reason why I encourage discussions about literature and philosophy in my scientific laboratory. To my surprise and dismay, Zakaria did not analyze the respective strengths of liberal arts education and STEM education. Instead, his article is laced with odd clichés and misrepresentations of STEM.
Misrepresentation #1: STEM teaches technical skills instead of critical thinking and creativity
"If Americans are united in any conviction these days, it is that we urgently need to shift the country's education toward the teaching of specific, technical skills. Every month, it seems, we hear about our children's bad test scores in math and science — and about new initiatives from companies, universities or foundations to expand STEM courses (science, technology, engineering and math) and deemphasize the humanities."
"The United States has led the world in economic dynamism, innovation and entrepreneurship thanks to exactly the kind of teaching we are now told to defenestrate. A broad general education helps foster critical thinking and creativity."
Zakaria is correct when he states that a broad education fosters creativity and critical thinking, but his article portrays STEM as being primarily focused on technical skills whereas liberal arts education focuses on critical thinking and creativity. This view is at odds with the goals of STEM education. As a scientist who mentors Ph.D. students in the life sciences and in engineering, my goal is to help our students become critical and creative thinkers.
Students learn technical skills such as how to culture cells in a dish, insert DNA into cells, use microscopes or quantify protein levels, but these technical skills are not the focus of the educational program. Learning a few technical skills is easy; the real goal is for students to learn how to develop innovative scientific hypotheses, design creative experiments to test those hypotheses, critique their own results and use logic to analyze their experiments.
My own teaching and mentoring experience focuses on STEM graduate students, but the STEM programs that I have attended at elementary and middle schools also emphasize teaching basic concepts and critical thinking instead of "technical skills". The United States needs to promote STEM education because of the prevailing science illiteracy in the country, not because it needs to train technically skilled worker bees. Here are some examples of science illiteracy in the US: Forty-two percent of Americans are creationists who believe that God created humans in their present form within the last 10,000 years or so. Fifty-two percent of Americans are unsure whether there is a link between vaccines and autism, and six percent are convinced that vaccines can cause autism, even though there is broad consensus among scientists from all over the world that vaccines do NOT cause autism. And only sixty-one percent are convinced that there is solid evidence for global warming.
A solid STEM education helps citizens apply critical thinking to distinguish quackery from true science, benefiting their own well-being as well as society.
Zakaria's criticism of obsessing about test scores is spot on. The subservience to test scores undermines the educational system because some teachers and school administrators may focus on teaching test-taking instead of critical thinking and creativity. But this applies to the arts and humanities as well as to the STEM fields, because language skills are also assessed by standardized tests. Just like the STEM fields, the arts and humanities have to find a balance between teaching required technical skills (e.g., grammar, punctuation, test-taking strategies, the technical ability to play an instrument) and the more challenging tasks of teaching students how to be critical and creative.
Misrepresentation #2: Japanese aren't creative
Zakaria's views on Japan are laced with racist clichés:
"Asian countries like Japan and South Korea have benefitted enormously from having skilled workforces. But technical chops are just one ingredient needed for innovation and economic success. America overcomes its disadvantage — a less-technically-trained workforce — with other advantages such as creativity, critical thinking and an optimistic outlook. A country like Japan, by contrast, can't do as much with its well-trained workers because it lacks many of the factors that produce continuous innovation."
Some of the most innovative scientific work in my own field of scientific research – stem cell biology – is carried out in Japan. Referring to Japanese as "well-trained workers" does not do justice to the innovation and creativity in the STEM fields and it also conveniently ignores Japanese contributions to the arts and humanities. I doubt that the US movie directors who have re-made Kurosawa movies or the literary critics who each year expect that Haruki Murakami will receive the Nobel Prize in Literature would agree with Zakaria.
Misrepresentation #3: STEM does not value good writing
Writing well, good study habits and clear thinking are important. But Zakaria seems to suggest that these are not necessarily part of a good math and science education:
"No matter how strong your math and science skills are, you still need to know how to learn, think and even write. Jeff Bezos, the founder of Amazon (and the owner of this newspaper), insists that his senior executives write memos, often as long as six printed pages, and begins senior-management meetings with a period of quiet time, sometimes as long as 30 minutes, while everyone reads the "narratives" to themselves and makes notes on them. In an interview with Fortune's Adam Lashinsky, Bezos said: "Full sentences are harder to write. They have verbs. The paragraphs have topic sentences. There is no way to write a six-page, narratively structured memo and not have clear thinking."
Communicating science is an essential part of science. Until scientific work is reviewed by other scientists and published as a paper it is not considered complete. There is a substantial amount of variability in the quality of writing among scientists. Some scientists are great at logically structuring their papers and conveying the core ideas whereas other scientific papers leave the reader in a state of utter confusion. What Jeff Bezos proposes for his employees is already common practice in the STEM world. In preparation for scientific meetings and discussions, scientists structure their ideas into outlines for manuscripts or grant proposals using proper paragraphs and sentences. Well-written scientific manuscripts are highly valued but the overall quality of writing in the STEM fields could be greatly improved. However, the same probably also holds true for people with a liberal arts education. Not every philosopher is a great writer. Decoding the human genome is a breeze when compared to decoding certain postmodern philosophical texts.
Misrepresentation #4: We should study the humanities and arts because Silicon Valley wants us to.
In support of his arguments for a stronger liberal arts education, Zakaria primarily quotes Silicon Valley celebrities such as Steve Jobs, Mark Zuckerberg and Jeff Bezos. The article suggests that a liberal arts education will increase entrepreneurship and protect American jobs. Are these the main reasons why we need to reinvigorate liberal arts education? The importance of a general, balanced education makes a lot of sense to me, but is increased job security a convincing argument for pursuing a liberal arts degree? Instead of a handful of anecdotal comments by Silicon Valley prophets, I would prefer to see some actual data that supports Zakaria's assertion. But perhaps I am being too STEMy.
There is a lot of room to improve STEM education. We have to make sure that we strive to focus on the essence of STEM which is critical thinking and creativity. We should also make a stronger effort to integrate arts and humanities into STEM education. In the same vein, it would be good to incorporate more STEM education into liberal arts education in order to combat scientific illiteracy. Instead of invoking "Two Cultures" scenarios and creating straw man arguments, educators of all fields need to collaborate in order to improve the overall quality of education.
Monday, March 23, 2015
You're on the Air!
by Carol A. Westbrook
The excitement of a live TV broadcast...a breaking news story...a presidential announcement...an appearance of the Beatles on Ed Sullivan. These words conjure up a time when all America would tune in to the same show, and families would gather round their TV set to watch it together.
This is not how we watch TV anymore. It is watched at different times and on different devices: on mobile phones, computers and tablets, from previously recorded shows on your DVR, or via streaming services such as Netflix and, soon, Apple. Live news can be viewed on the web, via cell phone apps, or as tweets. An increasing number of people are foregoing TV completely to get news and entertainment from other sources, with content that is never "on the air" (see the chart below, from the November 24, 2013 Business Insider). Many Americans don't even own a television set!
We take it for granted that we will have instant access to video content--whether digital or analog, on a television, cell phone or iPad. But video itself has its roots in television, a word that literally means "to view at a distance." The story of TV broadcasting is a fascinating one about technology development, entrepreneurship, engineering, and even space exploration. It is an American story, and it is a story worth telling.
At first, America was tuned in to radio. From the early 1920s through the 1940s, people would gather around their radios to listen to music and variety shows, serial dramas, news, and special announcements. Yet they dreamed of seeing moving pictures over the airwaves, like the ones in newsreels and movies. A series of technical breakthroughs was needed to make this happen.
The first important breakthrough was the invention in 1927 of a way to send and view moving images electronically--Farnsworth's "television." There followed a series of patent wars, but at the end of the day, we had television sets which could be used to view moving pictures transmitted over the airwaves. In 1939, RCA televised the opening of the New York World's Fair, including a speech by the first President to appear on TV, Franklin D. Roosevelt. There were few televisions to watch it on, though, until after the end of World War II, when America's demand for commercial television rapidly increased.
This led to the next big advance in television--network broadcasting. The big radio broadcast companies such as RCA (Radio Corporation of America) and CBS (Columbia Broadcasting System) naturally expanded into this medium, but their infrastructure was limited. The frequencies used for AM radio transmission, from 540 to 1600 kHz (kilohertz, or thousands of cycles per second), can travel long distances from their transmitting stations, but each channel is allocated only a thin slice of spectrum; in other words, it has a narrow bandwidth. Much higher frequencies, in the megahertz range (millions of cycles per second), are required for television, because the signal must carry the additional information needed for a picture as well as sound. As a result there was a scramble for higher frequencies, which was mediated by the FCC (Federal Communications Commission), the entity that regulates broadcasting. In 1948 the FCC allocated the higher frequency bands, designating which ones would be reserved for radio and which ones for television, and assigned channel numbers to the TV bands. The VHF television channels were designated 2 - 13. Channel 1 was reallocated to public and emergency communications, which explains why your TV starts with Channel 2! Several higher frequency bands, designated as UHF, were reserved for later TV use (eventually numbered as channels 14 to 83). The FCC also froze the number of station licenses at 108 in 1948.
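To get a feel for the bandwidth problem described above, here is a rough back-of-the-envelope sketch in Python; the channel widths (about 10 kHz per AM channel, about 6 MHz per analog NTSC television channel) are standard approximate figures I am assuming for illustration, not numbers from the article.

```python
# Rough spectrum comparison: AM radio channel vs. analog TV channel.
am_channel_hz = 10e3   # ~10 kHz allocated per AM radio channel
tv_channel_hz = 6e6    # ~6 MHz needed per analog (NTSC) TV channel

print(f"One TV channel occupies ~{tv_channel_hz / am_channel_hz:.0f}x "
      f"the spectrum of an AM radio channel")

# The entire AM broadcast band (540-1600 kHz) is only ~1.06 MHz wide,
# not even enough room for a single television channel:
am_band_hz = 1600e3 - 540e3
print(f"Whole AM band: {am_band_hz / 1e6:.2f} MHz vs. "
      f"one TV channel: {tv_channel_hz / 1e6:.0f} MHz")
```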
Because the number of broadcast stations was limited, TV was available only if you lived within range of a broadcast network, primarily CBS, NBC or ABC. In other words, if you lived in a large city--New York, Chicago, Washington, Philadelphia, Boston, Los Angeles, Seattle or Salt Lake City. Outside of these areas, you might have a chance if you lived on a hill, put up a very high antenna, and prayed for a thermal inversion or a charged ionosphere to propagate the distant signal to your television. My husband Rick, an electrical engineer and amateur radio buff, recounts that he watched the coronation of Queen Elizabeth in 1953 from his TV set in a small town in Pennsylvania, thanks to an environmental quirk (sunspots?), but everyone else had to wait for the films to cross the Atlantic and be shown on their local station.
Yet, for those of us who lived in a prime location, there was an ever-expanding number of programs to watch, such as the Texaco Star Theater, the Milton Berle Show, and a variety of news shows. Many of us grew up on Howdy Doody, or on shows created locally and televised live. I recall walking home from grade school for lunch as a child in Chicago, spending an hour watching "Lunchtime Little Theater" before returning to school to finish the afternoon's lessons! Many of these early shows have been lost, as they were never recorded--videotape had not yet been invented.
Television broadcasting eventually went nationwide, thanks to microwave transmission, which developed out of WWII radar. This technology was used to relay television broadcasts to local affiliate stations, which could then broadcast them on their regular channels in the local area. Microwaves use point-to-point transmission, from one microwave tower to the next, and microwave towers were constructed to span the continent. The FCC increased the number of television station licenses, and the broadcast companies truly became "networks." Finally, everyone could watch the same shows at the same time.
But TV was still limited geographically--it could not cross the ocean. This problem was not solved until the third important technology was developed: satellite broadcasting. Sputnik, the first space satellite, was launched in 1957. Five years later, on July 23, 1962, the first satellite-based transatlantic broadcast took place, using the Telstar satellite to relay TV signals from the US ground station in Andover, Maine, to receiving stations in Goonhilly Downs, England, and Pleumeur-Bodou, France.
It's fun to watch this broadcast, which was introduced by Walter Cronkite and began with a split screen showing the Statue of Liberty on the left and the Eiffel Tower on the right. The satellite transmission was followed by a live broadcast of an ongoing baseball game at Chicago's Wrigley Field between the Philadelphia Phillies and the Chicago Cubs, and also included live remarks from President Kennedy, as well as footage from Cape Canaveral, Florida, Seattle, and Canada. I've included a short clip of the Kennedy broadcast.
If you looked up at the night sky in 1962, you might have seen the Telstar satellite zoom across your backyard view. It took about 20 minutes to traverse the sky, passing overhead every 2.5 hours. Broadcast signals could be relayed to Telstar and back to ground stations on either side of the Atlantic only during this 20-minute transit window, so the tracking satellite dishes had to be fast-moving; they also had to be very large to capture such a weak signal. It is impressive to see the massive size of the dishes in these satellite ground stations, and to imagine how quickly they had to move to sweep the sky. This picture of Goonhilly Downs gives you an idea of their size.
Although Telstar demonstrated that satellite transmission was possible for long-range broadcasting, the equipment and precision needed to track a rapidly moving low-earth satellite were onerous. So NASA, working with its industry partners, launched the next generation of satellites, named "Syncom," into high earth orbit at just the right distance from the earth so that their orbital speed matched the speed of the earth's rotation. When orbiting directly above the equator, the Syncom satellites appeared to be stationary over a single geographic location. Thus, the geostationary (or geosynchronous) satellite was born.
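The phrase "just the right distance" can be made concrete with Kepler's third law: the geostationary orbit is the one whose period matches the earth's rotation. Here is a minimal sketch of that calculation; the physical constants are standard textbook values, not figures from the article.

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1           # one sidereal day in seconds (~23 h 56 min)
R_EARTH = 6.378e6     # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2*r^3/GM  =>  r = (GM*T^2/(4*pi^2))^(1/3)
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000

print(f"Geostationary altitude: ~{altitude_km:,.0f} km above the equator")
# ~35,786 km: park a satellite there, above the equator, and it appears
# fixed in the sky, so a receiving dish never has to move.
```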
Stationary satellites paved the way for a tremendous expansion in telecommunications, and they are still in widespread use. Satellites enabled the rise of cable TV networks such as HBO and CNN in the 1970s, which could broadcast without going through FCC-regulated television transmitting stations. Instead, their programming was sent via satellite to the cable service, and from there selected programs went by cable to the TVs of paying subscribers. These stations could also be accessed through a satellite TV subscription, such as Galaxy, which broadcast them directly to customers' satellite dishes. Because early satellites could only carry a limited number of cable channels, multiple satellites had to be accessed to provide the purchased programming. Moveable satellite dishes of about four to twelve feet in diameter were positioned in subscribers' yards or on their roofs. Satellite TV further expanded Americans' access to television, reaching rural communities that had limited (or no) cable service and poor antenna reception; it also provided special paid programming, such as sports events watched at bars. This picture shows a 10-foot moveable dish in my yard in Indiana.
Stationary TV dishes--such as DirecTV antennas--were not feasible until satellites were able to carry more programming, so that the dish could stay parked on a single geosynchronous satellite. The technical advance which allowed this was the development of digital video in the late 1990s. Digital video would eventually displace analog--remember when the DVD was introduced, rendering VCRs obsolete in just a few years' time? Each geosynchronous satellite could now carry many more simultaneous channels than before, since a digital channel takes up only a small fraction of the bandwidth of an analog signal. Digital signals also increased the capacity of traditional TV broadcast from ground towers, which eventually transitioned to the HDTV standard, broadcast on the high-capacity UHF frequencies. The transition was completed in June 2009, when the TV networks abandoned analog transmission on the old VHF channels, though many of the stations still carry the old numbers (2 - 13). TV viewers are surprised to learn that they can watch their favorite channels on newer HDTV sets using only a simple indoor antenna, and many are giving up their pricey cable services. Digital video signals were also ready for growth in other media, as they could be transmitted over the internet or by cell phone, and could be stored easily for re-broadcast.
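A rough bit of arithmetic shows why digital video multiplied channel capacity; the bit rates below are approximate industry figures I am assuming for illustration (roughly 19.4 Mbit/s of payload in one 6 MHz digital broadcast slot, and roughly 3-4 Mbit/s for a standard-definition MPEG-2 program), not numbers from the article.

```python
# Why one digital slot can carry several programs where analog carried one.
slot_mbps = 19.4         # ~usable payload of a 6 MHz digital (ATSC) slot
sd_program_mbps = 3.5    # ~typical MPEG-2 standard-definition stream

programs_per_slot = int(slot_mbps // sd_program_mbps)
print(f"~{programs_per_slot} SD programs fit where one analog channel used to be")
# The same arithmetic, applied to satellite transponders, is what let a
# fixed dish stay parked on a single geostationary satellite.
```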
Yet one more step was needed before widespread internet and cellular-based video could occur, allowing us to watch television programs as we do now. This was not a technical advance but an economic one--the sharp drop in the price of computer memory, which happened around 2009. Prior to that, computers had far less memory and storage capacity. Perhaps you remember the agony of trying to watch a YouTube video in its early years? Or of waiting for your browser to load? Now we take it for granted that we can view digitized images, create them, share them, watch pre-recorded programs, and record on our TiVos from multiple sources. There seems to be no limit to the ways that we can enjoy television, truly viewing "pictures at a distance." It is a far cry from the early years of television that many of us still remember, when we all gathered around a small black-and-white screen with poor sound to watch John, Paul, George and Ringo sing "She Loves You." Now those were the days!
Thanks to my husband Rick Rikoski, for his patient and helpful explanations of the technology of television and its early development.
Monday, March 02, 2015
Does Thinking About God Increase Our Willingness to Make Risky Decisions?
by Jalees Rehman
There are at least two ways in which the topic of trust in God is broached in Friday sermons that I have attended in the United States. Some imams lament the decrease of trust in God in the age of modernity. Instead of trusting that God is looking out for the believers, modern-day Muslims believe that they can control their destiny on their own, without any Divine assistance. These imams see this lack of trust in God as a sign of weakening faith and an overall demise in piety. But in recent years, I have also heard an increasing number of sermons mentioning an important story from the Muslim tradition. In this story, Prophet Muhammad asked a Bedouin why he was leaving his camel untied, thus taking the risk that this valuable animal might wander off and disappear. When the Bedouin responded that he placed his trust in God, who would ensure that the animal stayed put, the Prophet told him that he still needed to first tie up his camel and then place his trust in God. Sermons referring to this story admonish their audience to avoid the trap of fatalism: trusting God does not obviate the need for rational and responsible action by each individual.
It is much easier for me to identify with the camel-tying camp, because I find it rather challenging to take risks based exclusively on trust in an inscrutable and minimally communicative entity. Both believers and non-believers take risks in personal matters such as finance or health. However, in my experience, many believers who make a risky financial decision, or who take a health risk by rejecting a medical treatment backed by strong scientific evidence, tend to invoke the name of God when explaining why they took the risk. There is a sense that God is there to back them up and provide some security if the risky decision leads to a detrimental outcome. It would therefore not be far-fetched to conclude that invoking the name of God may increase risk-taking behavior, especially in people with firm religious beliefs. Nevertheless, psychological research in the past decades has suggested the opposite: religiosity and reminders of God seem to be associated with a reduction in risk-taking behavior.
Daniella Kupor and her colleagues at Stanford University have recently published the paper "Anticipating Divine Protection? Reminders of God Can Increase Nonmoral Risk Taking", which takes a new look at the link between invoking the name of God and risky behaviors. The researchers hypothesized that reminders of God may have opposite effects on different types of risk-taking behavior. For example, risk-taking behavior that is deemed 'immoral', such as taking sexual risks or cheating, may be suppressed by invoking God, whereas non-moral risks, such as making risky investments or skydiving, might be increased because reminders of God provide a sense of security. According to Kupor and colleagues, it is important to classify the type of risky behavior in relation to how society perceives God's approval or disapproval of the behavior. The researchers conducted a variety of experiments to test this hypothesis using online study participants.
One of the experiments involved running ads on a social media network and then assessing how often users clicked on slightly different wordings of the ad texts. The researchers ran the ads 452,051 times on accounts registered to users over the age of 18 residing in the United States. The participants saw ads for either a non-moral risk-taking behavior (skydiving), a morally fraught risk-taking behavior (bribery) or a control behavior (playing video games), and each ad came in either a 'God version' or a standard version.
Here are the two versions of the skydiving ad (both versions had a picture of a person skydiving):
God knows what you are missing! Find skydiving near you. Click here, feel the thrill!
You don't know what you are missing! Find skydiving near you. Click here, feel the thrill!
The percentage of users who clicked on the skydiving ad in the ‘God version' was twice as high as in the group which saw the standard "You don't know what you are missing" phrasing! One explanation for the significantly higher ad success rate is that "God knows…." might have struck the ad viewers as being rather unusual and piqued their curiosity. Instead of this being a reflection of increased propensity to take risks, perhaps the viewers just wanted to find out what was meant by "God knows…". However, the response to the bribery ad suggests that it isn't just mere curiosity. These are the two versions of the bribery ad (both versions had an image of two hands exchanging money):
Learn How to Bribe!
God knows what you are missing! Learn how to bribe with little risk of getting caught!
Learn How to Bribe!
You don't know what you are missing! Learn how to bribe with little risk of getting caught!
In this case, the ‘God version' cut down the percentage of clicks to less than half of the standard version. The researchers concluded that invoking the name of God prevented the users from wanting to find out more about bribery because they consciously or subconsciously associated bribery with being immoral and rejected by God.
These findings are quite remarkable because they suggest that a single mention of the word 'God' in an ad can have opposite effects on two different types of risk-taking: the non-moral thrill of skydiving versus the immoral risk of bribery.
Clicking on an ad for a potentially risky behavior is not quite the same as actually engaging in that behavior. This is why the researchers also conducted a separate study in which participants were asked to answer a set of questions after viewing certain colors. Participants could choose between Option 1 (a short two-minute survey plus an additional 25 cents as a reward) and Option 2 (a four-minute survey with no additional financial incentive). The participants were also informed that Option 1 was more risky, with the following label:
Eye Hazard: Option 1 not for individuals under 18. The bright colors in this task may damage the retina and cornea in the eyes. In extreme cases it can also cause macular degeneration.
In reality, neither of the two options was damaging to the eyes of the participants but the participants did not know this. This set-up allowed the researchers to assess the likelihood of the participants taking the risk of potentially injurious light exposure to their eyes. To test the impact of God reminders, the researchers assigned the participants to read one of two texts, both of which were adapted from Wikipedia, before deciding on Option 1 or Option 2:
Text used for participants in the control group:
"In 2006, the International Astronomers' Union passed a resolution outlining three conditions for an object to be called a planet. First, the object must orbit the sun; second, the object must be a sphere; and third, it must have cleared the neighborhood around its orbit. Pluto does not meet the third condition, and is thus not a planet."
Text used for the participants in the ‘God reminder' group:
"God is often thought of as a supreme being. Theologians have described God as having many attributes, including omniscience (infinite knowledge), omnipotence (unlimited power), omnipresence (present everywhere), and omnibenevolence (perfect goodness). God has also been conceived as being incorporeal (immaterial), a personal being, and the "greatest conceivable existent."
As hypothesized by the researchers, a significantly higher proportion of participants chose the supposedly harmful Option 1 in the ‘God reminder' group (96%) than in the control group (84%). Reading a single paragraph about God's attributes was apparently sufficient to lull more participants into the risk of exposing their eyes to potential harm. The overall high percentage of participants choosing Option 1 even in the control condition is probably due to the fact that it offered a greater financial reward (although it seems a bit odd that participants were willing to sell out their retinas for a quarter, but maybe they did not really take the risk very seriously).
A limitation of the study is that it does not provide any information on whether the impact of mentioning God depended on the religious beliefs of the participants. Do 'God reminders' affect believers as well as atheists and agnostics, or do they only work in people who clearly identify with a religious tradition? Another limitation is that even though many of the observed differences between the 'God condition' and the control conditions were statistically significant, the actual differences in numbers were less impressive. For example, in the skydiving ad experiment, the click-through rate was about 0.03% for the standard ad and 0.06% in the 'God condition'. This is a doubling, but how meaningful is a doubling when the overall click rates are so low? Even the difference between the two groups who read the Wikipedia texts and chose Option 1 (96% vs. 84%) does not seem very impressive. However, one has to bear in mind that all of these interventions were very subtle – inserting a single mention of God into a social media ad, or asking participants to read a single paragraph about God.
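For readers wondering how such tiny click-through rates can still be statistically significant, here is a hedged sketch of a two-proportion z-test. The per-arm impression counts are invented for illustration (the article above reports only the 452,051 total impressions across all ad versions), so these numbers should not be read as the study's actual analysis:

```python
import math

n_god, n_std = 75_000, 75_000    # hypothetical impressions per ad version
p_god, p_std = 0.0006, 0.0003    # 0.06% vs. 0.03% click-through rates
clicks_god, clicks_std = p_god * n_god, p_std * n_std

# Two-proportion z-test using a pooled click rate:
p_pool = (clicks_god + clicks_std) / (n_god + n_std)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_god + 1 / n_std))
z = (p_god - p_std) / se
p_two_sided = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

print(f"z = {z:.2f}, p = {p_two_sided:.4f}")     # z ~ 2.7, p < 0.01
# Doubling a tiny rate is detectable at this scale, even though more than
# 99.9% of viewers clicked on neither version of the ad.
```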
People who live in societies suffused with religion, such as the United States or Pakistan, are continuously reminded of God, whether they glance at their banknotes, turn on the TV or take a pledge of allegiance in school. If the mere mention of God in an ad can sway some of us to take greater risks, what impact does the continuous barrage of God mentions have on our overall risk-taking behavior? Despite its limitations, the work by Kupor and colleagues provides a fascinating new insight into the link between reminders of God and risk-taking behavior. By demonstrating the need to replace blanket statements regarding the relationship between God, religiosity and risk-taking with a more subtle distinction between moral and non-moral risky behaviors, the researchers are paving the way for fascinating future studies on how religion and mentions of God influence human behavior and decision-making.
Kupor DM, Laurin K, Levav J. "Anticipating Divine Protection? Reminders of God Can Increase Nonmoral Risk Taking" Psychological Science (2015). doi: 10.1177/0956797614563108
Monday, February 02, 2015
Literature and Philosophy in the Laboratory Meeting
by Jalees Rehman
Research institutions in the life sciences engage in two types of regular scientific meet-ups: scientific seminars and lab meetings. The structure of scientific seminars is fairly standard. Speakers give Powerpoint presentations (typically 45 to 55 minutes long) which provide the necessary scientific background, summarize their group's recent published scientific work and then (hopefully) present newer, unpublished data. Lab meetings are a rather different affair. The purpose of a lab meeting is to share the scientific work-in-progress with one's peers within a research group and also to update the laboratory heads. Lab meetings are usually less formal than seminars, and all members of a research group are encouraged to critique the presented scientific data and work-in-progress. There is no need to provide much background information because the audience of peers is already well-acquainted with the subject and it is not uncommon to show raw, unprocessed data and images in order to solicit constructive criticism and guidance from lab members and mentors on how to interpret the data. This enables peer review in real-time, so that, hopefully, major errors and flaws can be averted and newer ideas incorporated into the ongoing experiments.
During the past two decades that I have actively participated in biological, psychological and medical research, I have observed very different styles of lab meetings. Some involve brief 5-10 minute updates from each group member; others develop a rotation system in which one lab member has to present the progress of their ongoing work in a seminar-like, polished format with publication-quality images. Some labs have two hour meetings twice a week, other labs meet only every two weeks for an hour. Some groups bring snacks or coffee to lab meetings, others spend a lot of time discussing logistics such as obtaining and sharing biological reagents or establishing timelines for submitting manuscripts and grants. During the first decade of my work as a researcher, I was a trainee and followed the format of whatever group I belonged to. During the past decade, I have been heading my own research group and it has become my responsibility to structure our lab meetings. I do not know which format works best, so I approach lab meetings like our experiments. Developing a good lab meeting structure is a work-in-progress which requires continuous exploration and testing of new approaches. During the current academic year, I decided to try out a new twist: incorporating literature and philosophy into the weekly lab meetings.
My research group studies stem cells and tissue engineering, cellular metabolism in cancer cells and stem cells and the inflammation of blood vessels. Most of our work focuses on identifying molecular and cellular pathways in cells, and we then test our findings in animal models. Over the years, I have noticed that the increasing complexity of the molecular and cellular signaling pathways and the technologies we employ makes it easy to forget the "big picture" of why we are even conducting the experiments. Determining whether protein A is required for phenomenon X and whether protein B is a necessary co-activator which acts in concert with protein A becomes such a central focus of our work that we may not always remember what it is that compels us to study phenomenon X in the first place. Some of our research has direct medical relevance, but at other times we primarily want to unravel the awe-inspiring complexity of cellular processes. But the question of whether our work is establishing a definitive cause-effect relationship or whether we are uncovering yet another mechanism within an intricate web of causes and effects sometimes falls by the wayside. When asked to explain the purpose or goals of our research, we have become so used to directing a laser pointer onto a slide of a cellular model that it becomes challenging to explain the nature of our work without visual aids.
This fall, I introduced a new component into our weekly lab meetings. After our usual round-up of new experimental data and progress, I suggested that each week one lab member should give a brief 15 minute overview about a book they had recently finished or were still reading. The overview was meant to be a "teaser" without spoilers, explaining why they had started reading the book, what they liked about it, and whether they would recommend it to others. One major condition was to speak about the book without any Powerpoint slides! But there weren't any major restrictions when it came to the book; it could be fiction or non-fiction and published in any language of the world (but ideally also available in an English translation). If lab members were interested and wanted to talk more about the book, then we would continue to discuss it, otherwise we would disband and return to our usual work. If nobody in my lab wanted to talk about a book then I would give an impromptu mini-talk (without Powerpoint) about a topic relating to the philosophy or culture of science. I use the term "culture of science" broadly to encompass topics such as the peer review process and post-publication peer review, the question of reproducibility of scientific findings, retractions of scientific papers, science communication and science policy – topics which have not been traditionally considered philosophy of science issues but still relate to the process of scientific discovery and the dissemination of scientific findings.
One member of our group introduced us to "For Whom the Bell Tolls" by Ernest Hemingway. He had recently lived in Spain as a postdoctoral research fellow and shared some of his own personal experiences of how his Spanish friends and colleagues talked about the Spanish Civil War. At another lab meeting, we heard about "Sycamore Row" by John Grisham, and the ensuing discussion revolved around race relations in Mississippi. I spoke about "A Tale for the Time Being" by Ruth Ozeki and the difficulties that the book's protagonist faced as an outsider when her family returned to Japan after living in Silicon Valley. I think the book which got nearly everyone in the group talking was "Far From the Tree: Parents, Children and the Search for Identity" by Andrew Solomon. The book describes how families grapple with profound physical or cognitive differences between parents and children. The PhD student who discussed the book focused on the "Deafness" chapter of this nearly 1000-page tome, but she also placed it in the broader context of parenting, love and the stigma of disability. We stayed in the conference room long after the planned 15 minutes, talking about being "disabled" or "differently abled" and the challenges that parents and children face.
On the weeks when nobody had a book to present, we used the time to touch on the cultural and philosophical aspects of science, such as Thomas Kuhn's concept of paradigm shifts in "The Structure of Scientific Revolutions", Karl Popper's principle of the falsifiability of scientific statements, the challenge of reproducibility of scientific results in stem cell biology and cancer research, or the emergence of PubPeer as a post-publication peer review website. Some of the lab members had heard of Kuhn's or Popper's ideas before, but by coupling them to a lab meeting, we were able to illustrate these ideas using our own work. A lot of 20th-century philosophy of science arose from ideas rooted in physics. When undergraduate or graduate students take courses on the philosophy of science, it isn't always easy for them to apply these abstract principles to their own lab work, especially if they pursue a research career in the life sciences. Thomas Kuhn saw Newtonian and Einsteinian theories as distinct paradigms, but what constitutes a paradigm shift in stem cell biology? Is the ability to generate induced pluripotent stem cells from mature adult cells a paradigm shift or "just" a technological advance?
It is difficult for me to know whether the members of my research group enjoy or benefit from these humanities blurbs at the end of our lab meetings. Perhaps they are just tolerating them as eccentricities of the management and maybe they will tire of them. I personally find these sessions valuable because I believe they help ground us in reality. They remind us that it is important to think and read outside of the box. As scientists, we all read numerous scientific articles every week just to stay up-to-date in our area(s) of expertise, but that does not exempt us from also thinking and reading about important issues facing society and the world we live in. I do not know whether discussing literature and philosophy makes us better scientists but I hope that it makes us better people.
Monday, January 05, 2015
Typical Dreams: A Comparison of Dreams Across Cultures
by Jalees Rehman
But I, being poor, have only my dreams;
I have spread my dreams under your feet;
Tread softly because you tread on my dreams.
William Butler Yeats – from "Aedh Wishes for the Cloths of Heaven"
Have you ever wondered how the content of your dreams differs from that of your friends? How about the dreams of people raised in different countries and cultures? It is not always easy to compare dreams of distinct individuals because the content of dreams depends on our personal experiences. This is why dream researchers have developed standardized dream questionnaires in which common thematic elements are grouped together. These questionnaires can be translated into various languages and used to survey and scientifically analyze the content of dreams. Open-ended questions about dreams might elicit free-form, subjective answers which are difficult to categorize and analyze. Therefore, standardized dream questionnaires ask study subjects "Have you ever dreamed of . . ." and provide research subjects with a list of defined dream themes such as being chased, flying or falling.
Dream researchers can also modify the questionnaires to include additional questions about the frequency or intensity of each dream theme and to specify the time frame that the study subjects should take into account. For example, instead of asking "Have you ever dreamed of…", one can prompt subjects to focus on the dreams of the last month or on their first memory of dreaming about a certain theme. Any such subjective assessment of one's dreams with a questionnaire has its pitfalls. We routinely forget most of our dreams, and we tend to remember the dreams that are the most vivid or frequent, as well as the dreams which we may have discussed with friends or written down in a journal. The answers to dream questionnaires may therefore be a reflection of our dream memory and not necessarily of the actual frequency or prevalence of certain dream themes. Furthermore, standardized dream questionnaires are ideal for research purposes but may not capture the complex and subjective nature of dreams. Despite these pitfalls, research studies using dream questionnaires provide a fascinating insight into the dream world of large groups of people and identify commonalities and differences in the thematic content of dreams across cultures.
The researcher Calvin Kai-Ching Yu from the Hong Kong Shue Yan University used a Chinese translation of a standardized dream questionnaire and surveyed 384 students at the University of Hong Kong (mostly psychology students; 69% female, 31% male; mean age 21). Here are the results:
Ten most prevalent dream themes in a sample of Chinese students according to Yu (2008):
- Schools, teachers, studying (95%)
- Being chased or pursued (92%)
- Falling (87%)
- Arriving too late, e.g., missing a train (81%)
- Failing an examination (79%)
- A person now alive as dead (75%)
- Trying again and again to do something (74%)
- Flying or soaring through the air (74%)
- Being frozen with fright (71%)
- Sexual experiences (70%)
The most prevalent theme was "Schools, teachers, studying". This means that 95% of the study subjects recalled having had dreams related to studying, school or teachers at some point in their lives, whereas only 70% of the subjects recalled dreams about sexual experiences. The subjects were also asked to rank the frequency of the dreams on a 5-point scale (0 = never, 1=seldom, 2= sometimes, 3= frequently, 4= very frequently). For the most part, the most prevalent dreams were also the most frequent ones. Not only did nearly every subject recall dreams about schools, teachers or studying, this theme also received an average frequency score of 2.3, indicating that for most individuals this was a recurrent dream theme – not a big surprise in university students. On the other hand, even though the majority of subjects (57%) recalled dreams of "being smothered, unable to breathe", its average frequency rating was low (0.9), indicating that this was a rare (but probably rather memorable) dream.
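The distinction between prevalence (did you ever have this dream?) and frequency (how often, on the 0-4 scale?) is easy to see in a minimal sketch; the sample responses below are invented for illustration, not data from Yu's study:

```python
# Hypothetical answers of ten subjects for one dream theme,
# on the study's scale: 0 = never ... 4 = very frequently.
responses = [0, 1, 1, 2, 3, 2, 0, 1, 4, 2]

# Prevalence: fraction of subjects who ever recalled the dream (score > 0).
prevalence = sum(r > 0 for r in responses) / len(responses)

# Mean frequency: averaged over ALL subjects, including the "nevers".
mean_frequency = sum(responses) / len(responses)

print(f"Prevalence: {prevalence:.0%}")            # 80%
print(f"Mean frequency: {mean_frequency:.1f}/4")  # 1.6
# A theme can thus be highly prevalent yet infrequent, like the
# "being smothered" dreams described above.
```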
How do the dreams of the Chinese students compare to their counterparts in other countries?
Michael Schredl and his colleagues used a similar questionnaire to study the dreams of German university students (nearly all psychology students; 85% female, 15% male; mean age 24) with the following results:
Ten most prevalent dream themes in a sample of German students according to Schredl and colleagues (2004):
- Schools, teachers, studying (89%)
- Being chased or pursued (89%)
- Sexual experiences (87%)
- Falling (74%)
- Arriving too late, e.g., missing a train (69%)
- A person now alive as dead (68%)
- Flying or soaring through the air (64%)
- Failing an examination (61%)
- Being on the verge of falling (57%)
- Being frozen with fright (56%)
There is a remarkable overlap in the top ten list of dream themes among Chinese and German students. Dreams about school and about being chased are the two most prevalent themes for Chinese and German students. One key difference is that dreams about sexual experiences are recalled more commonly among German students.
Tore Nielsen and his colleagues administered a dream questionnaire to students at three Canadian universities, thus obtaining data on an even larger study population (over 1,000 students).
Ten most prevalent dream themes in a sample of Canadian students according to Nielsen and colleagues (2003):
- Being chased or pursued (82%)
- Sexual experiences (77%)
- Falling (74%)
- Schools, teachers, studying (67%)
- Arriving too late, e.g., missing a train (60%)
- Being on the verge of falling (58%)
- Trying again and again to do something (54%)
- A person now alive as dead (54%)
- Flying or soaring through the air (48%)
- Vividly sensing . . . a presence in the room (48%)
It is interesting that dreams about school or studying were the most common theme among Chinese and German students but do not even make the top-three list among Canadian students. This finding is perhaps also mirrored in the result that dreams about failing exams are comparatively common in Chinese and German students, but are not found in the top-ten list among Canadian students.
At first glance, the dream content of German students seems to be a hybrid of the Chinese and Canadian students' dreams. Chinese and German students share a higher prevalence of academia-related dreams, whereas sexual dreams are among the most prevalent dreams for both Canadians and Germans. However, I did notice an interesting aberrancy. Chinese and Canadian students dream about "Trying again and again to do something" – a theme which is quite rare among German students. I have a simple explanation for this (possibly influenced by the fact that I am German): Germans get it right the first time, which is why they do not dream about repeatedly attempting the same task.
The strength of these three studies is that they used similar techniques to assess dream content and evaluated study subjects with very comparable backgrounds: Psychology students in their early twenties. This approach provides us with the unique opportunity to directly compare and contrast the dreams of people who were raised on three continents and immersed in distinct cultures and languages. However, this approach also comes with a major limitation. We cannot easily extrapolate these results to the general population. Dreams about studying and school may be common among students but they are probably rare among subjects who are currently holding a full-time job or are retired. University students are an easily accessible study population but they are not necessarily representative of the society they grow up in. Future studies which want to establish a more comprehensive cross-cultural comparison of dream content should probably attempt to enroll study subjects of varying ages, professions, educational and socio-economic backgrounds.
Despite these limitations, the currently available data on dream content across countries does suggest one important message: people all over the world have similar dreams.
Yu, Calvin Kai-Ching. "Typical dreams experienced by Chinese people." Dreaming 18.1 (2008): 1-10.
Nielsen, Tore A., et al. "The Typical Dreams of Canadian University Students." Dreaming 13.4 (2003): 211-235.
Schredl, Michael, et al. "Typical dreams: stability and gender differences." The Journal of Psychology 138.6 (2004): 485-494.
Monday, December 08, 2014
Heat not Wet: Climate Change Effects on Human Migration in Rural Pakistan
by Jalees Rehman
In the summer of 2010, over 20 million people were affected by the summer floods in Pakistan. Millions lost access to shelter and clean water, and became dependent on aid in the form of food, drinking water, tents, clothes and medical supplies in order to survive this humanitarian disaster. It is estimated that at least $1.5 billion to $2 billion were provided as aid by governments, NGOs, charity organizations and private individuals from all around the world, and helped contain the devastating impact on the people of Pakistan. These floods crippled a flailing country that continues to grapple with problems of widespread corruption, illiteracy and poverty.
The 2011 World Disaster Report (PDF) states:
In the summer of 2010, giant floods devastated parts of Pakistan, affecting more than 20 million people. The flooding started on 22 July in the province of Balochistan, next reaching Khyber Pakhtunkhwa and then flowing down to Punjab, the Pakistan ‘breadbasket'. The floods eventually reached Sindh, where planned evacuations by the government of Pakistan saved millions of people.
However, severe damage to habitat and infrastructure could not be avoided and, by 14 August, the World Bank estimated that crops worth US$ 1 billion had been destroyed, threatening to halve the country's growth (Batty and Shah, 2010). The floods submerged some 7 million hectares (17 million acres) of Pakistan's most fertile croplands – in a country where farming is key to the economy. The waters also killed more than 200,000 head of livestock and swept away large quantities of stored commodities that usually fed millions of people throughout the year.
The 2010 floods were among the worst that Pakistan has experienced in recent decades. Sadly, the country is prone to recurrent flooding which means that in any given year, Pakistani farmers hope and pray that the floods will not be as bad as those in 2010. It would be natural to assume that recurring flood disasters force Pakistani farmers to give up farming and migrate to the cities in order to make ends meet. But a recent study published in the journal Nature Climate Change by Valerie Mueller at the International Food Policy Research Institute has identified the actual driver of migration among rural Pakistanis: Heat.
Mueller and colleagues analyzed migration and weather patterns in rural Pakistan from 1991 to 2012 and found that flooding had a modest to insignificant effect on migration, whereas extreme heat was clearly associated with it. The researchers found that bouts of heat wiped out a third of the income derived from farming! In Pakistan, the average monthly rural household income is 20,000 rupees (roughly $200), which is barely enough to feed a typical household of 6 or 7 people. It is no wonder that when heat stress reduces crop yields and this low income drops by a third, farming becomes untenable and rural Pakistanis are forced to migrate and find alternate means to feed their families. Mueller and colleagues also identified the group that was most likely to migrate: rural farmers who did not own the land they were farming. Not owning land makes them more mobile, but compared to the land-owners, these farmers are far more vulnerable in terms of economic stability and food security when a heat wave hits. Migration may be the last resort for their continued survival.
It is predicted that the frequency and intensity of heat waves will increase during the next century. Research indicates that global warming is a major driver of heat waves, and an important recent study by Diego Miralles and colleagues published in Nature Geoscience has identified a key mechanism which leads to the formation of "mega heat waves". Dry soil and higher temperatures work as part of a vicious cycle, reinforcing each other, and the researchers found that drying soil is the critical component. During daytime, high temperatures dry out the soil. The dry soil traps the heat, creating layers of warm air that persist even at night, when there is no sunlight. On the subsequent day, the new heat generated by sunlight is added to the heat trapped by the dry soil, which creates an escalating feedback loop: the progressively drying soil becomes ever more effective at trapping heat. The result is a massive heat wave which can wipe out crops, lead to water scarcity and cause thousands of deaths.
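The self-reinforcing loop is easier to grasp with a toy simulation. To be clear, this is not the model from Miralles and colleagues; every coefficient below is invented purely to illustrate the qualitative feedback between drying soil and trapped heat:

```python
# Toy illustration of the soil-moisture/heat feedback (invented numbers).
temperature = 30.0   # daily peak temperature, degrees C
moisture = 1.0       # relative soil moisture: 1 = saturated, 0 = bone dry

for day in range(1, 8):
    # Drier soil retains a larger share of the day's heat overnight...
    trapped_heat = 2.0 * (1.0 - moisture)
    # ...so part of yesterday's excess warmth carries into today's peak.
    temperature = 30.0 + trapped_heat + 0.8 * (temperature - 30.0)
    # Higher temperatures dry the soil out a little further.
    moisture = max(0.0, moisture - 0.02 * (temperature - 28.0))
    print(f"Day {day}: peak {temperature:.1f} C, soil moisture {moisture:.2f}")

# Each day the soil is a bit drier and traps a bit more heat, so the
# peaks keep climbing: a minimal caricature of a "mega heat wave".
```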
The study by Mueller and colleagues provides important information on how climate change is having real-world effects on humans today. Climate change is a global problem, affecting humans all around the world, but its most severe and immediate impact will likely be borne by people in the developing world who are most vulnerable in terms of their food security. There is an obvious need to limit carbon emissions and thus curtail the progression of climate change. This necessary long-term approach to climate change has to be complemented by more immediate measures that help people cope with the detrimental effects of climate change by, for example, exploring ways to grow crops that are more heat resilient, and ensuring the food security of those who are acutely threatened by climate change.
As Mueller and colleagues point out, the floods in Pakistan have attracted significant international relief efforts whereas increasing temperatures and heat stress are not commonly perceived as existential threats, even though they can be just as devastating. Gradual increases in temperatures and heat waves are more insidious and less likely to be perceived as threats, whereas powerful images of floods destroying homes and personal narratives of flood survivors clearly identify floods as humanitarian disasters. The impacts of heat stress and climate change, on the other hand, are not so easily conveyed. Climate change is a complex scientific issue, relying on mathematical models that carry intrinsic uncertainties. As climate change progresses, weather patterns will become even more erratic, thus making it even more challenging to offer specific predictions.
Climate change research and the translation of this research into pragmatic precautionary measures also face an uphill battle because of the powerful influence of the climate change denial lobby. Climate change deniers take advantage of the scientific complexity of climate change, and attempt to paralyze humankind in terms of climate change action by exaggerating the scientific uncertainties. In fact, there is a clear scientific consensus among climate scientists that human-caused climate change is very real and is already destroying lives and ecosystems around the world.
Helping farmers adapt to climate change will require more than financial aid. It is important to communicate the impact of climate change and offer specific advice for how farmers may have to change their traditional agricultural practices. A recent commentary in Nature by Tom Macmillan and Tim Benton highlighted the importance of engaging farmers in agricultural and climate change research. Macmillan and Benton pointed out that at least 10 million farmers have taken part in farmer field schools across Asia, Africa and Latin America since 1989, which have helped them gain knowledge and adapt their practices accordingly.
Pakistan will hopefully soon engage in a much-needed land reform in order to address the social injustice and food insecurity that plague the country. Five percent of large landholders in Pakistan own 64% of the total farmland, whereas the 65% of farmers with small holdings own only 15% of the land. About 67% of rural households own no land. Women own only 3% of the land despite sharing in 70% of agricultural activities! Land reform will be just a first step in rectifying social injustice in Pakistan. Involving Pakistani farmers – men and women alike – in research and education about innovative agricultural practices in the face of climate change will help ensure their long-term survival.
Mueller, Valerie, Clark Gray, and Katrina Kosec. "Heat stress increases long-term human migration in rural Pakistan." Nature Climate Change 4, no. 3 (2014): 182-185.
Monday, December 01, 2014
Do I Look Fat in These Genes?
by Carol A. Westbrook
Are you pleasantly plump? Rubinesque? Chubby? Weight-challenged? Or, to state it bluntly, just plain fat? Have you spent a lifetime being nagged to stop eating, start exercising and lose some weight? Have you been accused of lack of willpower, laziness, watching too much TV, overeating and compulsive behavior? If you are among the 55% of Americans who are overweight, take heart. You now have an excuse: blame it on your genes.
It seems obvious that obesity runs in families; fat people have fat children, who produce fat grandchildren. Scientific studies as early as the 1980s suggested that there was more to it than merely being overfed by fat, over-eating parents; the work suggested that fat families may be that way because they have genes in common. Dr. Albert J. Stunkard, a pioneering researcher at the University of Pennsylvania who died this year, did much of this early work. Stunkard showed that the weight of adopted children was closer to that of their biological parents than of their adoptive parents. Another of his studies investigated twins, and found that identical twins--those who had the same genes--had very similar levels of obesity, whereas the similarity between non-identical twins was no greater than that between their non-twin siblings. It was pretty clear to scientists by this time that there was likely to be one or more genes that determined your level of obesity.
In spite of the compelling evidence, it has been difficult to identify the actual genes that cause us to be overweight. This is partly because lifestyle and environment influence our weight so strongly that they can obscure genetic effects, making it difficult to dissociate the two. But the main reason the fat gene has been so hard to find is that, unlike diseases such as familial ALS (Lou Gehrig's disease) that can be traced to a single gene, there is probably not just one gene for obesity. There seem to be many forms of obesity, determined by an as yet unknown number of genes, so finding an individual gene is like looking for a needle in a haystack.
Earlier this year, a group of researchers succeeded in identifying one of these genes by focusing on a single form of obesity and studying only a small number of families. Their studies, published in the New England Journal of Medicine, reported a gene mutation which was shared by all of the obese members of the families. The mutated gene, DYRK1B, seems to be involved in initiating the growth of fat cells, and in moderating the effects of insulin. The people in these families who carried the gene mutation all had abdominal obesity beginning in childhood, severe hypertension, type 2 diabetes, and high blood triglyceride levels. They had a type of obesity known as "metabolic syndrome."
Metabolic syndrome is recognized by doctors as a combination of symptoms, including large waist size, high triglycerides (lipids), low HDL ("good") cholesterol, high blood pressure, and high blood sugar. In order to meet the diagnosis of metabolic syndrome, you need to have any 3 of these 5 criteria. A person who has metabolic syndrome is five times as likely to develop diabetes, and twice as likely to develop heart disease, as someone who doesn't have it.
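The 3-of-5 rule is easy to see as a short sketch. The cutoff values below are commonly cited clinical thresholds, included here only as illustrative assumptions, not as diagnostic guidance.

# Minimal sketch of the "any 3 of 5" rule described above. The cutoffs
# are common clinical thresholds used purely for illustration.

def has_metabolic_syndrome(waist_cm, triglycerides_mg_dl, hdl_mg_dl,
                           systolic_bp, fasting_glucose_mg_dl):
    criteria = [
        waist_cm > 102,               # large waist (male cutoff, assumed)
        triglycerides_mg_dl >= 150,   # high triglycerides
        hdl_mg_dl < 40,               # low HDL ("good") cholesterol
        systolic_bp >= 130,           # high blood pressure
        fasting_glucose_mg_dl >= 100, # high fasting blood sugar
    ]
    return sum(criteria) >= 3  # any 3 of the 5 criteria meet the diagnosis

print(has_metabolic_syndrome(110, 180, 35, 120, 95))  # True: 3 criteria met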
Metabolic syndrome is not a rare condition; in fact, it has been estimated that as many as 47 million Americans have it, though usually not in as severe a form as the one carried by the families in the study above. Many more Americans may actually carry a mutation in the DYRK1B gene, or in a related gene, but have not developed the symptoms... yet.
What is perplexing is why obesity continues to be on the increase in the US, despite the fact that our genetics couldn't have changed that much over the last decade or two. Clearly there is more to being fat than carrying a fat gene. As we are all aware, you have to eat to become overweight. The fault is not in our stars, it is in our diets. And our diets have changed quite a bit over the last few decades.
What's wrong with our diets? That, of course, is one of the most important health questions of today. Our diets have changed a lot over the last few decades, starting with the movement in the mid 1970's to cut down the fat that we eat, mistakenly thinking that fat was the cause of high cholesterol and lipid problems. This led to the widespread substitution of calories from fat with calories from carbohydrates, particularly high fructose corn syrup and related additives. Nowhere have the substitutions been more dramatic than in fast foods and prepared foods. A high carbohydrate diet is a disaster for someone who is at risk of metabolic syndrome; it is the quickest way to get fat.
As the number of fat people increases, we are starting to see increases in diabetes, hypertension, and knee replacements. Obesity is linked to 1 in 5 deaths in our country. Finding more of the genes that cause people to be overweight will help to identify those at risk, so they can take steps to prevent it. And better yet, these gene mutations may provide targets for the creation of drugs to reverse the condition. The pharmaceutical industry is very interested in finding these genes: imagine if you could produce a pill that 50% of the entire population would have to take every day, for the rest of their lives, to prevent them from being fat!
Sadly, we do not have this pill to reverse metabolic syndrome, at least not at the present time. So, like many other diseases that are sensitive to the foods we eat--hypertension, diabetes, gluten sensitivity, and so on--the answer is still in controlling the diet.
But take heart. Now you can relax, forget the accusations and stop blaming yourself. Enjoy those Christmas cookies and holiday treats today. Your diet starts on January 1.
Monday, November 24, 2014
The continuing relevance of Immanuel Kant
by Emrys Westacott
Immanuel Kant (1724-1804) is widely touted as one of the greatest thinkers in the history of Western civilization. Yet few people other than academic philosophers read his works, and I imagine that only a minority of them have read in its entirety the Critique of Pure Reason, generally considered his magnum opus. Kantian scholarship flourishes, with specialized journals and Kant societies in several countries, but it is largely written by and for specialists interested in exploring subtleties and complexities in Kant's texts, unnoticed influences on his thought, and so on. Some of Kant's writing is notoriously difficult to penetrate, which is why we need scholars to interpret his texts for us, and also why, in two hundred years, he has never made it onto the New York Times best seller list. And some of the ideas that he considered central to his metaphysics–for instance, his views about space, time, substance, and causality–are widely held to have been superseded by modern physics.
So what is so great about Kant? How is his philosophy still relevant today? What makes his texts worth studying and his ideas worth pondering? These are questions that could occasion a big book. What follows is my brief two penn'th on Kant's contribution to modern ways of thinking. I am not suggesting that Kant was the first or the only thinker to put forward the ideas mentioned here, or that they exhaust what is valuable in his philosophy. My purpose is just to identify some of the central strains in his thought that remain remarkably pertinent to contemporary debates.
1. Kant recognized that in the wake of the scientific revolution, what we call "knowledge" needed to be reconceived. He held that we should restrict the concept of knowledge to scientific knowledge–that is, to claims that are, or could be, justified by scientific means.
2. He identified the hallmark of scientific knowledge as what can be verified by empirical observation (plus some philosophical claims about the framework within which such observations occur). Where this isn't possible, we don't have knowledge; we have, instead, either pseudo-science (e.g. astrology), or unrestrained speculation (e.g. religion).
3. He understood that both everyday life and scientific knowledge rest on, and are made orderly by, some very basic assumptions that aren't self-evident but can't be entirely justified by empirical observations. For instance, we assume that the physical world will conform to mathematical principles. Kant argues in the Critique of Pure Reason that our belief that every event has a cause is such an assumption; perhaps, also, our belief that effects follow necessarily from their causes; but many today reject his classification of such claims as "synthetic a priori." Regardless of whether one agrees with Kant's account of what these assumptions are, his justification of them is thoroughly modern since it is essentially pragmatic. They make science possible. More generally, they make the world knowable. Kant in fact argues that in their absence our experience from one moment to the next would not be the coherent and intelligible stream that it is.
4. Kant claims that nothing in our experience is just "given" to us in a pure form unadulterated by the way we think. Our cognitive apparatus is always both receptive and active. Variations on this theme have become commonplace in modern philosophy, psychology, anthropology, and linguistics. What we call "facts" or "data" are theory-laden or concept-laden. Hegel, Nietzsche, Sellars, and Kuhn are among those who have developed this insight. Some, like Hilary Putnam, take it further, arguing that so-called facts are value-laden since how we apply concepts like causality reflects our interests. As William James famously remarked, "the trail of the human serpent is over everything."
5. Kant never lost sight of the fact that while modern science is one of humanity's most impressive achievements, we are not just knowers: we are also agents who make choices and hold ourselves responsible for our actions. In addition, we have a peculiar capacity to be affected by beauty, and a strange inextinguishable sense of wonder about the world we find ourselves in. Feelings of awe, an appreciation of beauty, and an ability to make moral choices on the basis of rational deliberation do not constitute knowledge, but this doesn't mean they lack value. On the contrary. But a danger carried by the scientific understanding of the world is that its power and elegance may lead us to undervalue those things that don't count as science.
6. According to Kant, the very nature of science means that it is limited to certain kinds of understanding and explanation, and these will never satisfy us completely. For as he says in the first sentence of the Critique, human reason has this peculiarity: it is driven by its very nature to pose questions that it is incapable of answering. Now hardheaded types may dismiss out of hand as not worth asking any questions that don't admit of scientific answers. This, one imagines, is Mr. Spock's position, and possibly such an attitude will one day take over completely. But I suspect Kant is right on this matter for two reasons.
One reason is that in our search for explanations we find it hard to be content with brute contingency. If we ask, "Why did this happen?" we will not be satisfied with the answer, "It just did." If we ask, "Why are things this way?" we expect more than, "That's just the way things are." Yet however deep science penetrates into the origin of things or the nature of things, it never seems to eliminate that element of contingency, and it is hard to see how it ever can. Leibniz's question, "Why is there something rather than nothing?" will always be waiting.
A second reason, which I suspect is related to the first, is that some questions we pose probably can't be answered, yet we ask them anyway because they express an abiding sense of wonder, mystery, concern, gratitude or despair over the conditions of our existence. Why am I this particular subject of experience? Why am I alive now and not at some other time? What should I do with my life? Why do I love this person, and why is our love so important? Such thoughts may take the form of questions, but they are really expressions of amazement and perplexity. The feelings expressed fuel religion, poetry, music, and the other arts. They also often accompany experiences we think of as especially valuable or profound: for instance, being present at a birth or a death, feeling great love, witnessing heroism, or encountering overwhelming natural beauty.
Kant introduced the concept of the "thing in itself" to refer to reality as it is independent of our experience of it and unstructured by our cognitive constitution. The concept was harshly criticized in his own time and has been lambasted by generations of critics since. A standard objection to the notion is that Kant has no business positing it given his insistence that we can only know what lies within the limits of possible experience. But a more sympathetic reading is to see the concept of the "thing in itself" as a sort of placeholder in Kant's system; it both marks the limits of what we can know and expresses a sense of mystery that cannot be dissolved, the sense of mystery that underlies our unanswerable questions. Through both of these functions it serves to keep us humble.
7. Kant reflected more deeply than anyone before him on the growing conflict between the emerging scientific picture of the world (including its account of human nature) and the conventional, non-scientific notions that inform the way we think about the world and ourselves in everyday life. Some of these conflicts were resolved fairly easily. Copernicus challenged the common view that the sun moved while the earth was stationary. Accepting this new idea did mean displacing the earth from the center of the universe–a significant shift–but after some initial resistance the new model came to be generally accepted. The old way of thinking was seen to be understandable, given how things appear, but false.
Some conflicts, however, were more troubling. Most people in Kant's Europe were Christians. Christianity posits a God who created the world and dispenses cosmic justice. Yet this hypothesis has no place within science since it cannot be tested by scientific means. Kant, who had no truck with organized religion but seems to have had some sort of religious belief, settled this problem by restricting the scope of the contestants. Science tells us how things are in the spatio-temporal world we inhabit and experience, and what it tells us counts as knowledge. Religion speculates about what lies beyond this world. Such speculations produce articles of faith that may help people live better lives, and in this way they may be valuable. But they don't constitute knowledge. In Kant's famous formulation, he "found it necessary to deny knowledge in order to make room for faith." This solution to the conflict between science and religion is pretty much the one that has become generally accepted in the West, particularly among intellectuals. Religion is granted its own turf just so long as it doesn't encroach onto science's turf by claiming to offer knowledge. Inevitably, though, as science's stock has risen continuously since Kant's time, religion's stock has fallen, at least in the most modernized societies and among the intelligentsia. In these quarters God continues to die, urged on by Richard Dawkins and co.
But the conflict that really exercised Kant was between determinism, which was very much part of the new scientific picture, and our belief that we have free will. This troubled him more because he was much more concerned with morality than with religion. For him, religion is virtually a handmaiden to morality: faith can help people be good. But it is our capacity for acting morally–doing something simply because we think it is the right thing to do, regardless of our own interests–that ultimately gives our lives dignity and value. We only have this capacity, however, if we have free will. And determinism, which sees every event, including our choices and actions, as the predictable effect of prior causes or states of affairs, implies that free will is an illusion, just as the apparent motion of the sun turned out to be an illusion.
What to do? Kant does not try to find a place for free will within the scientific picture. He also rejects the approach favoured by Hume which involves redefining free will in a way that makes it compatible with determinism. Compatibilism in one form or another continues to be popular and is defended by eminent thinkers like Daniel Dennett, but Kant rejects it as a "wretched subterfuge." His way of dealing with the problem, as I see it, is to say that it can't be resolved. The opposition between the scientific picture and our self-conception as beings capable of radical autonomy simply won't go away.
Two centuries later the problem of free will remains one of those issues where the conflict between science and conventional everyday thinking is especially sharp. Much worthwhile work has been done on the problem, yet Kant's account of the dilemma seems to describe the present situation pretty well. On the one hand, we can't find a place for free will within the scientific description of a human being. On the other hand, we can't jettison the notion that we are ultimately responsible for some of our decisions. We assume this about ourselves and others every day in all our ordinary activities. Even the most hard-boiled determinists tend to assume, when they engage in debate, that they and their opponents have some degree of choice regarding what they believe, and that this choice can be influenced by reasons that don't operate in the same manner as physical causes. Kant pretty much tells us that we just have to live with this tension since we can neither prove we have free will nor live as if we don't.
Naturally, there are parts of Kant's philosophy that no longer seem especially relevant, and Kant, like everyone else, had his foibles, failings, and blind spots. But there is a tremendously impressive depth to his reflections on the problems that confront humanity with the onset of modernity. And there is also an extraordinary breadth to his thinking, for as a systematic philosopher he illuminates the connections between metaphysics, science, morality, art, religion, and everyday experience. Ultimately, what he offers goes well beyond the construction of arguments or the analysis of concepts: what he offers, to his own time and to ours, is a penetrating account of the human condition in the age of science.
Now that indeterminacy, as part of quantum theory, is included in the scientific picture, some philosophers have sought to defend the idea of free will as something that quantum indeterminacy makes possible. But this position does not enjoy wide support.
Monday, October 13, 2014
Moral Time: Does Our Internal Clock Influence Moral Judgments?
by Jalees Rehman
Does morality depend on the time of the day? The study "The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior" published in October of 2013 by Maryam Kouchaki and Isaac Smith suggested that people are more honest in the mornings, and that their ability to resist the temptation of lying and cheating wears off as the day progresses. In a series of experiments, Kouchaki and Smith found that moral awareness and self-control in their study subjects decreased in the late afternoon or early evening. The researchers also assessed the degree of "moral disengagement", i.e. the willingness to lie or cheat without feeling much personal remorse or responsibility, by asking the study subjects to respond to questions such as "Considering the ways people grossly misrepresent themselves, it's hardly a sin to inflate your own credentials a bit" or "People shouldn't be held accountable for doing questionable things when they were just doing what an authority figure told them to do" on a scale from 1 (strongly disagree) to 7 (strongly agree). Interestingly, the subjects who strongly disagreed with such statements were the most susceptible to the morning morality effect. They were quite honest in the mornings but significantly more likely to cheat in the afternoons. On the other hand, moral disengagers, i.e. subjects who did not think that inflating credentials or following questionable orders was a big deal, were just as likely to cheat in the morning as they were in the afternoons.
Understandably, the study caused quite a bit of ruckus and became one of the most widely discussed psychology research studies in 2013, covered widely by blogs and newspapers such as the Guardian "Keep the mornings honest, the afternoons for lying and cheating" or the German Süddeutsche Zeitung "Lügen erst nach 17 Uhr" (Lying starts at 5 pm). And the findings of the study also raised important questions: Should organizations and businesses take the time of day into account when assigning tasks to employees which require high levels of moral awareness? How can one prevent the "moral exhaustion" in the late afternoon and the concomitant rise in the willingness to cheat? Should the time of the day be factored into punishments for unethical behavior?
One question not addressed by Kouchaki and Smith was whether the propensity to become dishonest in the afternoons or evenings could be generalized to all subjects or whether the internal time in the subjects was also a factor. All humans have an internal body clock – the circadian clock – which runs with a period of approximately 24 hours. The circadian clock controls a wide variety of physical and mental functions such as our body temperature, the release of hormones or our levels of alertness. The internal clock can vary between individuals, but external cues such as sunlight or the social constraints of our society force our internal clocks to be synchronized to a pre-defined external time which may be quite distinct from what our internal clock would choose if it were to "run free". Free-running internal clocks of individuals can differ in terms of their period (for example 23.5 hours versus 24.4 hours) as well as the phases of when individuals would preferably engage in certain behaviors. Some people like to go to bed early, wake up at 5 am or 6 am on their own even without an alarm clock and they experience peak levels of alertness and energy before noon. In contrast to such "larks", there are "owls" among us who prefer to go to bed late at night, wake up at 11 am, experience their peak energy levels and alertness in the evening hours and like to stay up way past midnight.
It is not always easy to determine our "chronotype" – whether we are "larks", "owls" or some intermediate thereof – because our work day often imposes its demands on our internal clocks. Schools and employers have set up the typical workday in a manner which favors "larks", with work days usually starting around 7am – 9am. In 1976, the researchers Horne and Östberg developed a Morningness-Eveningness Questionnaire to investigate what time of the day individuals would prefer to wake up, work or take a test if it was entirely up to them. They found that roughly 40% of the people they surveyed had an evening chronotype!
If Kouchaki and Smith's finding that cheating and dishonesty increase in the late afternoon applies to both morning and evening chronotype folks, then the evening chronotypes ("owls") are in a bit of a pickle. Their peak performance and alertness times would overlap with their propensity to be dishonest. The researchers Brian Gunia, Christopher Barnes and Sunita Sah therefore decided to replicate the Kouchaki and Smith study with one major modification: They not only assessed the propensity to cheat at different times of the day, they also measured the chronotypes of the study participants. Their recent paper "The Morality of Larks and Owls: Unethical Behavior Depends on Chronotype as Well as Time of Day" confirms Kouchaki and Smith's finding that the time of day influences honesty, but the observed effects differ among chronotypes.
After assessing the chronotypes of 142 participants (72 women, 70 men; mean age 30 years), the researchers randomly assigned them to either a morning session (7:00 to 8:30 am) or an evening session (12:00 am to 1:30 am). The participants were asked to report the outcome of a die roll; the higher the reported number, the more raffle tickets they would receive for a large prize, which served as an incentive to inflate the outcome of the roll. Since a die roll is purely random, one would expect the reported averages of the die rolls to be similar across all groups if all participants were honest. Their findings: Morning people ("larks") tended to report higher die-roll numbers in the evening than in the morning – thus supporting the Kouchaki and Smith results – but evening people tended to report higher numbers in the morning than in the evening. This means that the morning morality effect and the idea of "moral exhaustion" towards the end of the day cannot be generalized to all. In fact, evening people ("owls") are more honest in the evenings.
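The statistical logic of the die-roll paradigm is easy to sketch: a fair die averages 3.5, so a group mean well above 3.5 betrays inflated reporting. The toy simulation below uses a made-up "inflation" model purely for illustration; it is not the authors' data or analysis.

# Sketch of the die-roll paradigm's logic: a fair die averages 3.5
# pips, so a group mean well above that signals inflated reports.
import random

def reported_mean(n_participants, inflate_prob):
    """Each participant rolls once; with probability inflate_prob they
    report one pip higher than they actually rolled (capped at 6)."""
    total = 0
    for _ in range(n_participants):
        roll = random.randint(1, 6)
        if random.random() < inflate_prob:
            roll = min(6, roll + 1)
        total += roll
    return total / n_participants

random.seed(42)
print("honest group:  ", reported_mean(10_000, 0.0))  # close to 3.5
print("cheating group:", reported_mean(10_000, 0.4))  # noticeably above 3.5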
Not so fast, say Kouchaki and Smith in a commentary published together with the new paper by Gunia and colleagues. They applaud the new study for taking the analysis of daytime effects on cheating one step further by considering the chronotypes of the participants, but they also point out some important limitations of the newer study. Gunia and colleagues only included morning and evening people in their analysis and excluded the participants who reported an intermediate chronotype, i.e. not quite early morning "larks" and not true "owls". This is a valid criticism because newer research on chronotypes by Till Roenneberg and his colleagues at the University of Munich has shown that there is a Gaussian distribution of chronotypes. Few of us are extreme larks or extreme owls; most of us lie on a continuum. Roenneberg's approach to measuring chronotypes looks at the actual hours of sleep we get and distinguishes between our behaviors on working days and weekends because the latter may provide a better insight into our endogenous clock, unencumbered by the demands of our work schedule. The second important limitation identified by Kouchaki and Smith is that Gunia and colleagues used 12 am to 1:30 am as the "evening condition". This may be the correct time to study the peak performance of extreme owls and selected night shift workers, but ascertaining cheating behavior at this hour is not necessarily relevant for the general workforce.
Neither the study by Kouchaki and Smith nor the new study by Gunia and colleagues provides us with a definitive answer as to how the external time of the day (the time according to the sun and our social environment) and the internal time (the time according to our internal circadian clock) affect moral decision-making. We need additional studies with larger sample sizes which include a broad range of participants with varying chronotypes as well as studies which assess moral decision-making not just at two time points but also include a range of time points (early morning, afternoon, late afternoon, evening, night, etc.). But the two studies have opened up a whole new area of research and their findings are quite relevant for the field of experimental philosophy, which uses psychological methods to study philosophical questions. If empirical studies are conducted with human subjects then researchers need to take into account the time of the day and the internal time and chronotype of the participants, as well as other physiological differences between individuals.
The exchange between Kouchaki & Smith and Gunia & colleagues also demonstrates the strength of rigorous psychological studies. Researcher group 1 makes a highly provocative assertion based on their data, researcher group 2 partially replicates it and qualifies it by introducing one new variable (chronotypes) and researcher group 1 then analyzes strengths and weaknesses of the newer study. This type of constructive criticism and dialogue is essential for high-quality research. Hopefully, future studies will be conducted to provide more insights into this question. By using the Roenneberg approach to assess chronotypes, one could potentially assess a whole continuum of chronotypes – both on working days and weekends – and also relate moral reasoning to the amount of sleep we get. Measurements of body temperature, hormone levels, brain imaging and other biological variables may provide further insight into how the time of day affects our moral reasoning.
Why is this type of research important? I think that realizing how dynamic moral judgment can be is a humbling experience. It is easy to condemn the behavior of others as "immoral", "unethical" or "dishonest" as if these are absolute pronouncements. Realizing that our own judgment of what is considered ethical or acceptable can vary because of our internal clock or the external time of the day reminds us to be less judgmental and more appreciative of the complex neurobiology and physiology which influence moral decision-making. If future studies confirm that the internal time (and possibly sleep deprivation) influences moral decision-making, then we need to carefully rethink whether the status quo of forcing people with diverse chronotypes into a compulsory 9-to-5 workday is acceptable. Few, if any, employers and schools have adapted their work schedules to accommodate chronotype diversity in human society. Understanding that individualized work schedules for people with diverse chronotypes may not only increase their overall performance but also increase their honesty might serve as another incentive for employers and schools to recognize the importance of chronotype diversity among individuals.
Brian C. Gunia, Christopher M. Barnes and Sunita Sah (2014) "The Morality of Larks and Owls: Unethical Behavior Depends on Chronotype as Well as Time of Day", Psychological Science (published online ahead of print on Oct 6, 2014).
Maryam Kouchaki and Isaac H. Smith (2014) "The Morning Morality Effect: The Influence of Time of Day on Unethical Behavior", Psychological Science 25(1): 95–102.
Till Roenneberg, Anna Wirz-Justice and Martha Merrow (2003) "Life between clocks: daily temporal patterns of human chronotypes", Journal of Biological Rhythms 18(1): 80-90.
Monday, September 15, 2014
A Rank River Ran Through It
It says something about a city, I suppose, when there is heated debate over who first labeled it a dirty place. The phrase “dear dirty Dublin”, used as a badge of defiant honor in Ireland’s capital to this day, is often erroneously attributed to James Joyce. Joyce used the term in Dubliners (1914), a series of linked short stories about that city and its denizens. But the phrase goes back at least to the early nineteenth century and the literary circle surrounding Irish novelist Sydney Owenson (Lady Morgan), who remains best known for her novel The Wild Irish Girl (1806), which extols the virtues of wild Irish landscapes, and the wild, though naturally dignified, princess who lived there. Compared to the fresh wilderness of the Irish West, Dublin would have seemed dirty indeed.
The city into which I was born more than a century later was still a rough and tumble place. It was also heavily polluted. This was Dublin of the 1970s.
My earliest memories of the city center come from trips I took to my father’s office in Marlborough St, just north of the River Liffey which bisects the city. My father would take an eccentric route into the city, the “back ways” as he would call them, which though not getting us to the destination as promptly as he advertised, had the benefit of bringing us on a short tour of the city and its more unkempt quarters.
My father’s cars themselves were masterpieces of dereliction. Purchased when they were already in an advanced stage of decay, he would nurse them aggressively till their often fairly prompt demise. One car that he was especially proud of, a Volkswagen Type III fastback, which had its engine to the rear, developed transmission problems and its clutch failed. His repair consisted of a cord dangling over his shoulder and crossing the back seat into the engine. A tug at a precisely timed moment would shift the gears. A shoe, attached to the end of the cord and resting on my father’s shoulder, aided the convenient operation of this system. That car, like most of the others in those less regulated times, was also a marvel of pollution generation, farting out clouds of blue-black exhaust which added to the billowy haze of leaded fumes issuing from the other disastrously maintained vehicles, all shuddering in and out of the city’s congested center at the beginning and end of each work day.
A route into the city that I especially liked took us west of the city center, and as we approached Christ Church Cathedral I would open the window to smell the roasting of the barley which emanated from the Guinness brewery in the Liberties region of the city, down by the Liffey. Very promptly I would wind up the window again as we crossed over the bridge, since the reek of that river was legendarily bad.
The Irish playwright Brendan Behan wrote in his memoir Confessions of an Irish Rebel (1965), “Somebody once said that ‘Joyce has made of this river the Ganges of the literary world,’ but sometimes the smell of the Ganges of the literary world is not all that literary.”
Historically, the River Liffey received raw sewage from the city and though a medical report from the 1880s concluded that the Liffey was not “directly injurious to the health of the inhabitants” — in the opinion of these doctors crowded living and alcohol consumption were the main culprits — the report concluded nonetheless that the Liffey’s condition “is prejudicial to the interest of the city and the port of Dublin.” It was time to clear up the mess.
The smell of the Liffey, like that of other polluted waterways, came not just from the ingredients that spilled into it, but also from algae that bloomed on the excess nutrients that accompanied the solid waste and seeped into the water from the larger landscape. The death and sulfurous decay of those blooms contributed to those noisome aromas.
Despite the installation of a sewage system for the city in 1906, and its expansion in the 1940s and 1950s, the smell of the river remained ripe, as Brendan Behan attested. Even in the late 1970s the smell of the river persisted and was remarked upon in popular culture. The song “Summer in Dublin” by the band Bagatelle contains the lines, “I remember that summer in Dublin/And the Liffey it stank like hell.” It was a big hit in the summer of 1978.
So why did the smell persist? Part of the problem with the tenacity of the Liffey’s pollution, and its associated odors, is that the river is a tidal one. It ebbs and flows into polluted Dublin Bay into which raw sewage continued to be dumped long after the creation and expansion of municipal sewage treatment plants. The rancid smells of the River Liffey remained powerful as I was motored over it with my father in the 1970s.
On other occasions, this time with my mother, I would get to observe the streets of Dublin city at a leisurely pedestrian pace. She would take one of her six kids into the city on her Saturday morning shopping rounds and would walk the selected child into the ground. The footpaths of the city were strewn with litter — sweet wrappers, newspapers, paper bags, plastic bags, discarded fast-food, random scraps of paper, cigarette butts — dog feces dappled the curbs, vomit pooled in doorways, the narrow streets were car-congested, and at evening-time, snug on the smoke-belching bus trundling home, I’d watch the sun sinking, gloriously crimson, hazily defined, leaving behind the bituminously smoky atmosphere of Dublin for another day.
It seemed like there was no end in sight to Dublin’s pollution problem, but clearly the situation could not have been left to go on forever. And even if a nineteenth century medical commission was not impressed that Dublin’s environmental pollution, from the river at least, posed a grievous problem, the ubiquitous squalor of the city was nonetheless not conducive to the good health of Dublin’s citizens. The stench of the river, the garbage in the streets, the smog of the city had to be remediated. As one Reuters report from the autumn of 1988 put it: “A thick pall of smoke from thousands of coal fires has become trapped over Dublin in freezing, wind-free weather, leaving a million coughing Dubliners to face streets at midday so gloomy it looks as if night had already fallen.” The links between high levels of smog and increased death rates concerned the medical community and a spokesperson from a major Dublin hospital reported that "Even patients without respiratory complaints have been complaining about throat irritation and coughing." (Toronto Star).
So change eventually came, some of it, admittedly, compelled by European legislation, a reasonable price for Ireland’s economic union with Europe. Acting on the Air Pollution Act, 1987, the capital city was declared a smokeless zone in 1990. It became illegal to sell or distribute bituminous coal, the smokiest kind, in all parts of Dublin city and its suburbs. By the early 1990s the city had lost the aroma of soot and the Dublin sunset lost some of its luster, but, in compensation, its air quality dramatically improved. The smoke in Dublin city dropped from 192 microgrammes per cubic metre of air in December 1989 to a mere 48 microgrammes the following December.
The River Liffey is generally less aromatic these days, though it is still very much a polluted urban river. Massive improvements, including the building of a new treatment plant near the harbor about ten years ago, have reduced raw sewage both in the river and in Dublin Bay. That being said, the levels of faecal coliform, that is, E. coli, associated with human waste, remain "disturbingly excessive" in some stretches of the River Liffey. There are heavy odors emanating from the new plant, an expensive problem that will need to be resolved.
I glanced down at the river this past summer while I was visiting home and saw that garbage still bobs up and down in the tidal waters, or clings to the algae at its bricked-up banks, before being inexorably tugged out to sea.
Follow me on Twitter @DublinSoil for 140 character updates on my columns. Links to previous 3QD columns here.
Builders and Blocks - Engineering Blood Vessels with Stem Cells
by Jalees Rehman
Back in 2001, when we first began studying how regenerative cells (stem cells or more mature progenitor cells) enhance blood vessel growth, our group as well as many of our colleagues focused on one specific type of blood vessel: arteries. Arteries are responsible for supplying oxygen to all organs and tissues of the body and arteries are more likely to develop gradual plaque build-up (atherosclerosis) than veins or networks of smaller blood vessels (capillaries). Once the amount of plaque in an artery reaches a critical threshold, the oxygenation of the supplied tissues and organs becomes compromised. In addition to this build-up of plaque and gradual decline of organ function, arterial plaques can rupture and cause severe sudden damage such as a heart attack. The conventional approach to treating arterial blockages in the heart was to either perform an open-heart bypass surgery in which blocked arteries were manually bypassed or to place a tube-like "stent" in the blocked artery to restore the oxygen supply. The hope was that injections of regenerative cells would ultimately replace the invasive procedures because the stem cells would convert into blood vessel cells, form healthy new arteries and naturally bypass the blockages in the existing arteries.
As is often the case in biomedical research, this initial approach turned out to be fraught with difficulties. The early animal studies were quite promising and the injected cells appeared to stimulate the growth of blood vessels, but the first clinical trials were less successful. It was very difficult to retain the injected cells in the desired arteries or tissues, and even harder to track the fate of the cells. Which stem cells should be injected? Where should they be injected? How many? Can one obtain enough stem cells from an individual patient so that one could use his or her own cells for the cell therapy? How does one guide the injected cells to the correct location, and then guide the cells to form functional blood vessel structures? Would the stem cells of a patient with chronic diseases such as diabetes or high blood pressure be suitable for therapies, or would such a patient have to rely on stem cells from healthier individuals and thus risk the complication of immune rejection?
The complexity of blood-vessel generation became increasingly apparent, both when studying the biology of stem cells and when designing and conducting clinical trials. A large clinical trial published in 2013 examined the impact of bone marrow cell injections in heart attack patients and concluded that these injections did not result in any sustained benefit for heart function. Other studies using injections of patients' own stem cells into their hearts had led to mild improvements in heart function, but none of these clinical studies came close to fulfilling the expectations of cardiovascular patients, physicians and researchers. The upside to these failed expectations was that they forced researchers in the field of cardiovascular regeneration to rethink their goals and approaches.
One major shift in my own field of interest – the generation of new blood vessels – was to reevaluate the validity of relying on injections of cells. How likely was it that millions of injected cells could organize themselves into functional blood vessels? Injections of cells were convenient for patients because they would not require the surgical implantation of blood vessels, but was this attempt to achieve a convenient therapy undermining its success? An increasing number of laboratories began studying the engineering of blood vessels in the lab by investigating the molecular cues which regulate the assembly of blood vessel networks, identifying molecular scaffolds which would retain stem cells and blood vessel cells and combining various regenerative cell types to build functional blood vessels. This second wave of regenerative vascular medicine is engineering blood vessels which will have to be surgically implanted into patients. This means that it will be much harder to get approval to conduct such invasive implantations in patients than the straightforward injections which were conducted in the first wave of studies, but most of us who have now moved towards a blood vessel engineering approach feel that there is a greater likelihood of long-term success even if it may take a decade or longer till we obtain our first definitive clinical results.
The second conceptual shift which has occurred in this field is the realization that blood vessel engineering is not only important for treating patients with blockages in their arteries. In fact, blood vessel engineering is critical for all forms of tissue and organ engineering. In the US, more than 120,000 people are awaiting an organ transplant but only a quarter of them will receive an organ in any given year. The number of people in need of a transplant will continue to grow but the supply of organs is limited and many patients will unfortunately die while waiting for an organ which they desperately need. The advances in stem cell biology have made it possible to envision creating organs or organoids (functional smaller parts of an organ) which could help alleviate the need for organs. One thing that most organs and tissues need is a network of tiny blood vessels that permeate the whole tissue: small capillary networks. For example, a liver built out of liver cells could never function without a network of tiny blood vessels which supply the liver cells with metabolites and oxygen. From an organ engineering point of view, microvessel engineering is just as important as the building of functional arteries.
In one of our recent projects, we engineered functional human blood vessels by combining bone marrow derived stem cells with endothelial cells (the cells which coat the inside of all blood vessels). It turns out that stem cells do not become endothelial cells but instead release a molecular signal – the protein SLIT3 – which instructs the endothelial cells to assemble into networks. Using a high resolution microscope, we watched this process in real-time over a course of 72 hours in the laboratory and could observe how the endothelial cells began lining up into tube-like structures in the presence of the bone marrow stem cells. The human endothelial cells were like building blocks, and the human bone marrow stem cells were the builders "overseeing" the construction. When we implanted the assembled blood vessel structures into mice, we could see that they were fully functional, allowing mouse blood to travel through them without leaking or causing any other major problems (see image, taken from reference 3).
I am sure that SLIT3 is just one of many molecular cues released by the stem cells to assemble functional networks, and there are many additional mechanisms which still need to be discovered. We still need to learn much more about which "builders" and which "building blocks" are best suited for each type of blood vessel that we want to construct. The fact that human fat tissue can serve as an important resource for obtaining adult stem cells ("builders") is quite encouraging, but we still know very little about the overall longevity of the engineered vessels, the best way to implant them into patients, and the key molecular and biomechanical mechanisms which will be required to engineer organs with functional blood vessels. It will be quite some time until the first fully engineered organs are implanted in humans, but the dizzying rate of progress suggests that we can be quite optimistic.
References and links:
1. An overview article in "The Scientist" which describes the importance of blood vessel engineering for organ engineering (open access – can be read free of charge):
J Rehman "Building Flesh and Blood", The Scientist (2014), 28(5):48-53
2. An unusual and abundant source of adult stem cells which promote the formation of blood vessels: Fat tissue obtained from individuals who undergo a liposuction! (open access – can be read free of charge)
J Rehman "The Power of Fat" Aeon Magazine (2014)
3. The study which describes how adult stem cells release a protein (SLIT3) which organizes blood vessel cells into functional networks (open access – can be read free of charge):
J.D. Paul et al., "SLIT3-ROBO4 activation promotes vascular network formation in human engineered tissue and angiogenesis in vivo" J Mol Cell Cardiol (2013), 64:124-31.
Monday, August 18, 2014
The Psychology of Procrastination: How We Create Categories of the Future
by Jalees Rehman
"Do not put your work off till tomorrow and the day after; for a sluggish worker does not fill his barn, nor one who puts off his work: industry makes work go well, but a man who puts off work is always at hand-grips with ruin." Hesiod in "The Works and Days"
Paying bills, filling out forms, completing class assignments or submitting grant proposals – we all have the tendency to procrastinate. We may engage in trivial activities such as watching TV shows, playing video games or chatting for an hour and risk missing important deadlines by putting off tasks that are essential for our financial and professional security. Not all humans are equally prone to procrastination, and a recent study suggests that this may in part be due to the fact that the tendency to procrastinate has a genetic underpinning. Yet even an individual with a given genetic make-up can exhibit significant variability in the extent of procrastination. A person may sometimes delay initiating and completing tasks, whereas at other times that same person will immediately tackle the same type of tasks even under the same constraints of time and resources.
A fully rational approach to task completion would involve creating a priority list of tasks based on a composite score of task importance and the remaining time until the deadline. The most important task with the most proximate deadline would have to be tackled first, and the lowest priority task with the furthest deadline last. This sounds great in theory, but it is quite difficult to implement. A substantial amount of research has been conducted to understand how our moods, distractibility and impulsivity can undermine the best-laid plans for timely task initiation and completion. The recent research article "The Categorization of Time and Its Impact on Task Initiation" by the researchers Yanping Tu (University of Chicago) and Dilip Soman (University of Toronto) investigates a rather different and novel angle in the psychology of procrastination: our perception of the future.
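For illustration, such a scheduler takes only a few lines. The scoring formula below (importance divided by days remaining) is an arbitrary assumption of mine, not something proposed by Tu and Soman; the task list is likewise invented.

# A minimal sketch of the "fully rational" scheduler described above:
# score each task by importance weighted against time remaining,
# then work through the list in descending score order.
from datetime import date

tasks = [
    ("pay bills",            date(2014, 8, 25), 9),   # (name, deadline, importance 1-10)
    ("class assignment",     date(2014, 9, 10), 6),
    ("grant proposal draft", date(2014, 9, 1),  10),
]

def priority(task, today=date(2014, 8, 18)):
    name, deadline, importance = task
    days_left = max(1, (deadline - today).days)
    return importance / days_left  # important and imminent tasks float to the top

for task in sorted(tasks, key=priority, reverse=True):
    print(f"{priority(task):.2f}  {task[0]}")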
Tu and Soman hypothesized that one reason why we procrastinate is that we do not envision time as a linear, continuous entity but instead sort future deadlines into two categories: the imminent future and the distant future. A spatial analogy to this hypothesized construct is how we categorize distances. A city 400 kilometers away may be perceived as closer to us if it lies within our own state than a city which is physically nearer (say, only 300 kilometers away) but located in a different state. The categories "in my state" and "outside of my state" therefore interfere with the perception of the actual physical distance.
In an experiment to test their time category hypothesis, the researchers investigated the initiation of tasks by farmers in a rural community in India as part of a larger project aimed at helping farmers develop financial literacy and skills. The participants (n=295 male farmers) attended a financial literacy lecture. The farmers learned that they would receive a special financial incentive if they opened a bank account, completed the required paperwork and accumulated at least 5,000 rupees in the account within the next 6 months. The farmers were also told they could open an account with zero deposit and complete the paperwork immediately while a bank representative was present at the end of the lecture. Alternatively, they could open the bank account at any point in time later by going to the closest branch of the bank. These lectures were held in June 2010 as well as in July 2010. In both cases, the six-month deadline was explicitly stated as being in December 2010 (for the June lectures) and in January 2011 (for the July lectures). The researchers surmised that even though the farmers were given the same six-month period to open the account and save the money, the December 2010 deadline would be perceived as the imminent future or an extension of the present because it fell in the same calendar year (2010) as the lecture, whereas the January 2011 deadline would be perceived as a far-off date in the distant future because it would fall in the next calendar year.
The results of this experiment were quite astounding: 32% of the farmers with the December 2010 deadline immediately opened the bank account whereas only 8% of the farmers with the January 2011 deadline followed suit. The contrast was even starker when it came to actually completing the whole task and saving the required money. 28% of the farmers with the December 2010 deadlines succeeded whereas only 4% of the farmers with the January 2011 deadline were successful. Even though both groups were given the same timeframe to complete the task (exactly six months) the same-year group had a six-to-seven fold higher success rate!
To test whether their idea of time categorization into the "like-the-present" future and the distant future could be generalized, the researchers conducted additional studies with students at the University of Toronto and the University of Chicago. These experiments yielded similar results, but also revealed that the distinction between "like-the-present" and the distant future is not only tied to the end of the calendar year but can also occur at the end of the month. Participants who were asked in April to complete a task with a deadline on April 30th indicated a far greater willingness to initiate the task than those with a deadline of May 1st, presumably because the April group thought of the deadline being an extension of the present (the month of April).
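The hypothesized categorization itself can be expressed as a tiny rule, which may make the design of these experiments clearer. This is only my paraphrase of the idea, not code from the paper.

# The time-categorization hypothesis as a rule: a deadline feels
# "like the present" if it falls in the same calendar unit (year for
# long horizons, month for short ones) as today, regardless of the
# actual number of days away.
from datetime import date

def feels_imminent(deadline, today):
    same_year = deadline.year == today.year
    same_month = same_year and deadline.month == today.month
    return same_year if (deadline - today).days > 31 else same_month

# The farmers' six-month deadlines: equally far away, yet sorted
# into different categories by the year boundary.
print(feels_imminent(date(2010, 12, 15), today=date(2010, 6, 15)))  # True
print(feels_imminent(date(2011, 1, 15),  today=date(2010, 7, 15)))  # False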
One of the most interesting experiments in their set of studies was the investigation of whether one could tweak the temporal perception of a deadline by providing visual cues which link the future date to the present. Tu and Soman conducted the study on March 9, 2011 (a Wednesday) and told participants that the study was about judging actions. The text provided to the participants read,
"Any action can be described in many ways; however the appropriateness of these descriptions may largely depend on the occasion on which the action occurs. In today's study, we are interested in your judgment of the appropriateness of descriptions of several actions. Please pick the one that you think is most appropriate in the occasion that is given to you in this study."
The researchers then showed the participants a calendar of March 2011 and told them that all the given actions would occur on March 13, 2011 (a Sunday). But the participants were divided into two groups: half received a calendar in which the whole week was highlighted in one color, thus emphasizing that the Sunday deadline belonged to the same week (the "like-the-present" group). The control group received a standard calendar in which the weekends were colored differently from working days. The participants were provided with a list of 25 tasks and given two options for how they would describe each task. The two options reflected either a hands-on implementation approach or a more abstract approach. For example, for the task of "Caring for houseplants", they could choose between the hands-on option "Watering plants" and the more abstract option "Making the room look nice". Participants who saw the calendar in which the whole week (including Sunday) was depicted in the same color were significantly more likely to choose implementation options, suggesting that the visual cue was priming their minds to think in terms of already implementing the tasks.
The work by Tu and Soman makes a strong case for the idea that we think of the future in categories and that this has a major impact on whether we procrastinate or instead take charge and expediently initiate and complete tasks. However, the work does have some limitations, such as the fact that the researchers did not investigate whether the initial categorization is modified over time and whether specific reminders can help change the categorization. For example, if the farmers with the January 2011 deadline were approached again at the beginning of January 2011, would they then re-evaluate the "remote future" deadline and now consider it a "like-the-present" deadline that needs to be addressed immediately? Another limitation of the research article is that it does not explicitly describe the ethical review of the studies, such as whether the farmers in India knew that their data was being used for a behavioral research study and whether they provided informed consent.
This research provides fascinating insights into the science of procrastination and raises a number of important questions about how one should set deadlines. If the deadline is too far in the future, there is a much greater likelihood of thinking of it as a remote entity which may end up being ignored. If we want to ensure that tasks are initiated and completed in a timely manner, it may be important to emphasize the proximity of the deadline using visual cues (such as calendar colors) or by explicitly emphasizing its "like-the-present" nature, for example by stating "the deadline is in 30 days" instead of just mentioning a deadline date. The researchers did not study the impact of a countdown clock, but perhaps a countdown may be one way to help individuals build a cognitive bridge between the present and a looming deadline. Hopefully, government agencies, universities, corporations and other institutions which heavily rely on deadlines will pay attention to this research and re-evaluate how to convey deadlines in a manner which will reduce procrastination.
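As a toy illustration of that last suggestion, here is a minimal sketch (my own invention, not something from the paper) of how a reminder system might phrase a deadline as a countdown instead of a bare date:

```python
from datetime import date

def frame_deadline(deadline, today=None):
    """Phrase a deadline as a countdown ("in N days") rather than as a
    bare date, presenting it as an extension of the present."""
    today = today or date.today()
    days_left = (deadline - today).days
    return f"Deadline: {deadline:%B %d, %Y} (in {days_left} days)"

# A farmer attending the July 2010 lecture, facing a January 2011 deadline:
print(frame_deadline(date(2011, 1, 15), today=date(2010, 7, 15)))
# Deadline: January 15, 2011 (in 184 days)
```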
Yanping Tu and Dilip Soman (2014), "The Categorization of Time and Its Impact on Task Initiation," Journal of Consumer Research (published online August 13, 2014, ahead of print).
Monday, August 11, 2014
How to say "No" to your doctor: improving your health by decreasing your health care
by Carol A. Westbrook
Has your doctor ever said to you, "You have too many doctors and are taking too many pills. It's time to cut back on both"? No? Well I have. Maybe it's time you brought it up with your doctors, too.
Do you really need a dozen pills a day to keep you alive, feeling well, and happy? Can you even afford them? Is it possible that the combination of meds that you are taking is making you feel worse, not better? Are you using up all of your sick leave and vacation time to attend multiple doctors' visits? Are you paying way too much out of pocket for office visits and pharmacy co-pays, in spite of the fact that you have very good insurance? If this applies to you, then read on.
I am not referring to those of you with serious or chronic medical conditions, such as cancer, diabetes, and heart disease, who really do need those life-saving medicines and frequent clinic visits. I am referring to the average healthy adult, who has no major medical problems, yet is taking perhaps twice as many prescription drugs and seeing multiple doctors three to four times as often as he would have done ten or fifteen years ago. Is he any healthier for it?
There is no doubt that modern medical care has made a tremendous impact on keeping us healthy and alive. The average life expectancy has increased dramatically over the last half century, from about 67 years in 1950 to almost 78 years today, and those who live to age 65 can expect, on average, almost 18 additional years of life! Some of this is due to lifestyle changes, but most of the gain is due to advances in medical care, especially in two areas: cardiac disease and infectious diseases, most notably the treatment of AIDS. Cancer survival is just starting to make an impact as well. But how much additional longevity can we expect to gain by piling even more medical care on healthy individuals?
Too much health care can lower rather than improve your quality of life, and possibly even shorten it. For example, women who are given estrogens to relieve menopause symptoms have a significant risk of breast cancer. Blood pressure medicines can lead to unrecognized fatigue and depression; the same can be seen with sleeping pills, muscle relaxants, and anti-anxiety meds. Unnecessary X-rays or scans can lead to unneeded biopsies, which might result in serious complications. Even yearly PSA screening for prostate cancer can harm more men than it helps. Testosterone supplements can result in dangerously high blood counts. And of course, the money you spend on medications can be substantial, and the extra time you spend going to an office visit cuts into your leisure time and your income--directly impacting your quality of life.
How do you, the patient, break this cycle? First, you have to understand its cause. I'm sure you won't be surprised by my answer, which is "money." The "medical-industrial complex" operates on a fee-for-service business model, and the way to increase profits is to increase services.
In the not-too-distant past, a person would have one General Practitioner (GP) or Primary Care Physician (PCP) who oversaw his health care. The GP would triage emergencies, treat chronic conditions such as hypertension, anemia or diabetes, diagnose new conditions that needed intervention, and, when needed, refer the patient to a specialist for a visit or two. This was extremely efficient for the patient, and somewhat time-consuming for the physician who, of course, would be reimbursed for his time. But today, private insurance and CMS (the Centers for Medicare and Medicaid Services), the federal oversight agency, set limits on what can be charged for clinic visits by a GP vs. a specialist, set costs for procedures, limit the allowable length of a clinic visit, and determine what diagnoses will be covered and what won't. From an economic perspective, this payment system incentivizes multiple short doctor visits to specialists rather than one-stop shopping with a GP. The resultant fragmentation of health care leads to more treatment, more medication, and poor coordination of care (see "The Bystander Effect in Medical Care: Why do I have so many doctors not taking care of me?" May 20, 2013).
The paradigm has shifted from "one patient, one doctor, many diagnoses" to "one patient, many diagnoses, and a doctor for each diagnosis." And with each new doctor comes a new set of medications and many more return office visits, many of which are handled by mid-level providers, that is, nurse practitioners or physician assistants. Mid-level providers tend to perpetuate the status quo; they can speed a patient quickly through a routine clinic visit, but may not have the medical expertise to diagnose new problems, further increasing referrals to specialists. The latest innovation in health care, electronic medical records, further perpetuates medical inertia by including no-brainer "check boxes" for return clinic visits, automatic prescription renewals, and referrals to other specialists in the system.
How can you, the patient, ensure that you are getting only the amount of health care you need? It's not a good idea to stop medications on your own, and it can be intimidating to ask your doctor for advice on how to make do with less of him! But if you are serious about cutting back on health care, start with the following steps:
1. Be familiar with each medicine you are taking--its name, what it does, and what condition it is treating.
2. For each medication, do you still have the condition for which it was prescribed? If not, would the condition return if the medication were stopped? (Examples are hypertension, thyroid disease and diabetes). Was it prescribed for a short course of treatment that is completed, but no one bothered to discontinue the prescription? For example, if you were put on arthritis medication for a bad knee, and you subsequently had a knee replacement, the pain med should have been stopped.
3. Are you taking multiple medications for a single condition when perhaps one might suffice? Sometimes all that is needed is a dose adjustment. For example, getting the correct dose of a blood pressure medication might require many re-checks and frequent dose changes, and it is easier for a provider to simply add a second or third pill.
4. Are some of your medications expensive, or have high co-pays? For each class of drug (e.g. antibiotics, sleeping pills, acid-reducers, cholesterol medication) your insurance company has a preferred choice. See if your doctor can switch to that one instead. You might need to ask your pharmacist, or call the insurance company directly, to get their list, and then ask the prescribing doctor if it's appropriate and, if so, to change the prescription (and cancel the other one).
5. How many doctors do you see regularly? In particular, how many specialists are you seeing and how often? Find out the purpose of any return visits they schedule, and whether some of this can be done by phone or electronic messaging. Or better yet, can the follow-up be done by your PCP? Or has the problem been resolved, leaving you a victim of the "return to clinic" check box? You may have to make an extra visit to the specialist to get this information and end the relationship.
Once you get this information, here are some steps you can then take:
1. Discontinue as many medications as you can, or switch to acceptable, cheaper alternatives, with your doctor's assistance.
2. Review your personal list of prescribed medications, and compare it to the one in the medical record at your doctor's office. Remove all medications from the list that you are not actively taking, or that have already been discontinued, and make sure this is reflected in the medical record. And by all means, confirm that discontinued medications are not on auto-renewal at your pharmacy.
3. Cut down the number of doctors' visits, once you have determined which specialists you need to see, and which ones don't add anything to your health care.
4. Prioritize and simplify your ongoing medical care. Mid-level practitioners are great for maintenance of existing chronic conditions, but when a condition changes, or there is a new problem, insist on seeing the doctor instead. (Most of my inappropriate referrals come from mid-levels who are trying to solve a problem they don't have the training to solve.)
5. Ask your PCP to interpret and prioritize your visits to specialists, and for the specialist to discuss and coordinate your care with your PCP. If your PCP is not accessible or interested, consider finding another one.
6. Make use of electronic messaging, email, or phone calls when possible, to replace clinic visits.
7. Adopt lifestyle changes suggested by your doctor that might help you avoid taking additional medication, such as weight loss, exercise, smoking cessation, diet modification. If you go through with this, ask for feedback from your doctor, who should be willing to re-evaluate your meds and your health--after all, he suggested it.
Now let's turn the tables and see how difficult this can be for the doctor. When I see someone who is stuck in the web of medical inertia, I may say, "You have too many doctors and are taking too many pills. It's time to cut back on both." I am often met with resistance. Surprisingly, many people prefer to continue on the way they are. They don't want to hear that they don't need all these medications, or that their symptoms are due to depression or anxiety. They would rather take a pill than stop smoking, or lose weight.
For the rest, I do my best to help. I'm reluctant to stop medications started by another doctor; however, I can offer to help review medications and diagnoses. I can contact the doctor and see if the medication is necessary. I'll help to find cheaper alternatives when I can. As a rule, I don't renew medications that I didn't originally prescribe. For patients whose condition I am managing, I'll try to do a lot of my follow up by email or messaging, taking advantage of the electronic record. Every little bit helps.
Cutting back on medical care is a slow process on an individual level, and we physicians are just as frustrated as you are with the excesses in the system. The situation is not going to be improved by more insurance, but by reform of the entire system--which is unlikely to happen in my lifetime unless patients get involved and start demanding a change.
When I brought up this topic with friends, I was amazed to find how many had stories to tell about their personal experience with excessive health care. Do you, too, want to make a change? Please feel free to share your stories here. Maybe we can start to make a difference.
The opinions expressed here are my own, and do not reflect those of my employer, Geisinger Health Systems.
Monday, June 30, 2014
The Road to Bad Science Is Paved with Obedience and Secrecy
by Jalees Rehman
We often laud the intellectual diversity of a scientific research group because we hope that the multitude of opinions can help point out flaws and improve the quality of research long before it is finalized and written up as a manuscript. The recent events surrounding the research in one of the world's most famous stem cell research laboratories at Harvard show us the disastrous effects of suppressing diverse and dissenting opinions.
The infamous "Orlic paper" was a landmark research article published in the prestigious scientific journal Nature in 2001, which showed that stem cells contained in the bone marrow could be converted into functional heart cells. After a heart attack, injections of bone marrow cells reversed much of the heart attack damage by creating new heart cells and restoring heart function. It was called the "Orlic paper" because the first author of the paper was Donald Orlic, but the lead investigator of the study was Piero Anversa, a professor and highly respected scientist at New York Medical College.
Anversa had established himself as one of the world's leading experts on the survival and death of heart muscle cells in the 1980s and 1990s, but with the start of the new millennium, Anversa shifted his laboratory's focus towards the emerging field of stem cell biology and its role in cardiovascular regeneration. The Orlic paper was just one of several highly influential stem cell papers to come out of Anversa's lab in those years. A 2002 Anversa paper in the New England Journal of Medicine – the world's most highly cited academic journal – investigated the hearts of human organ transplant recipients. This study showed that up to 10% of the cells in the transplanted heart were derived from the recipient's own body. The only conceivable explanation was that after a patient received another person's heart, the recipient's own cells began maintaining the health of the transplanted organ. The Orlic paper had shown the regenerative power of bone marrow cells in mouse hearts, but this new paper now offered the more tantalizing suggestion that even human hearts could be regenerated by circulating stem cells in their blood stream.
A 2003 publication in Cell by the Anversa group described another ground-breaking discovery, identifying a reservoir of stem cells contained within the heart itself. This latest coup de force found that the newly uncovered heart stem cell population resembled the bone marrow stem cells because both groups of cells bore the same stem cell protein called c-kit and both were able to make new heart muscle cells. According to Anversa, c-kit cells extracted from a heart could be re-injected back into a heart after a heart attack and regenerate more than half of the damaged heart!
These Anversa papers revolutionized cardiovascular research. Prior to 2001, most cardiovascular researchers believed that the cell turnover in the adult mammalian heart was minimal because soon after birth, heart cells stopped dividing. Some organs or tissues such as the skin contained stem cells which could divide and continuously give rise to new cells as needed. When skin is scraped during a fall from a bike, it only takes a few days for new skin cells to coat the area of injury and heal the wound. Unfortunately, the heart was not one of those self-regenerating organs. The number of heart cells was thought to be more or less fixed in adults. If heart cells were damaged by a heart attack, then the affected area was replaced by rigid scar tissue, not new heart muscle cells. If the area of damage was large, then the heart's pump function was severely compromised and patients developed the chronic and ultimately fatal disease known as "heart failure".
Anversa's work challenged this dogma by putting forward a bold new theory: the adult heart was highly regenerative, and its regeneration was driven by c-kit stem cells, which could be isolated and used to treat injured hearts. All one had to do was harness the regenerative potential of c-kit cells in the bone marrow and the heart, and millions of patients all over the world suffering from heart failure might be cured. Not only did Anversa publish a slew of supportive papers in highly prestigious scientific journals to challenge the dogma of the quiescent heart, he also happened to publish them at a unique time in history, which maximized their impact.
In the year 2001, there were few innovative treatments available to treat patients with heart failure. The standard approach was to use medications that would delay the progression of heart failure. But even the best medications could not prevent the gradual decline of heart function. Organ transplants were a cure, but transplantable hearts were rare and only a small fraction of heart failure patients would be fortunate enough to receive a new heart. Hopes for a definitive heart failure cure were buoyed when researchers isolated human embryonic stem cells in 1998. This discovery paved the way for using highly pliable embryonic stem cells to create new heart muscle cells, which might one day be used to restore the heart's pump function without resorting to a heart transplant.
The dreams of using embryonic stem cells to regenerate human hearts were soon squashed when the Bush administration, citing ethical concerns, restricted federal funding in 2001 to already existing human embryonic stem cell lines, effectively blocking federally funded derivation of new ones. These federal regulations and the lobbying of religious and political groups against human embryonic stem cells were a major blow to research on cardiovascular regeneration. Amidst this looming hiatus in cardiovascular regeneration, Anversa's papers appeared and showed that one could steer clear of the ethical controversies surrounding embryonic stem cells by using an adult patient's own stem cells. The Anversa group re-energized the field of cardiovascular stem cell research and cleared the path for the first human stem cell treatments in heart disease.
Instead of having to wait for the US government to reverse its restrictive policy on human embryonic stem cells, one could now initiate clinical trials with adult stem cells, treating heart attack patients with their own cells and without having to worry about an ethical quagmire. Heart failure might soon become a disease of the past. The excitement at all major national and international cardiovascular conferences was palpable whenever the Anversa group, their collaborators or other scientists working on bone marrow and cardiac stem cells presented their dizzyingly successful results. Anversa received numerous accolades for his discoveries and research grants from the NIH (National Institutes of Health) to further develop his research program. He was so successful that some researchers believed Anversa might receive the Nobel Prize for his iconoclastic work which had redefined the regenerative potential of the heart. Many of the world's top universities were vying to recruit Anversa and his group, and he decided to relocate his research group to Harvard Medical School and Brigham and Women's Hospital in 2008.
There were naysayers and skeptics who had resisted the adult stem cell euphoria. Some researchers had spent decades studying the heart and found little to no evidence for regeneration in the adult heart. They were having difficulties reconciling their own results with those of the Anversa group. A number of practicing cardiologists who treated heart failure patients were also skeptical because they did not see the near-miraculous regenerative power of the heart in their patients. One Anversa paper went as far as suggesting that the whole heart would completely regenerate itself roughly every 8-9 years, a claim that was at odds with the clinical experience of practicing cardiologists. Other researchers pointed out serious flaws in the Anversa papers. For example, the 2002 paper on stem cells in human heart transplant patients claimed that the hearts were coated with the recipient's regenerative cells, including cells which contained the stem cell marker Sca-1. Within days of the paper's publication, many researchers were puzzled by this finding because Sca-1 was a marker of mouse and rat cells – not human cells! If Anversa's group was finding rat or mouse proteins in human hearts, it was most likely due to an artifact. And if they had mistakenly found rodent cells in human hearts, so these critics surmised, perhaps other aspects of Anversa's research were similarly flawed or riddled with artifacts.
At national and international meetings, one could observe heated debates between members of the Anversa camp and their critics. The critics then decided to change their tactics. Instead of just debating Anversa and commenting about errors in the Anversa papers, they invested substantial funds and efforts to replicate Anversa's findings. One of the most important and rigorous attempts to assess the validity of the Orlic paper was published in 2004, by the research teams of Chuck Murry and Loren Field. Murry and Field found no evidence of bone marrow cells converting into heart muscle cells. This was a major scientific blow to the burgeoning adult stem cell movement, but even this paper could not deter the bone marrow cell champions.
Despite the fact that its refutation was published in 2004, the Orlic paper continues to carry the dubious distinction of being one of the most cited papers in the history of stem cell research. At first, Anversa and his colleagues would shrug off their critics' findings or publish refutations of refutations – but over time, an increasing number of research groups all over the world began to realize that many of the central tenets of Anversa's work could not be replicated, and the number of critics and skeptics grew. As the signs of irreplicability and other concerns about Anversa's work mounted, Harvard and Brigham and Women's Hospital were forced to initiate an internal investigation, which resulted in the retraction of one Anversa paper and an expression of concern about another major paper. Finally, in May 2014, a research group published a paper using mice in which c-kit cells were genetically labeled so that their fate could be tracked; it found that c-kit cells make a minimal – if any – contribution to the formation of new heart cells: a fraction of a percent!
The skeptics who had doubted Anversa's claims all along may now feel vindicated, but this is not the time to gloat. Instead, the discipline of cardiovascular stem cell biology is now undergoing a process of soul-searching. How was it possible that some of the most widely read and cited papers were based on heavily flawed observations and assumptions? Why did it take a decade after the first refutation was published in 2004 for scientists to finally accept that the near-magical regenerative power of the heart was a pipe dream?
One reason for this lag time is pretty straightforward: it takes a tremendous amount of time to refute papers. Funding to conduct the experiments is difficult to obtain because grant funding agencies are not easily convinced to invest in studies replicating existing research. For a refutation to be accepted by the scientific community, it has to be at least as rigorous as the original, but in practice, refutations are subject to even greater scrutiny. Scientists trying to disprove another group's claim may be asked to develop even better research tools and technologies so that their results can be seen as more definitive than those of the original group. Instead of relying on antibodies to identify c-kit cells, the authors of the 2014 refutation developed a transgenic mouse in which all c-kit cells could be genetically traced to yield more definitive results – but developing new models and tools can take years.
The scientific peer review process by external researchers is a central pillar of the quality control process in modern scientific research, but one has to be cognizant of its limitations. Peer review of a scientific manuscript is routinely performed by experts for all the major academic journals which publish original scientific results. However, peer review only involves a "review", i.e. a general evaluation of major strengths and flaws, and peer reviewers do not see the original raw data nor are they provided with the resources to replicate the studies and confirm the veracity of the submitted results. Peer reviewers rely on the honor system, assuming that the scientists are submitting accurate representations of their data and that the data has been thoroughly scrutinized and critiqued by all the involved researchers before it is even submitted to a journal for publication. If peer reviewers were asked to actually wade through all the original data generated by the scientists and even perform confirmatory studies, then the peer review of every single manuscript could take years and one would have to find the money to pay for the replication or confirmation experiments conducted by peer reviewers. Publication of experiments would come to a grinding halt because thousands of manuscripts would be stuck in the purgatory of peer review. Relying on the integrity of the scientists submitting the data and their internal review processes may seem naïve, but it has always been the bedrock of scientific peer review. And it is precisely the internal review process which may have gone awry in the Anversa group.
Just as Pygmalion fell in love with Galatea, researchers fall in love with the hypotheses and theories that they have constructed. To minimize the effects of these personal biases, scientists regularly present their results to colleagues within their own groups at internal lab meetings and seminars, or at external institutions and conferences, long before they submit their data to a peer-reviewed journal. These preliminary presentations are intended to spark discussions, inviting the audience to challenge the veracity of the hypotheses and the data while the work is still in progress. Sometimes fellow group members are truly skeptical of the results; at other times they take on the devil's advocate role to see if they can find holes in their group's own research. The larger a group, the greater the chance of finding colleagues within it who hold dissenting views. This type of feedback is a necessary internal review process which provides valuable insights that can steer the direction of the research.
Considering the size of the Anversa group – consisting of 20, 30 or even more PhD students, postdoctoral fellows and senior scientists – it is puzzling why the discussions among the group members did not already internally challenge their hypotheses and findings, especially in light of the fact that they knew extramural scientists were having difficulties replicating the work.
Retraction Watch is one of the most widely read scientific watchdogs, tracking scientific misconduct and retractions of published scientific papers. Recently, Retraction Watch published the account of an anonymous whistleblower who had worked as a research fellow in Anversa's group and provided some unprecedented insights into the inner workings of the group, which explain why the internal review process had failed:
"I think that most scientists, perhaps with the exception of the most lucky or most dishonest, have personal experience with failure in science—experiments that are unreproducible, hypotheses that are fundamentally incorrect. Generally, we sigh, we alter hypotheses, we develop new methods, we move on. It is the data that should guide the science.
In the Anversa group, a model with much less intellectual flexibility was applied. The "Hypothesis" was that c-kit (cd117) positive cells in the heart (or bone marrow if you read their earlier studies) were cardiac progenitors that could: 1) repair a scarred heart post-myocardial infarction, and: 2) supply the cells necessary for cardiomyocyte turnover in the normal heart.
This central theme was that which supplied the lab with upwards of $50 million worth of public funding over a decade, a number which would be much higher if one considers collaborating labs that worked on related subjects.
In theory, this hypothesis would be elegant in its simplicity and amenable to testing in current model systems. In practice, all data that did not point to the "truth" of the hypothesis were considered wrong, and experiments which would definitively show if this hypothesis was incorrect were never performed (lineage tracing e.g.)."
Discarding data that might have challenged the central hypothesis appears to have been a central principle.
According to the whistleblower, Anversa's group did not just discard undesirable data, they actually punished group members who would question the group's hypotheses:
"In essence, to Dr. Anversa all investigators who questioned the hypothesis were "morons," a word he used frequently at lab meetings. For one within the group to dare question the central hypothesis, or the methods used to support it, was a quick ticket to dismissal from your position."
The group also created an environment of strict information hierarchy and secrecy which is antithetical to the spirit of science:
"The day to day operation of the lab was conducted under a severe information embargo. The lab had Piero Anversa at the head with group leaders Annarosa Leri, Jan Kajstura and Marcello Rota immediately supervising experimentation. Below that was a group of around 25 instructors, research fellows, graduate students and technicians. Information flowed one way, which was up, and conversation between working groups was generally discouraged and often forbidden.
Raw data left one's hands, went to the immediate superior (one of the three named above) and the next time it was seen would be in a manuscript or grant. What happened to that data in the intervening period is unclear.
A side effect of this information embargo was the limitation of the average worker to determine what was really going on in a research project. It would also effectively limit the ability of an average worker to make allegations regarding specific data/experiments, a requirement for a formal investigation."
This segregation of information is a powerful method of maintaining authoritarian rule and is more typical of terrorist cells or intelligence agencies than of a scientific lab, but it would definitely explain how the Anversa group was able to mass-produce numerous irreproducible papers without any major dissent from within the group.
In addition to the secrecy and segregation of information, the group also created an atmosphere of fear to ensure obedience:
"Although individually-tailored stated and unstated threats were present for lab members, the plight of many of us who were international fellows was especially harrowing. Many were technically and educationally underqualified compared to what might be considered average research fellows in the United States. Many also originated in Italy where Dr. Anversa continues to wield considerable influence over biomedical research.
This combination of being undesirable to many other labs should they leave their position due to lack of experience/training, dependent upon employment for U.S. visa status, and under constant threat of career suicide in your home country should you leave, was enough to make many people play along.
Even so, I witnessed several people question the findings during their time in the lab. These people and working groups were subsequently fired or resigned. I would like to note that this lab is not unique in this type of exploitative practice, but that does not make it ethically sound and certainly does not create an environment for creative, collaborative, or honest science."
Foreign researchers are particularly dependent on their employment to maintain their visa status and the prospect of being fired from one's job can be terrifying for anyone.
This is an anonymous account of a whistleblower and, as such, it is problematic. The use of anonymous sources in science journalism could open the doors for all sorts of unfounded and malicious accusations, which is why the ethics of using anonymous sources was heavily debated at the recent ScienceOnline conference. But the claims of the whistleblower are not made in a vacuum – they have to be evaluated in the context of known facts. The whistleblower's claim that the Anversa group and their collaborators received more than $50 million to study bone marrow cell and c-kit cell regeneration of the heart can be easily verified at the public NIH grant funding RePORTer website. The whistleblower's claim that many of the Anversa group's findings could not be replicated is also a verifiable fact. It may seem unfair to condemn Anversa and his group – for creating an atmosphere of secrecy and obedience which undermined the scientific enterprise, caused torment among trainees and wasted millions of dollars of taxpayer money – simply on the basis of one whistleblower's account. However, if one looks at the entire picture of the amazing rise and decline of the Anversa group's foray into cardiac regeneration, then the whistleblower's description of the atmosphere of secrecy and hierarchy seems very plausible.
Harvard's investigation into the Anversa group is not open to the public, and it is therefore difficult to know whether the university is primarily investigating scientific errors or whether it is also looking into claims of egregious scientific misconduct and abuse of scientific trainees. It is unlikely that Anversa's group is the only one that might have engaged in such forms of misconduct. Threatening dissenting junior researchers with a loss of employment or visa status may be far more common than we think. The gravity of the problem requires that the NIH – the major funding agency for biomedical research in the US – look into the prevalence of such practices in research labs and develop safeguards to prevent the abuse of science and scientists.
Monday, May 12, 2014
When are you past your prime?
by Emrys Westacott
Recently I had a discussion with a couple of old friends–all of us middle-aged guys–about when one's powers start to decline. God only knows why this topic came up, but it seems to have become a hardy perennial of late. My friends argued that in just about all areas, physical and mental, we basically peak in our twenties, and by the time we turn forty we're clearly on the rocky road to decrepitude.
I disagreed. I concede immediately that this is true of most, perhaps all, physical abilities: speed, strength, stamina, agility, hearing, eyesight, the ability to recover from injury, and so on. The decline after forty may be slight and slow, but it's a universal phenomenon. Of course, we can become fitter through exercise and the eschewing of bad habits, but any improvement here is made possible by our being out of shape in the first place.
What about mental abilities? Again, it's pretty obvious that some of these typically decline after forty: memory, processing speed, the ability to think laterally, perhaps. Here too, the decline may be very gradual, but these capacities clearly do not seem to improve in middle age. Still, I think my friends focus too much on certain kinds of ability and generalize too readily from these across the rest of what we do with our minds. More specifically, I suspect they view the cognitive capabilities that figure prominently in and are especially associated with mathematics and science as somehow the core of thinking in general. Because of this, and because these capacities are more abstract and can be exercised before a person has acquired a great deal of experience or knowledge, certain abilities have come to be identified with sharpness as such, and one's performance at tasks involving quick mental agility or analytic problem solving is taken as a measure of one's raw intellectual horsepower.
A belief in pure ability, disentangled from experiential knowledge, underlies notions like IQ. It has had a rather inglorious history, and it has been used at times to justify a distribution of educational resources favouring those who are already advantaged. Today it continues to interest those who prefer to see any assessments or evaluations expressed quantitatively wherever possible – a preference that also reflects the current cultural hegemony of science. Yet what matters to us, really, shouldn't be abilities in the abstract – how quickly we can calculate, or how successfully we can recall information – but what we actually do with these or any other abilities we possess. Is there any reason to suppose that we make better use of what we've got before we're forty?
The prevailing view has long been that in the sciences people do their most important, original and creative work early. Einstein reportedly said that "a person who has not made his great contribution to science by the age of thirty will never do so." But he would say that, wouldn't he? After all, he worked out the theory of special relativity when he was twenty-six. But Einstein was perhaps generalizing hastily from his own case. A recent study entitled "Age and Scientific Genius," published by the National Bureau of Economic Research, casts doubt on the prevailing view. After reviewing an extensive literature on the topic, the authors conclude:
In contrast to common perceptions, most great scientific contributions are not the product of precocious youngsters but rather come disproportionately in middle age. Moreover, perceptions that some fields, such as physics, feature systematically younger contributions than others do not stand up to empirical scrutiny.
Interestingly, the average age at which scientists produce their most important work is now several years older than it was in the early twentieth century when Einstein, Bohr, Heisenberg and co. were revolutionizing physics. One possible explanation of this is that at that time, because of the great paradigm shifts that had just taken place, young scientists didn't have to spend so much time learning about earlier theories that had been superseded. Today, however, the "burden of knowledge" that has to be assumed before one can expect to make an original contribution is greater.
But my main objection to my friends' claims about cognitive decline is not that they are wrong about the abilities central to scientific thinking, even if they are unduly pessimistic. After all, honesty obliges me to note that the same study of age and scientific genius cited above also makes this observation:
one of the salient features of Nobel Prize winners and great technological innovators over the 20th century is that, while contributions at young ages have become increasingly rare, the rate of decline in innovation potential later in life remains steep.
Sobering stuff if one happens to be, as the French say, d'un certain âge. No, in my view, the strongest objection to the claim that our mental powers peak in our twenties, or even in our thirties, is that in fields like literature, musical composition, and the visual arts, so many masterpieces are produced by people who are well past forty.
Now, as a philosopher I don't usually like to dirty my hands by doing empirical research, but in this case data is undeniably relevant. It's also interesting in its own right. Let's start with the visual arts. Since I don't claim any sort of expertise here, I took a shortcut and used as my representative sample the ten works that Guardian art critic Jonathan Jones considers "the greatest works of art ever." In two cases, the Chauvet cave paintings and the Parthenon sculptures, we can't say how old the artist was. But here are the other eight works, with the age of the artist when the work was completed given in brackets.
· Leonardo da Vinci, The Foetus in the Womb (c 58-61)
· Rembrandt, Self-Portrait with Two Circles (c 59-63)
· Jackson Pollock, One: Number 31 (38)
· Velázquez, Las Meninas (c 58)
· Picasso, Guernica (55)
· Michelangelo (c 44-57)
· Caravaggio, The Beheading of St John the Baptist (c 37)
· Cézanne, Mont Sainte-Victoire (painted 1902-4) (63-65)
Only two of these works were produced by artists under forty. And if Caravaggio and Pollock didn't produce too many more masterpieces after the ones mentioned here, it wasn't necessarily due to declining powers: Caravaggio died at thirty-nine, Pollock at forty-four.
How about classical composers? Here, I didn't find a convenient list of "ten greatest compositions ever," so I simply made my own list of ten celebrated works by composers who had lived well beyond forty (which excludes the likes of Mozart, Mendelssohn, Schubert, and Chopin) and would figure high up on anyone's list of "greatest classical composers." The selection isn't random; it's made with a point to prove in mind. But I think it does that rather effectively since there is widespread agreement that the works mentioned are among the greatest produced by the composer in question. Again, the age of the composer when the work was completed is given in brackets.
· Bach, Mass in B minor (64)
· Handel, Messiah (57)
· Haydn, The Creation (66)
· Beethoven, Ninth Symphony (54)
· Verdi, Otello (74)
· Wagner, Götterdämmerung (61)
· Tchaikovsky, Sixth Symphony (53)
· Dvorak, New World Symphony (52)
· Mahler, Das Lied von der Erde (48)
We might note in passing that several of these composers produced acclaimed masterpieces at an even later date (Verdi's Falstaff, for instance, was completed when he was seventy-nine), and in some cases, the only thing preventing them from doing so was that they dropped dead not long after finishing the work mentioned. Tchaikovsky died just nine days after conducting the first performance of his Sixth Symphony.
Literature tells a similar story. Many writers have produced what is widely regarded as their finest work long past the age of forty. Feeding, as Wittgenstein says we shouldn't, on a diet of one-sided examples, drawn exclusively, I admit, from the Western canon, I offer the following fifteen instances to support my general point. The number in brackets is the age of the author when the work was published or finished.
· Sophocles, Oedipus at Colonus (c. 90)
· Dante, The Divine Comedy (49-53)
· Chaucer, The Canterbury Tales (55)
· Cervantes, Don Quixote Part I (57), Part II (67)
· Defoe, Robinson Crusoe (59)
· Swift, Gulliver's Travels (59)
· Eliot, Daniel Deronda (57)
· Hugo, Les Misérables (60)
· Tolstoy, Anna Karenina (49)
· Dostoyevsky, The Brothers Karamazov (59)
· Hardy, Tess of the D'Urbervilles (51)
· James, The Wings of the Dove (59)
· Wharton, The Age of Innocence (58)
· Morrison, Beloved (56)
One could extend this list pretty much indefinitely, but there is no need to given the status of the works mentioned, many of which represent their creator's most acclaimed artistic achievement. Of course, there are many literary masterpieces written by authors younger than forty, but it is remarkable how often, in such cases, the writer died young, quite possibly with their best works still to come. Jane Austen died at forty-one; Emily Bronte at thirty; Anton Chekhov at forty-four; Franz Kafka at thirty-nine. To be sure, there are some who produce their best work in their twenties or thirties and never produce much of comparable quality afterwards despite a long life. Melville published Moby Dick when he was thirty-two; Wordsworth had written nearly all his best poetry by the time he was forty. But such cases, while not exceptional, are certainly not typical. Anyway, my point is not to deny that great art can be produced by young people; it is to argue that the many great works of art produced by people in middle age and beyond support the idea that some of our important cognitive abilities can continue to grow rather than decline during those years.
On the face of it, I would say the evidence presented here falsifies the thesis that we are cognitively declining once we're past thirty, or even forty. But how might someone who wishes to defend this claim respond? Well, they might argue that after forty all our basic cognitive functions are indeed declining, but we are good at finding ways to compensate for this, rather as a soccer player in his mid-thirties masks his lack of pace with more astute positional awareness. But then the question arises: why not count this sort of ability as an important function that improves as one ages? Or they might argue that what makes the great achievements of the mature years possible is the greater knowledge base – both of skills (know-how) and subject matter (know-that) – which long experience brings. To this one could respond in a similar manner: making good use of one's experience is another cognitive function that often improves with age. And if that seems a little abstract, even casuistic, one could point to other, more specific abilities that it is plausible to believe can continue to develop in middle age and that help to explain mature achievements like Paradise Lost or The Brothers Karamazov: for instance, the capacity for empathy, objectivity, self-awareness, and a synthetic grasp of complex wholes – all of them elements of what we call wisdom.
Another objection to my argument could be that the geniuses I cite are not representative of humanity in general. Perhaps one of the things that differentiates them from us ordinary mortals is precisely the fact that their cognitive decline kicks in unusually late, which enables them to put their growing wealth of experience to exceptionally good use. Against this idea, though, I would argue that the evidence against a general deterioration of all one's basic faculties could be culled just as well from people working in many fields: sports coaches, politicians, lawyers, musicians, film-makers…
Finally, anyone who thinks I've been criticizing a straw man can respond appropriately with a cheap ad hominem, pointing out that my thesis is patently self-serving, coming as it does from one who is much closer to sixty than to forty. In response, I would first remind the critic that the so-called straw men in question are good friends of mine and should not be treated so dismissively. And second, I will appeal to the authority of William James, who, in his famous essay "The Will to Believe," affirms that there are circumstances where "the desire for a certain kind of truth … brings about that special truth's existence."
Monday, April 28, 2014
Does Literary Fiction Challenge Racial Stereotypes?
by Jalees Rehman
A book is a mirror: if a fool looks in, do not expect an apostle to look out.
Georg Christoph Lichtenberg (1742-1799)
Reading literary fiction can be highly pleasurable, but does it also make you a better person? Conventional wisdom and intuition lead us to believe that reading can indeed improve us. However, as the philosopher Emrys Westacott has recently pointed out in his essay for 3Quarksdaily, we may overestimate the capacity of literary fiction to foster moral improvement. A slew of scientific studies have taken on the task of studying the impact of literary fiction on our emotions and thoughts. Some of the recent research has centered on the question of whether literary fiction can increase empathy. In 2013, Bal and Veltkamp published a paper in the journal PLOS One showing that subjects who read excerpts from literary texts scored higher on an empathy scale than those who had read a nonfiction text. This increase in empathy was predominantly found in the participants who felt "transported" (emotionally and cognitively involved) into the literary narrative. Another 2013 study published in the journal Science by Kidd and Castano suggested that reading literary fiction texts increased the ability to understand and relate to the thoughts and emotions of other humans when compared to reading either non-fiction or popular fiction texts.
Scientific assessments of how fiction affects empathy are fraught with difficulties and critics raise many legitimate questions. Do "empathy scales" used in psychology studies truly capture the psychological phenomenon of "empathy"? How long does the effect of reading literary fiction last and does it translate into meaningful shifts in behavior? How does one select appropriate literary fiction texts and control texts, and conduct such studies in a heterogeneous group of participants who probably have very diverse literary tastes? Kidd and Castano, for example, used an excerpt of The Tiger's Wife by Téa Obreht as a literary fiction text because the book was a finalist for the National Book Award, whereas an excerpt of Gone Girl by Gillian Flynn was used as a ‘popular fiction' text even though it was long-listed for the prestigious Women's Prize for Fiction.
The recent study "Changing Race Boundary Perception by Reading Narrative Fiction" led by the psychology researcher Dan Johnson from Washington and Lee University took a somewhat different approach. Instead of assessing global changes in empathy, Johnson and colleagues focused on a more specific question. Could the reading of a fictional narrative change the perception of racial stereotypes?
Johnson and his colleagues chose an excerpt from the novel "Saffron Dreams" by the Pakistani-American author Shaila Abdullah. In this novel, the protagonist is Arissa, a recently widowed pregnant Muslim woman whose husband Faizan was working in the World Trade Center on September 11, 2001 and was killed when the building collapsed. The excerpt from the novel provided to the participants in Johnson's research study describes a scene in which Arissa is traveling alone late at night and is attacked by a group of male teenagers. The teenagers mock her and threaten her with a knife because of her Muslim head-scarf (hijab), use racial and ethnic slurs and make references to the 9/11 attacks. The narrative excerpt does not specifically mention the word Caucasian, but one of the attackers is identified as blond and another has a swastika tattoo. They do not believe her when she tries to explain that she was also a victim of the 9/11 attacks and instead refer to her as belonging to a "race of murderers".
The researchers used a second text in their experiment: a synopsis of the literary excerpt from Saffron Dreams. This allowed Johnson and colleagues to distinguish the effects of the literary narrative style, with its inner monologue and description of emotions, from the effects of the content alone. Samples of the literary text and the synopsis used by the researchers can be found at the end of this article (scroll down) for those readers who would like to compare their own reactions to the two texts.
The researchers recruited 68 U.S. participants (mean age 36 years; roughly half were female; 81% were Caucasian; participants reported seven different religious affiliations, none of them Muslim) and randomly assigned them to the full literary narrative group (33 participants) or the synopsis group (35 participants). After the participants read the texts, they were asked to complete a number of questions about the text and its impact on them. They were also presented with 18 male faces that the researchers had designed with special software so that they appeared ambiguous in terms of Caucasian or Arab characteristics. For example, the faces combined blue eyes with darker skin tones. The participants were asked to grade the faces as being:
1) Arab
2) mixed, more Arab than Caucasian
3) mixed, more Caucasian than Arab
4) Caucasian
The participants were also asked to estimate the genetic overlap between Caucasians and Arabs on a scale from 0% to 100%.
Participants in the narrative fiction group were more likely to choose one of the ambiguous options (mixed, more Arab than Caucasian or mixed, more Caucasian than Arab) and less likely to choose the categorical options (Arab or Caucasian) than those who read the synopsis. Even more interesting is the finding that the average percentage of genetic overlap between Caucasians and Arabs estimated by the synopsis group was 33%, whereas it was 57% in the narrative fiction group.
Both of these estimates are way off. The genetic overlap between any one human being and another human being on our planet is approximately 99.9%. Even much of the 0.1% variation in the human genome sequences is not due to 'racial' differences. As pointed out in a Nature Genetics article by Lynn Jorde and Stephen Wooding, approximately 90% of total genetic variation between humans would be present in a collection of individuals from any one continent (Asia, Europe or Africa). Only an additional 10% genetic variation would be found if the collection consisted of a mixture of Europeans, Asians and Africans.
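To see how stark the mismatch is, here is a rough sketch of the arithmetic in Python (my own illustration; the 99.9% and 90% figures are the approximate values from Jorde and Wooding cited above):

```python
# Approximate figures from the Jorde & Wooding article cited above.
overlap_any_two_humans = 99.9                 # % of genome shared
total_variation = 100.0 - overlap_any_two_humans   # ~0.1% varies
within_continent_share = 0.90                 # ~90% of that variation is
                                              # found within one continent

# At most ~10% of the ~0.1% variation tracks continental ancestry:
between_group = total_variation * (1.0 - within_continent_share)
print(f"~{between_group:.2f}% of the genome")      # ~0.01%

# The study participants' estimates of Caucasian-Arab genetic overlap:
for group, estimate in [("synopsis group", 33), ("narrative group", 57)]:
    shortfall = overlap_any_two_humans - estimate
    print(f"{group}: estimated {estimate}%, short by ~{shortfall:.0f} points")
```

On these figures, only about one-hundredth of one percent of the genome could even in principle distinguish continental groups, yet both groups of participants guessed at differences thousands of times larger.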
It is surprising that both groups of study participants heavily underestimated the genetic overlap between Arabs and Caucasians, and that simply reading the fictional text changed their views of the human genome. This latter finding is also a red flag that informs us about the poor state of general knowledge of genetics, which appears to be so fragile that views can be swayed by nonscientific literary texts.
This study is the first to systematically test the impact of reading literary fiction on an individual's assessment of race boundaries and genetic similarity. It suggests that fiction can indeed blur the perception of race boundaries and challenge our stereotypes. The text chosen by the researchers is especially well-suited to defy stereotypical views held by the readers. The protagonist's Muslim husband was killed in the 9/11 attacks and she herself is being harassed by non-Muslim thugs. This may challenge assumptions held by some readers that only non-Muslims were the victims of the 9/11 attacks.
Reading the narrative text seemed to affect the readers in ways that went far beyond the content matter – the story of a Muslim woman showing significant courage while being threatened. The faces shown to the study participants were those of men, and the question of genetic overlap between Caucasians and Arabs was a rather abstract one which had little to do with Arissa's story. Perhaps Arissa's story had a broader effect on the readers. The study did not measure the impact of the narrative on additional stereotypes or assumptions held by the readers, such as those regarding other races or sexual orientations, but this is a question that ought to be investigated.
One of the limitations of the study is that it assessed the impact of the story only at a single time-point, immediately after reading the text. Without measuring the effect a few days or weeks later, it is difficult to ascertain whether this was a lasting effect. Another limitation of this study is that it purposefully chose an anti-stereotypical text, but did not test the opposite hypothesis, that some fictional narratives may potentially foster negative stereotypes.
One of my earliest memories of an English-language novel about Muslim characters is the spy novel "The Mahdi" by the British author A.J. Quinnell (pen name of Philip Nicholson), published in 1981. The basic plot is that (spoiler alert) US and British intelligence agencies want to manipulate and control the Muslim world by installing a 'Mahdi', the long-awaited spiritual and political leader of Muslims foretold by Muslim tradition. The ridiculous part of the plan is that the puppet leader is accepted by the Muslim world as the true incarnation of the Mahdi because of a green laser beam emanating from a satellite. The beam incinerates a sacrificial animal in front of a crowd of millions of Muslims at the Hajj pilgrimage and convinces them (and the rest of the Muslim world) that God sent the beam as a sign. The novel portrayed Muslims as gullible idiots who would simply accept the divine nature of a green laser beam. One can only wonder what impact reading an excerpt from that novel would have had on study participants' perception of race boundaries.
The study by Johnson and colleagues is an important contribution to the research of how reading can change our perceptions of race and possibly stereotypes in general. It shows that reading fiction can blur the perception of race boundaries, but it also raises a number of additional questions about how long this effect lasts, how pervasive it is and whether fiction might also have the opposite effect. Hopefully, these questions will be addressed in future research studies.
Image Credit: Saffron Woman by N.M. Rehman (generated from an attribution-free, public domain photograph)
Dan R. Johnson, Brandie L. Huffman & Danny M. Jasper (2014). Changing Race Boundary Perception by Reading Narrative Fiction. Basic and Applied Social Psychology, 36:1, 83-90. DOI: 10.1080/01973533.2013.856791
Excerpt of the literary fiction sample from "Saffron Dreams" by Shaila Abdullah
This is just an excerpt from the narrative sample used by the researchers, which was 3,108 words in length (pages 57-64 from the book):
"I got off the northbound No. 2 IRT and found out almost immediately that I was not alone. The late October evening inside the station felt unusually weighty on my senses.
I heard heavy breathing behind me. Angry, smoky, scared. I could tell there were several of them, probably four. Not pros, perhaps in their teens. They walked closer sometimes, and other times the heavy thud of spiked boots on concrete and clanking chains receded into the distance. They walked like boys wanting to be men. They fell short. Why was there no fear in my heart? Probably because there was no more room in my heart for terror. When horror comes face-to-face with you and causes a loved one's death, fear leaves your heart. In its place, merciful God places pain. Throbbing, pulsating, oozing pus, a wound that stays fresh and raw no matter how carefully you treat it. How can you be afraid when you have no one to be fearful for? The safety of your loved ones is what breeds fear in your heart. They are the weak links in your life. Unraveled from them, you are fearless. You can dangle by a thread, hang from the rooftop, bungee jump, skydive, walk a pole, hold your hand over the flame of a candle. Burnt, scalded, crashed, lost, dead, the only loss would be to your own self. Certain things you are not allowed to say or do. Defiant as I am, I say and do them anyway.
And so I traveled with a purse that I held protectively on one side. My hijab covered my head and body as the cool breeze threatened to unveil me. I laughed inwardly as I realized I was more afraid of losing the veil than of being mugged. The funny part of it is, I desperately wanted to lose my hijab when I came to America, but Faizan had stood in my way. For generations, women in his household had worn the veil, although none of them seemed particularly devout. It's just something that was done, no questions asked, no explanations needed. My argument was that we should try to assimilate into the new culture as much as possible, not stand out. Now that he was gone, losing the hijab meant losing a portion of our time together.
It had been just 41 days. My iddat, bereavement period, was over. Technically I was a free woman, not tied to anyone, but what could I do about the skeletons in my closet that wouldn't leave me alone?"
Excerpt of the Synopsis used by the researchers as a comparator:
This is the corresponding excerpt from the synopsis used by the researchers. The full-length synopsis was 491 words long:
"The scene starts with Arissa getting off the subway train. She is being followed. Most commuters have already returned home, so it is not the safest time to be traveling alone. Four people are walking behind her. Initially confused by the lack of fear in her heart, she realizes that it is the consequence of losing someone so close to her. It is ironic that she is wearing her hijab, a Muslim veil. She wanted to get rid of it when she came to America, but her husband, Faizon, insisted she keep it. Following his death, keeping the hijab was a way of keeping some of their time together. It has been 41 days since the attack, and Arissa's iddat, bereavement period, is over. She is a free woman, but cannot put aside her grave feelings of loss."
Monday, April 21, 2014
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art
by Yohan J. John
No one knows exactly how life began, but a pivotal chapter in the story was the formation of the first single-celled organism -- the common ancestor to every living thing on the planet. I like to think of the birth of life as the creation of the first boundary -- the cell membrane. That first cell membrane enclosed a drop of the primordial soup, creating a separation between inside and outside, and between life and non-life. Through this act of individuation the cell could become a controlled environment: a chemical safe zone for the sensitive molecular machinery needed to maintain integrity and facilitate replication. The game of life consists in large part of perpetuating the difference between inside and outside for as long as possible. Death, then, is the dissolution of difference. But the paradox at the heart of life is that the inside cannot survive without the outside. The cell requires raw materials -- nutrients and energy -- to sustain itself and to reproduce, and these must be sought outside the safe zone, in the wild and unpredictable outside world.
The cell membrane has a dichotomous role. It must preserve the cell’s identity as an entity that is distinct from everything outside it, but it must not be an impenetrable wall. It must be a gateway through which the cell can absorb raw material and eject waste, but it cannot allow the inside to become inundated by the outside. It meets this challenge by being selectively permeable, carefully overseeing the traffic between the inside and the outside. The cell membrane must also be flexible, because it takes part in locomotion and consumption. In a single-celled organism, the cell membrane is therefore a primitive sense organ, a transportation system and a digestive system, all rolled into one.
The birth of life was a moment of cleaving: when the first cell membrane enveloped its drop of primordial ooze, it cleaved the inside from the outside, but it also became the conduit through which the inside could cleave to the outside. Like Janus, the two-faced Roman god of beginnings and endings, of doors and passageways, the cell membrane is a sentry looking in two directions simultaneously. Given its role in cellular transaction, transition and transformation, the cell membrane’s function might even be described as a precursor to intelligence.
The connection between boundaries and intelligence may run quite deep. In multicellular organisms like humans, the skin is the boundary between inside and outside. Skin cells, as it turns out, are related to neurons. During embryonic development, cells in the ectoderm, which is the outermost layer of the embryo, gradually differentiate to become the cells of the skin and the nervous system. (Researchers have recently found ways of turning skin cells into neurons, suggesting that the line between these two kindred cells may be somewhat permeable.) The skin of a multicellular organism is much like the cell membrane of a single cell: it separates inside from outside, providing a physical boundary for the organism. But the inkling of intelligence in that first semipermeable membrane finds its full expression in the nervous system, which patrols a very different sort of boundary: the line between predictable and unpredictable, between known and unknown.
Life is an obstacle course full of things an organism needs or desires, like food and shelter, and things it would prefer to avoid, like predators or foul weather. Maximizing the good while minimizing the bad requires being able to use patterns in the environment to anticipate what is going to happen. Plants must be sensitive to the rhythmic pattern of the seasons. Animals in turn must predict the patterns of plants and other animals. The evolution of the central nervous system -- the brain and the spinal cord -- was a great leap forward in the pattern-recognition capabilities of living things. The ability to recognize and categorize the patterns in nature and use them to survive and thrive is central to intelligence. It allows living things to find (and create) islands of order and stability in a swirling sea of change and uncertainty.
But it’s dangerous to just stay put once you’ve found an island of order. Resources are limited and change is the only constant -- the boundary between the solid ground of reliable knowledge and the encircling sea of unpredictability is in a state of flux. Nature always seems to find a way of casting us out of the gardens of Eden we create or discover. A pattern-seeker must be vigilant, staying on the lookout for unforeseen dangers and new opportunities. This vigilance takes the form of exploration, and even very simple animals do it. Insect colonies have specialized scouts that search for fresh sources of food. Introduce a new object into the cage of a lab rat, and the first thing it does is investigate it thoroughly.
We tend to describe the behavior of animals in purely utilitarian terms. The exploratory behavior of rats, or birds, or bees, is just a combination of foraging for food, looking for mates, and keeping an eye out for predators. When it comes to human culture, however, utilitarianism can often seem like a bit of a stretch. Is it fear or hunger that drives people to investigate the depths of the ocean, or the far reaches of space?
We humans get bored on our islands of order, even though we need them for our survival and sanity. We also like to sail off into the unknown from time to time. What constitutes the unknown varies from person to person -- it’s not just scientists or philosophers that contend with it. Only a fraction of the world’s population has the inclination and the good fortune to experience firsthand the outer limits of scientific knowledge, but a far larger number of people can contend with the boundaries of their worldviews in the domains of art and culture. The edge is where the action is -- on the shore, where the chaotic sea meets the tranquil beach. But what is it that drives us to the experiential edge in the first place? And does it have anything in common with the forces that drive living things out of their comfort zones in search of sustenance?
The difference between a desire and a drive is that a desire subsides when the goal is reached, whereas a drive is independent of the attainment of the goal -- the act of striving becomes pleasurable in itself. Living beings have a variety of desires that can be temporarily satiated, but the lust for life is a drive, not a desire. In the long run life appears to revel in the very attempt to perpetuate itself. Intelligent beings, meanwhile, seem to revel in the attempt to expand their islands of order, fighting back the lapping waves of the unknown.
We have a name for the drive towards the unknown -- it’s called curiosity. Jürgen Schmidhuber, an artificial intelligence researcher, has a theory of “computational aesthetics” that offers us a vivid mathematical analogy for curiosity. The theory can be summed up in one bold assertion: that interestingness is the “first derivative” of beauty. Readers who detect a whiff of scientific imperialism will hopefully bear with me as I unpack this idea, which need not be taken as anything more than playful speculation. I admit, colloquial and intuitive concepts like “beauty” or “interestingness” often get bent out of shape a bit when scientists examine them, but this is not necessarily a bad thing. Sometimes we need to distance ourselves from our intuitions to discern their outlines more clearly.
According to Schmidhuber’s computational theory of aesthetics, the subjective beauty of a thing is defined in terms of the minimum number of bits required to describe it -- the fewer the bits, the greater the beauty. Since descriptions vary from person to person, beauty is in the eye of the beholder. A definition of beauty based on bits of information is not in itself particularly alluring, but it can be improved if we see it as an attempt to capture subjective simplicity or elegance. It is perhaps unsurprising that a scientist’s definition of beauty has much in common with Occam’s Razor.
However, beauty is not necessarily interesting. We also seek the shock of the new, the excitement of the unusual. So Schmidhuber goes on to define interestingness as the rate of change of beauty -- the time-derivative of the subjective description length. A derivative measures the rate of change of one thing with respect to something else. The time-derivative of distance is speed (the rate at which your distance from some point changes), and the time-derivative of speed is acceleration (the rate at which your speed changes). For something to be interesting then, the observer’s ability to describe it must change with time. So interestingness is a dynamic quality, whereas a thing can be beautiful even if it never changes.
Some examples will help us understand what this means. Most people will agree that staring at a blank screen is quite a boring experience. A blank screen is extremely simple from an information-theoretic perspective, and so its description length will be very short. The description might be something like “Every pixel is black”. There is clearly a pattern, but it’s trivially simple. The information on a blank screen can be easily compressed. White noise sits at the other extreme. Somewhat counter-intuitively, information theory tells us that random noise is rich in information, so its description length is extremely long. Totally random information cannot be compressed. An accurate description of white noise on a screen would require specifying what is happening in each and every pixel. If a pattern is something that has structure and internal coherence, then randomness is the absence of pattern. Most people find random white noise boring too. What people find interesting lies somewhere in the middle -- between what is too easily compressed, like a blank screen, and what is totally incompressible, like white noise. We like patterns that are simple, but not too simple; complex, but not incomprehensibly so.
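To make the compression intuition concrete, here is a minimal Python sketch (my own illustration, not code from Schmidhuber's paper) that uses zlib's compressed size as a crude stand-in for subjective description length: a blank "screen" compresses to almost nothing, white noise barely compresses at all, and a simple pattern with a little jitter lands in between.

    import zlib
    import random

    random.seed(42)

    WIDTH, HEIGHT = 100, 100
    N = WIDTH * HEIGHT

    # A blank "screen": every pixel is black (0). Trivially compressible.
    blank = bytes(N)

    # White noise: every pixel is an independent random byte. Incompressible.
    noise = bytes(random.randrange(256) for _ in range(N))

    # In between: a repeating gradient with a little random jitter.
    pattern = bytes(((x % 32) * 8) ^ random.randrange(4) for x in range(N))

    for name, img in [("blank", blank), ("pattern", pattern), ("noise", noise)]:
        compressed = len(zlib.compress(img, 9))
        print(f"{name:8s} raw: {len(img):6d} bytes  compressed: {compressed:6d} bytes")

On a typical run the blank screen shrinks to a few dozen bytes, the noise hardly shrinks at all, and the pattern falls in between -- which, on this account, is the region where interesting experiences live.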
Schmidhuber’s theory is couched in the language of computer science and artificial intelligence, which is why the concept of data compression plays such a prominent role. We don’t really know if the brains of humans and animals compress experience in the same sense that a computer algorithm does. But we do know that living things use pattern-recognition to make useful predictions about their environments. We compare the patterns we’ve encountered in the past with our present experience, and try to anticipate the future. We categorize the patterns we encounter -- poisonous or edible, sweet or bitter, friend or foe -- so that if we encounter them again, we know how to react. Rather than compressibility per se, perhaps what we find interesting is the possibility of enhancing our categories so they encompass more of our experiences. Knowledge consists of having comprehensive categories for as many experiences as possible, and knowing how to respond to each category.
What might interestingness look like? Let me describe a toy system that is confronted by something unexpected, and shows a spurt of interest. Let’s say we have a system that is experiencing something beautiful. The subjective beauty “B” can change over time. In the diagram above, beauty is the blue line, and it stays boringly constant for a while, but at the halfway point it suddenly changes. Imagine a pleasant but predictable movie that suddenly becomes unpredictable in the middle. The beauty increases! The system has an expectation “E” which in our toy system is a memory of the past value of B. The red line in the diagram is the expectation. The green line represents the interest level “I”, which depends on the difference between the beauty and the expectation. When expectation and reality don’t line up, the value of E is different from B, so the system’s interest level shoots up. But eventually E gets accustomed to the new value of B, and the interest level goes back to zero. If the system had perfect expectations and could perfectly predict the change to the value of B, then there would be no increase in the interest level. A curious system is addicted to these bursts of interest, and actively seeks them out. 
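Since the code behind the diagram isn't reproduced here, the following is a minimal Python sketch of a toy differentiator of this kind; the specifics (an exponential moving average for the expectation E, a step change in B halfway through, and tracking only positive surprise) are my own assumptions.

    # Toy "interestingness" differentiator: the expectation E tracks the
    # recent past of beauty B; interest I spikes when reality outruns E.

    ALPHA = 0.1   # how quickly the expectation adapts to reality
    STEPS = 100

    # Beauty stays constant, then steps up at the halfway point.
    B = [1.0] * (STEPS // 2) + [2.0] * (STEPS - STEPS // 2)

    E = B[0]      # expectation starts out in agreement with reality
    for t, b in enumerate(B):
        I = max(b - E, 0.0)       # interest: positive surprise only
        E += ALPHA * (b - E)      # expectation drifts toward current beauty
        if I > 0.01:
            print(f"t={t:3d}  B={b:.2f}  E={E:.2f}  I={I:.2f}")

The burst of interest appears at the step and then decays as E catches up with B; if the expectation could anticipate the step perfectly, I would stay at zero throughout.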
As it turns out, the brain’s dopamine neurons fire in bursts of this sort when something unexpectedly good happens. Researchers call this a “reward prediction error” signal, and it is one of the reasons many people think of dopamine as the “pleasure chemical”. But this misses a subtlety -- if the pleasure is completely predictable, the dopamine cells don’t fire. This dopamine cell pattern is more of a novelty signal than a pleasure signal. (There seem to be several other things that dopamine does, so even calling it a novelty chemical is an oversimplification.) Neural network theorists often employ the dopamine burst as a “reinforcement signal” that allows a network to learn from experience and improve its ability to categorize and predict. 
As we simplify, expand and refine our categories we push forward the boundary between what we understand and what we still don’t quite have a handle on. We expand our islands of order, reclaiming land from the sea of unpredictability. Many of the categories humans obsess about have little or nothing to do with the struggle to survive. Curiosity pushes us to proliferate our aesthetic categories -- and in extreme cases it leads to the infinitesimal parcellations of genre and sub-genre that the internet so effectively reveals and encourages. (I invite the reader who does not know what I am talking about to examine the various sub-genres of heavy metal music.)
Curiosity is the drive towards interestingness, and it brings us to the boundaries of what we understand. A trip to a modern art museum should adequately establish that we don’t just find any baffling experience interesting. We seek experiences that are in the sweet spot -- not totally predictable and monotonous, but not random and formless either. During an interesting experience we don’t know exactly what is going on, but we get the feeling that meaningful resolution is but a few moments away. So a Hollywood blockbuster that is too formulaic and predictable is not very interesting, but an experimental art film with no formula at all can bore us to tears too. We like movies with a few twists -- but in order to recognize them as twists we have to have some expectation of what normally happens. A really interesting movie flirts with the boundary between what we know well enough to anticipate, and what surprises and confounds us.
So how does curiosity help us “compress” or improve our categories? Think of the concept of genre. In order to get a subjective sense of what a genre is, you need to experience many examples. Curiosity is what draws you towards this experience. Even if you go to Wikipedia or tvtropes.com and read up on the conventions of a given genre, you still need first-hand experience to understand how those conventions manifest themselves. You need to listen to several blues songs before you can be sure you know what the basic blueprint is. And the more you listen, the more musical structure you can perceive and predict. Once you understand the conventions -- once you know what to expect -- you can experience a burst of interestingness when someone subverts those conventions and confounds your expectation. A blues aficionado is well placed to appreciate the way a band like Led Zeppelin reinterprets the genre’s conventions. In the experience of such aesthetic subversion, you are once again confronted by what is strange and unpredictable, and the curiosity engine becomes fired up once more.
What drives people to police their subjective aesthetic boundaries so zealously? What makes people so concerned with questions of authenticity or originality in art and music? I think going back to the cell membrane might give us some ways to think about such questions. The cell membrane separates inside from outside, mediating interactions between the two. In maintaining a chemical difference between the inside and the outside, it preserves the identity of the cell as an entity that is distinct from the environment. Perhaps aesthetic boundaries -- and mental boundaries more generally -- are central to our notions of identity. To carve out a distinct identity is to maintain a difference between an in-group (which could be just one person) and an out-group. Just as the cell membrane defines the contours of the cell, artistic and intellectual boundaries may define the contours of a personality, or of a community. For people whose identities are wrapped up in difference, to merge with the mainstream might seem a kind of cultural death: a dissolution of the boundary that sustains individuality and identity.
Staying on the boundaries of what is familiar in order to find sweet spots of interestingness allows us to expand our experiential horizons and reaffirm our existences as distinct individuals. But this can also be quite a tiring experience. What is true for a cell is true for an individual, and perhaps even for a culture -- maintaining a boundary takes energy! Most of us aren’t critics -- we can’t spend all our time refining our categories of experience, or sustaining idiosyncratic differences of taste and opinion. Sometimes we need to return to our comfort zones and replenish our supplies. Visiting a museum, for instance, is an experience that can be simultaneously interesting and mind-numbing. (In this age of endless online novelty, I can’t be the only one who seeks out tried and tested experiences -- comfort food, old familiar songs, trashy television -- as an antidote to too much interestingness!) Perhaps merging with the mainstream from time to time is not such a bad thing.
Individualism is taken as a self-evident virtue in modern liberal societies. But given all the effort involved in maintaining the boundary between inside and outside, between the Self and the Other, the opposite movement can be an act of liberation: dissolving the Self by forgoing, for a time, the maintenance of difference. Consider those moments during a sporting event (like a Wave) or a musical gathering (like a Rave) when everyone is moving in unison. It seems as if there is a kind of ecstasy in this voluntary surrender of individuality and difference.
Aesthetic experience, then, is a twofold process. On the one hand, it leads us to curiosity and wonder, which draw us away from our islands of certainty, transforming the contours of our selves. On the other hand, it offers us dissolution and union, which pull us back from the margins, towards community and commonality. Perhaps the dance of aesthetic experience is a microcosm of the great dance of life -- a dance that began with the undulations of that first cell membrane. We sway in the direction of the unknown, and then drift back to the comfort of the known.
Notes and References
 The Genesis story of the fall from grace tells of how man and woman were cast out from the Garden of Eden. In The Power of Myth, Joseph Campbell interprets the story as follows: “Whenever one moves out of the transcendent, one comes into a field of opposites. One has eaten of the tree of knowledge, not only of good and evil, but of male and female, of right and wrong, of this and that, and of light and dark.” Campbell’s “field of opposites” is where pattern-recognition and categorization happen -- it is the field of boundaries and differences, and also of self-consciousness. And this field is no paradise, because it is constantly threatened by the unfamiliar and the unpredictable.
 Jürgen Schmidhuber summarises his theory of aesthetics in a paper entitled “Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes”.
 The diagram shows the results of a little simulation I coded up in Python. It’s a rudimentary “differentiator” that compares the present reality (B) with the recent past (E), and constantly updates its expectations (E). The burst of interest (I) happens during the transient period when reality exceeds expectation (when B > E). Many simple models of dopamine cells use a similar principle. Similar mechanisms can also be employed for edge-detection in a visual image, a crucial stage in object recognition. The system I demonstrate is pretty rudimentary -- it just expects the present to resemble the recent past. You could say that a major goal of artificial intelligence and computational neuroscience is to create systems that have refined, flexible expectations with which to anticipate reality.
 Perhaps the hype cycle represents a burst of curiosity at the societal level. And perhaps social media frenzies are the dopamine bursts of the internet’s hive mind?
My Genome Report Card
by Carol A. Westbrook
Fewer than 100,000 people in the entire world have had their genomes sequenced. I am now one of them. As I wrote in 3QuarksDaily in December, I went into this with some trepidation--you never know what bad news lurks in your genome! I promised to give a report of my results, and here it is.
To get my genome sequenced, I enrolled in Illumina's "Understand Your Genome" program. Illumina is one of the few companies licensed by the FDA to perform whole genome sequencing (WGS) for medical diagnosis--other consumer products such as Ancestry.com, National Geographic's Geno 2.0, and 23andMe provide only a limited analysis. I sent in a blood sample in November, and in February received a detailed analysis by Illumina's genetic counselors. In March I attended the "Understand Your Genome" conference, where I received an iPad with my WGS uploaded into the "MyGenome" app, training on the use of the app, and a fascinating daylong seminar which explored the interpretation and medical uses of genome sequences. My daughter, a medical student, attended the program with me.
Viewed on the iPad, my genome sequence consists of two similar, but not identical, parallel lines of letters, one from each chromosome. There are only 4 letters, A, C, G, and T, representing the four DNA nucleotides that are aligned to make the sequence. A human sequence is about 6 billion nucleotides long, with half inherited from one parent and half from the other, plus a few new mutations that arose on their own, probably fewer than 100. Thus, from a family perspective, my DNA sequence is 50% identical to that of each of my parents, children and siblings, 25% identical to that of my grandparents and grandchildren, and so on down to my distant relatives. My genome is very similar to every other person's, but it is not identical to anyone's. No one has ever had the same DNA as me, and never will -- it is what makes me uniquely me.
How different am I from everyone else? My genetic analysis showed that I have 3,524,186 individual nucleotide differences from the "average" genome to which it was compared (reference genome hg19, NCBI build 37). This is about 0.05% variation, which is typical for most people. To put this in perspective, if you were to compare my DNA to that of our two most closely related primate species, bonobos and chimpanzees, the differences would be over 4%; when comparing me to Neanderthal man, however, you would find only 0.3% variation. So 0.05% is small enough to make me human, but large enough to make me a unique individual.
Of the 3.5 million variants in my genome, only about 13,000 produce changes in the protein-coding sequences of genes, impacting 1,222 "conditions" (diseases or traits). The great majority of these changes were considered "benign," meaning they have been validated as not causing disease, or they were "variants of unknown significance," or VUS. A VUS has not been linked to disease, but a link has not been excluded either; the significance of many of these VUSs will become clear as more genomes are sequenced and the database expands. We are not sure what to make of the other 3,511,186 variants that occur outside of genes--some may be significant, but most are probably silent passengers that were picked up during evolution. Again, we'll learn more as the database expands.
Of the 1,222 conditions for which I have variants, only 4 are significant. Three are genes for recessive diseases, which makes me only a carrier, since you need two copies to have a recessive disease. Two of these genes cause galactosemia and Bardet-Biedl syndrome, very rare debilitating diseases of children. My own children have a 50% risk of being carriers, though it is very unlikely that their partners are carriers too, so there is little risk that their future children will have the disease. They could be tested prior to having my grandchildren. The third recessive gene is for hemochromatosis, a disease of iron overload, which is easily treated in its early, silent stages, but can cause liver cirrhosis if it is not. The hemochromatosis gene is quite common, as one in 200 people of European background are carriers. In fact, it is possible that some of my relatives may actually have the disease; fortunately for them, hemochromatosis is easily diagnosed with a blood test for ferritin, or an inexpensive DNA test.
My surprising result was that I have both recessive genes for TPMT deficiency, which, strictly speaking, is not a disease but a variation in drug metabolism. A deficiency in TPMT, or "thiopurine S-methyltransferase," makes me unable to metabolize three medications: 6-mercaptopurine, 6-thioguanine, and azathioprine. If I took one of these medications I would get deathly ill; fortunately, these drugs are used only for leukemia treatment or transplants. I will keep this in mind should I ever need them. About 0.3% of the population also has TPMT deficiency.
Now on to the diseases which develop later in life, what I call the "AARP diseases." Many participants opted out of learning whether they have one of these scary genes, but I had already decided that I wanted everything revealed. For cancer risk, I was pleased to find that I don't carry any of the known genes. I was also relieved to find that I don't carry any of the known genes for neurologic conditions, in particular the genes for Parkinson's disease, which affected my late mother when she was in her 80s. I also do not carry the genes for early-onset Alzheimer's dementia. Illumina does not analyze for late-onset Alzheimer's dementia, the more common form that affects older adults, though we were given the coordinates if we wanted to check on our own. To do this I used the MyGenome app and punched in the WGS location. I found that I have one copy of APOE-4 (increased risk) and one copy of APOE-2 (protective). My risk, then, is neutral. Whew! Looks like I lucked out on the AARP diseases.
That, in a nutshell, is my genome report. Was it valuable? Absolutely. The value to me was not in learning what I have, but what I don't have. I was reassured that I am reasonably healthy, and likely to be so for a few more years. I don't have an increased cancer risk, and I don't have a tendency to blood clots. Except for TPMT deficiency I don't have any drug metabolism variants, which means my risk of unexpected side effects from medication is low. My health care costs are likely to remain lower than average and I will probably go on being healthy for a long time. These conclusions will influence both my health insurance choices and my financial planning for retirement.
You can begin to see the impact that WGS might have on your own health, as well as on your health care costs. Today there are only two medical uses for WGS that are accepted and reimbursed by insurance: the identification of unknown diseases of children, and cancer genome analysis for chemotherapy targets--and the cancer use is still not widely accepted. But there are many more ways we could improve medical care with WGS. Imagine the complications and deaths that would be avoided, and the wasted health dollars that would be saved, if your pharmacy had a list of your drug metabolism variants so they could identify--in advance--if you are likely to have serious side effects, or if a particular drug won't be effective for you. We could actually do this today! And if a person knew in advance he had a tendency to some diseases and not others, he could focus his health care dollars on screening and prevention strategies where they will have the most impact. This will be even more relevant as our knowledge base expands.
I cannot recommend WGS to everyone -- yet--but it's in our future, especially as the price is expected to drop below the $1000 mark, less than the cost of a single CAT scan. At present, too few genomes have been sequenced and correlated with medical information to be able to interpret much of what is present in a WGS. This will change over the next few years. There are projects throughout the globe that are doing just this, such as the 100,000 Genomes Project in the UK and The Million Human Genomes Project in China. In the US, the Personal Genome Project is collecting sequences such as mine to do these studies. The potential impact of WGS technology is enormous, as it will lead to more effective, personalized treatment of disease and, more importantly, to better health.
At some time in the not-too-distant future, everyone will have his or her own WGS. I'm pleased to be an early adopter.
Monday, March 31, 2014
Sharing Our Sorrow Via Facebook
by Jalees Rehman
Geteiltes Leid ist halbes Leid ("Shared sorrow is half the sorrow") is a popular German proverb which refers to the importance of sharing bad news and troubling experiences with others. The therapeutic process of sharing takes on many different forms: we may take comfort in the fact that others have experienced similar forms of sorrow, we are often reassured by the empathy and encouragement we receive from friends, and even the mere process of narrating the details of what is troubling us can be beneficial. Finding an attentive audience that is willing to listen to our troubles is not always easy. In a highly mobile, globalized world, some of our best friends may be located thousands of kilometers away, unable to meet face-to-face. The omnipresence of social media networks may provide a solution. We are now able to stay in touch with hundreds of friends and family members, and commiserate with them. But are people as receptive to sorrow shared via Facebook as they are in face-to-face contacts?
A team of researchers headed by Dr. Andrew High at the University of Iowa recently investigated this question and published their findings in the article "Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook". The researchers created three distinct Facebook profiles of a fictitious person named Sara Thomas who had just experienced a break-up. The three profiles were identical in all respects except for how much information was conveyed about the recent (fictitious) break-up. In their article, High and colleagues use the expression "emotional bandwidth" to describe the extent of emotions conveyed in the Facebook profile.
In the low bandwidth scenario, the profile contained the following status update:
"sad and depressed:("
The medium bandwidth profile included a change in relationship status to "single" in the timeline, in addition to the low bandwidth profile update "sad and depressed:(".
Finally, the high emotional bandwidth profile not only contained the updates of the low and medium bandwidth profiles, but also included a picture of a crying woman (the other two profiles had no photo, just the standard Facebook shadow image).
The researchers then surveyed 84 undergraduate students (enrolled in communications courses, average age 20, 53% female) and presented them with screenshots of one of the three profiles.
They asked the students to imagine that the person in the profile was a member of their Facebook network. After reviewing the assigned profile, each student completed a questionnaire asking about their willingness to provide support for Sara Thomas using a 9-point scale (1 = strongly disagree; 9 = strongly agree). The survey contained questions that evaluated the willingness to provide emotional support (e.g. "Express sorrow or regret for her situation") and network support (e.g. "Connect her with people whom she may turn to for help"). In addition to being queried about their willingness to provide distinct forms of support, the students were also asked about their sense of community engendered by Facebook (e.g. "Facebook makes me feel I am a part of a community") and their preference for online interactions over face-to-face interactions (e.g. "I prefer communicating with other people online rather than face-to-face").
High and colleagues hypothesized that the high emotional bandwidth profiles would elicit greater support from the students. In face-to-face interactions, it is quite common for us to provide greater support to a person – friend or stranger – if we see them overtly crying, so the researchers' hypothesis was quite reasonable. To their surprise, the researchers found the opposite. The willingness to provide emotional or network support was significantly lower among students who viewed the high emotional bandwidth profile! For example, average emotional support scores were 7.8 among students who saw only Sara's "sad and depressed:(" update (low bandwidth), but only 6.5 among students who also saw the image of Sara crying and the relationship status change to single (high bandwidth). Interestingly, students who preferred online interactions over face-to-face interactions, or those who felt that Facebook created a strong sense of community, responded positively to the high bandwidth profile.
There are some important limitations of the study. The students were asked to evaluate whether they would provide support to a fictitious person by imagining that she was part of their Facebook friends network. This is a rather artificial situation, because actual supportive Facebook interactions occur among people who know each other, and it is not easy to envision support for a fictitious person whose profile one sees for the first time. Furthermore, "emotional bandwidth" is a broad concept, and it is difficult to draw general conclusions about it from the limited differences between the three profiles. An in-depth analysis of "emotional bandwidth" would benefit from a larger sample size, a broader continuum of emotional bandwidth differences (e.g. profiles with pictures of a fictitious Sara Thomas who is not crying, or with other status updates) and scenarios that are not just related to break-ups (e.g. profiles of a fictitious grieving person who has lost a loved one).
The study by High and colleagues is an intriguing and important foray into the cyberpsychology of emotional self-disclosure and supportive communication on Facebook. This study raises important questions about how cyberbehavior differs from real world face-to-face behavior, and the even more interesting question of why these behaviors are different. Online interactions omit the dynamic gestures, nuanced intonations and other cues which play a critical role in determining our face-to-face behavior. When we share emotions via Facebook, our communication partners are often spatially and temporally displaced. This allows us to carefully "edit" what we disclose about ourselves, but it also allows our audience to edit their responses, unlike the comparatively spontaneous responses of a person sitting next to us. Facebook invites us to use the "Share" button, but we need to remember that online "sharing" is a sharing between heavily edited and crafted selves that is very different from traditional forms of "sharing".
Acknowledgments: The images from the study profiles were provided by Dr. Andrew High, copyright of the images - Dr. Andrew High.
Reference: A.C. High, A. Oeldorf-Hirsch & S. Bellur (2014). Misery rarely gets company: The influence of emotional bandwidth on supportive communication on Facebook. Computers in Human Behavior, 34, 79-88.