Monday, October 03, 2016
Gaston Bachelard’s New Scientific Spirit
by Aasem Bakhshi
Of all the critiques of Descartes (d. 1650), Bachelard’s stands out, for he selected those principles of the Cartesian method which were passed over in silence by other critics, presumably for their seeming innocence. For most detractors of the father of modern philosophy, the target has been the principle of universal doubt, the alienated and privileged ego, some step in the logic of the Meditations, some substantive philosophical or scientific doctrine, or the very quest for foundations. For Gaston Bachelard (d. 1962), on the other hand, it was the reductive nature of the Cartesian method and the resulting epistemology which rendered Descartes’s philosophy “too narrow to accommodate the phenomena of physics.” (New Scientific Spirit, p. 138) More particularly, Bachelard attacks the following rule, which according to Descartes summarized his whole method:
“The whole method consists entirely in the ordering and arranging of the objects on which we must concentrate our mind’s eye if we are to discover some truth. We shall be following this method exactly if we first reduce complicated and obscure propositions step by step to simpler ones, and then, starting with the intuition of the simplest ones of all, try to ascend through the same steps to a knowledge of all the rest.” (Descartes, Rules for the Direction of the Mind, Rule 5)
Bachelard objects to the reductive nature of the Cartesian method and complains that it fails to regain the unified and synthetic reality once it has been analyzed under the demands of method. Bachelard has a point here, given that it was this analytical tendency which led Descartes to the unbridgeable dualism of mind and body. On the Cartesian advice to reduce the complicated to the simple, Bachelard accuses Descartes of neglecting the reality of complexity, and of ignoring that there are certain qualities which emerge only in wholes and are not present in the parts.
Even some qualities of the parts, or simple realities, are not noticeable unless one first understands the complex ones. (Ibid., p. 142) This is illustrated with reference to the fact that the doubling of lines in the hydrogen atomic spectrum would not have been noticed had we not first understood the spectra of the alkali metals, while it had been presumed, on Cartesian lines, that the latter complex phenomena were to be understood after the pattern of the hydrogen model. (Ibid., p. 148) Taking a step even higher, Bachelard claims that “there are no simple phenomena; every phenomenon is a fabric of relations. There is no such thing as a simple nature, a simple substance; a substance is a web of attributes.” (Ibid., pp. 147-148) Thus, “no idea can be understood until it has been incorporated into a complex system of thoughts and experiences.” (Ibid.) This attack on the very existence of simple natures once again manifests Bachelard’s desire to criticize nothing less than what is essential to the Cartesian method.
The concept of “simple natures” was introduced by Descartes in his explanation of Rule 6, which, according to Descartes, contained the whole secret of his method and the most valuable insight of his treatise (i.e., the Regulae). Here Descartes says: “I call ‘absolute’ whatever has within it the pure and simple nature in question,” and in Rule 12 he further explains the nature of their simplicity: “we term ‘simple’ only those things which we know so clearly and distinctly that they cannot be divided by the mind into others which are more distinctly known.” Moreover, these simple natures are directly intuited by the intellect and are thus self-evident.
This Cartesian notion of intuition is subjected to a critique by Bachelard comparable to that made by Charles S. Peirce (d. 1914), according to whom “we have no power of intuition, but every cognition is determined logically by previous cognitions.” (“Some Consequences of Four Incapacities,” Philosophical Writings, ed. Justus Buchler, p. 230) Although Bachelard is not as emphatic in denying the very possibility of intuition, the conditions he imposes upon it end up at the same destination: “Intuitive ideas are made clear in a discursive manner, by progressive illumination, by illustration in a series of examples that bring one or another notion into clearer focus.” Thus, according to Bachelardian philosophy of science, science does not develop by accumulation, and this implication makes Bachelard one of the heralds of the contemporary trend in history and philosophy of science started by Thomas Kuhn.[i] He quotes Dupréel with approval: “Once an axiom is posited, a second act is always necessary to establish its application.” (Ibid., p. 144) Our initial intuition is completed by clarification through induction and synthesis. Furthermore, immediacy, the basic ingredient of the concept of intuition, is denied in a manner which brings Bachelard very close to Peirce: “Intuition is no longer direct and prior to understanding; rather it is preceded by extended study.” Two further points in connection with intuition are the following. (1) We are warned against ‘positivism at first sight,’ that is, assuming that the most apparent features of something are its most characteristic features. (2) There is the counter-intuitive nature of modern science: “nothing can be more anti-Cartesian than the slow change that has been brought about in our thinking by the progress of empirical science, which has revealed a wealth of information never suspected in our first intuition.” (Ibid., p. 142)
This second point draws upon the nature of modern science, which tends to augment the notion of mathematical intuition with empirical intuition, if not replace it completely. Pointing to the works of Poncelet, Chasles, Laguerre and Poincaré, Bachelard argues that the modern scientific spirit, through the ‘mathematization’ of the problem, emphasizes discovery more than solution. Thus what we are experiencing is an end of Cartesian thought in mathematics: “the way to rationalize the world is to complete it.” Mathematics, as Bachelard notes, has moved beyond the order of measure (as in the geometry, algebra and arithmetic of the Cartesian age) to become a tool for progressive scientific objectification. The metaphysician brooding over the nature of reality through primarily subjective means is thus transformed into a mathematician actively engaged in designing controlled experiments in his laboratory. Knowing well that he is confronted with a complex reality, he proceeds by mathematically modelling the phenomenon in the light of available empirical knowledge. He may choose to start from simple models (what might be compared to simple natures, so as not to rebel against the Cartesian spirit), which are only as simple as the choice of holding some inherent parameters constant, whether for designing more realistic experiments or for examining partial reality with some specific objective. Thus, it is a spiral involving progressive experimentation, models fitting the data, more data arriving from experimentation, and mathematically intensive fresh models best fitting these new datasets. In this sense, modern scientific belief lies in discovering the trends which best depict reality, rather than reality itself. This is a completely novel spirit, which Bachelard terms ‘progressive objectification.’
In order to illustrate “Cartesian partiality in favor of subjective experience,” Bachelard discusses the famous wax example given by Descartes and shows what the anti-Cartesian implications can be of applying the latest experimental techniques to wax. For Descartes the ball of wax was, says Bachelard, “a symbol of the fleeting character of material properties.” After describing in detail how a modern physicist would conduct an experiment with the piece of wax, using careful purification techniques, controlling the rate of melting and solidification with an electric oven, and even exposing the surface of the wax, he makes the following claim: “what is fleeting is not, as Descartes thought, the properties of the wax but the haphazard circumstances surrounding his observation of it.” (Ibid., p. 170) It is difficult to disagree with Bachelard’s conclusion from all this discussion that “scientific work is essentially complex” (p. 171) and that science, “rather than rely on whatever clear truths happen to lie ready at hand,” must “actively seek its complex truths by artificial means.” What is unclear is whether Descartes would have been impressed with all these details of new technological development; we can imagine him retorting that what is new is not the nature of things but only a matter of degree: he himself had pointed out that the extension of the piece of wax “increases if the wax melts, increases again if it boils and greater still if heat is increased.” (Second Meditation, Philosophical Works, vol. II, p. 21) What difference does it make, from the point of view of Descartes, if nowadays one can “regulate the temperature by adjusting the supply of power” or “precisely control the shape and surface composition of a wax droplet”?
The whole point of the wax example was to problematize shape, surface and other empirically knowable qualities in order to show that these cannot represent reality, and to argue for the existence of a substance “which is grasped solely by the faculty of judgement which is in my mind.” (Descartes, Ibid.) Bachelard is right that modern scientific and experimental techniques do give some order to conditions of observation that are confusedly taken as given by nature, but the question Descartes was raising through the wax example was not a scientific question but a philosophical one: can we identify the wax-in-itself with its observable qualities? This question might be rejected as absurd or answered differently than Descartes did,[ii] but we fail to see any important implication of the new technological developments for the Cartesian question regarding the mutability of qualities and the existence of an immutable substance knowable only by the mind. In fact, Descartes mentioned in passing another example serving his purpose: “… if I look out of the window and see men crossing the square… I normally say that I see the men themselves, just as I say that I see wax. Yet do I see any more than hats and coats which could conceal automatons? I judge that they are men.” (Ibid.) Has experimental science shown that qualities do not change, or has it simply gained more control over the process of their change? It would be logically relevant to the Cartesian argument only if it had done the former, and it is not clear that it has.
[i] Kuhn himself says, “I did read some Bachelard. But it was so close to my own thought that I did not feel I had to read lots and lots more.” “Paradigms of Scientific Evolutions: Thomas S. Kuhn,” in The American Philosopher: Conversations, ed. Giovanna Borradori (Chicago, 1994), p. 160.
[ii] One example is Pierre Gassendi, who took Descartes to task on this issue: “I am amazed at how you can say that once forms have been stripped off like clothes, you perceive more perfectly and evidently what the wax is.” (Meditations, Fifth Set of Objections, pp. 190-191)
- Gaston Bachelard, The New Scientific Spirit (Beacon Press, 1984).
- René Descartes, Rules for the Direction of the Mind; Discourse on Method; Meditations on First Philosophy, in The Philosophical Writings of Descartes, Volumes 1 & 2, eds. Cottingham, Stoothoff and Murdoch (Cambridge University Press, 1984).
- Charles Sanders Peirce, “Some Consequences of Four Incapacities,” in Philosophical Writings of Peirce, ed. Justus Buchler (Dover Publications, 1965), pp. 228-251.
For Further Reading:
- Mary Tiles, Bachelard: Science and Objectivity (Cambridge University Press, 1985)
- Mary Tiles, “Technology, Science and Inexact Knowledge,” in Continental Philosophy of Science ed. Gary Gutting (Blackwell, 2005), pp. 157-176.
Monday, August 08, 2016
Better Things for Better Living Through Chemistry: Seven Better Products We Didn't Need But Now Can't Live Without
by Carol Westbrook
"Our house will never have that old people smell!" my husband said when he discovered Febreze. Yes, it's true! Using highly sophisticated chemistry (described below), Febreze truly eliminates odors, rather than just masking them with scent like an air freshener. This was when I realized that the 1960s promise made by DuPont was being fulfilled: "Better Things for Better Living Through Chemistry!" I've put together seven of my favorite products that chemistry has improved, excluding the obvious true advances in medicine, electronics, energy and so on. Instead, I've highlighted products we probably did not even need, but now can't live without. Who made them, and how do they work?
1. Super Glue©
Super Glue delivers what its name promises: it can stick almost anything together, with a bond so strong that a 1-inch square can hold more than a ton. Besides household projects and repairs, it's an effective skin adhesive for cuts, and for those nasty dry-skin cracks you get on your hands in the winter. The myth is that Super Glue, or cyanoacrylate (C5H5NO2), was created as a surgical adhesive for WWII field hospitals. In reality, it was invented by Goodrich in 1942 as a potential plastic for gunsights; it was rejected because its annoying property of sticking to everything made it impossible to fabricate. Fast forward to 1951, when it was rediscovered by scientists Harry Coover and Fred Joyner at Eastman Kodak, who recognized its potential as a glue. Initially it was used industrially, but in the 1970s it was introduced as a consumer product that rapidly took off.
Cyanoacrylate is a small molecule that binds to itself creating long chains, or polymers, when exposed to water--including water vapor in the air. The polymers are extremely strong acrylic plastics that rapidly bind whatever they contact when polymerizing. Unlike many adhesives, Super Glue cures almost instantly and can stick your fingers together before you can wipe it off. For obvious reasons it is packaged in small, one-use containers.
2. Post-It Notes©
Post-It Notes revolutionized the modern office, second only to personal computers. Offices are covered with these little papers, which have become ubiquitous in your home, too. The Windows operating system even has digital yellow "post-it" notes to "paste" on your screen.
The story of the Post-It's invention is a great example of collaboration between industry and the entrepreneur. In 1974, Art Fry, an employee at 3M (maker of Scotch tape), learned of his colleague Dr. Spencer Silver's 1968 invention of a low-tack, reusable, pressure-sensitive adhesive, which so far had no commercial application. Fry developed the idea under 3M's officially sanctioned "permitted bootlegging" policy. He used yellow paper since that was the only scrap paper at the lab next door. After a mediocre launch in 1977, they were re-branded and released in 1979 as Post-It Notes.
These little colored papers use a re-adherable, pressure-sensitive glue made with tiny, variably-sized microcapsules of adhesive, 10 to 100 times larger than the glue particles on conventional sticky tape. Each press releases only enough adhesive to hold the little note in place, but because of the large number of glue capsules the notes can be re-used many times before they give up the ghost. Similarly, the USPS uses pressure-sensitive adhesives on lick-free stamps--another office-changing technology; USPS stamps are designed so they cannot be peeled and re-used, much to the dismay of stamp collectors who cannot easily remove them from envelopes.
3. Teflon©

Like many game-changing inventions, Teflon was discovered by accident. Dr. Roy Plunkett, a research scientist at DuPont in New Jersey in 1938, was looking for non-toxic alternatives to the refrigerants then in use, sulfur dioxide and ammonia. One candidate chemical, tetrafluoroethylene (TFE), was stored as a gas in a small cylinder, but when the cylinder was opened later the gas was gone; instead it contained a waxy white powder. The TFE had polymerized to polytetrafluoroethylene, PTFE.
Tests showed PTFE to be one of the most frictionless substances known to man. It was also non-corrosive, chemically stable, and melted only at very high temperatures. Unlike polymers such as Super Glue, the PTFE polymer has virtually no van der Waals (adhesive) forces, the molecular "pull" that makes things stick together. Three years later PTFE was patented by DuPont as Teflon©, and sold for industrial use. In 1962 someone invented a method to make this non-sticky stuff adhere to a frying pan, creating a pan so slippery that you could fry an egg without using oil. And our way of cooking changed forever.
4. Febreze© odor remover
Febreze was introduced as a laundry cleaning aid by Procter & Gamble in 1996. Its invention is attributed to Toan Trinh, a professor of chemistry at the University of Saigon, who was recruited by Procter & Gamble and relocated to the US just one week before the fall of Saigon in 1975! Initially created as a laundry additive, its odor-removing property was quickly recognized. The active ingredient in Febreze is beta-cyclodextrin, a sugar molecule shaped like a donut, shown in the figure at the right. When you spray Febreze into the air or on a garment, the smelly molecules dissolve in the droplets and are quickly drawn into the "donut." The smelly molecule is still present, but the donut-smell complex cannot bind to the receptors in your nose that recognize odors, so you can't smell it. The odiferous molecules are washed out with the laundry, or dry with the droplets, and the smell is gone forever.
5. DEET mosquito repellent
DEET (N,N-diethyl-m-toluamide) was developed in 1944 by the US Dept. of Agriculture for the Army to use in jungle warfare, after several disastrous experiences in WWII. DEET failed as a pesticide, but was noted to keep biting insects away. Used in wartime Vietnam and Southeast Asia, DEET entered civilian life in 1957.
DEET repels mosquitoes, flies, chiggers and ticks more effectively than natural products such as citronella. It was long thought to work by blocking the insect's receptors for 1-octen-3-ol, a substance present in human sweat and breath that is a main attractant for these pesky bugs. Newer research, though, suggests that DEET may do more than distract mosquitoes: it actively drives them away. It is now indispensable for worry-free summers outdoors, keeping us free from Zika, Lyme disease, West Nile virus, and St. Louis encephalitis in our own backyards.
6. Press-N-Seal© plastic wrap
Plastic film for food storage has been around for a long time, but Press-N-Seal is a quantum improvement. Standard kitchen plastic wrap is made of low-density polyethylene. It is a barrier to water and air, but does not stick well to itself or to containers. Press-N-Seal, on the other hand, sticks to everything, including itself, making a watertight seal. I tried covering a glass of ice water with this miraculous film, and sure enough, it held the water even when turned upside down! The more you use it, the more uses you can think of--protecting your computer keyboard while cooking, covering your morning coffee mug when commuting, or wrapping your wet toothbrush for traveling. There are even online user groups that sing its praises!
Peter W. Hamilton and Kenneth S. McGuire, two scientists at Procter & Gamble, invented and patented the underlying technology in 1996. The sticky properties of this thin plastic film are due to the fact that its surface is covered by sharp, raised packets that contain a pressure-sensitive adhesive, not unlike Post-It Notes. The adhesive in Press-N-Seal, though, is edible, similar to chewing gum, which makes it safe for food storage.
7. Hazel Bishop's Lasting Lipstick
Women have painted their lips since the dawn of civilization, luring their hunter-gatherer husbands back home to the farm. Lipstick as we know it, in cylindrical containers, made its appearance in Europe in 1911, and the US in 1915. Made with natural dyes such as carmine red, in a base of beeswax and castor oil, the color didn't last, and it smeared off when kissing. Frequent trips to the powder room were necessary to re-apply it. Lipstick made to last longer would dry out and become irritating on the lips.
Hazel Bishop was an organic chemist who worked for Standard Oil developing wartime bomber fuels. In the evenings, in her own New York kitchen, she created the first long-lasting, no-smear lipstick by incorporating lanolin into the base so the lips would not dry out. It was a big hit when introduced in 1949. "It stays on YOU... not on HIM," the ads promised. By the 1990s, cosmetic companies introduced 2-step, long-lasting lip color, requiring the application of a transparent acrylate polymer over the colored base, eventually incorporating the acrylate directly into a one-step product. This long-lasting technology is now used in mascaras, concealers, foundations, eye shadows and even sunscreens, keeping you beautiful longer, in any weather.
The list is endless--these are only a few of my favorites. One common theme stands out, however. Almost all were the result of serious efforts by chemists to create a truly significant advance that failed... only to be given new life by another equally creative genius who recognized the potential to appeal to the average consumer, who is always looking for the next, better thing.
Monday, July 18, 2016
Algocracy: Outsourcing Governance to Algorithms
by Muhammad Aurangzeb Ahmad
In the late 17th century Gottfried Leibniz conceived of a machine that could be used to settle arguments, so that instead of arguing, people would simply settle disputes by saying "let us calculate." On closer inspection this idea bears an uncanny resemblance to settling disputes by delegating decisions to algorithms. This is no longer the realm of science fiction: not only do algorithms already make decisions on our behalf, they also make biased decisions on our behalf. Welcome to the world of Algocracy, a system of governance based on rule by algorithms.
The problem of Algocracy was brought to the fore recently when reporters from ProPublica did an investigative analysis of prisoner-scoring software and determined that it was negatively biased against black people. Consider two people, one black and one white, with the same criminal record: a commercial tool called COMPAS, employed by law-enforcement agencies, would give the black person a higher risk score. This results in tougher convictions and longer sentences for black people. ProPublica found a large number of examples where a white person with a lower risk score went on to commit more crimes while a black person with a higher score committed none. Even Eric Holder weighed in on this debate, cautioning that such scoring systems bias the system against certain minority groups. One implication here is that algorithms already have much say in how our society is run. Given the proliferation of big data, the role of algorithmic governance is only going to get bigger, not smaller. We are already living under an Algocracy; it's just that it is not evenly distributed yet.
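The mechanics of such a disparity need no explicit race variable at all; a proxy feature correlated with group membership is enough. A minimal, purely illustrative sketch (the features and weights here are invented for the example and have nothing to do with COMPAS's actual, proprietary model):

```python
# Toy risk-scoring model: race is never an input, but a proxy feature
# (here, the arrest rate of a defendant's neighborhood, which can
# correlate with race via policing patterns) inflates scores anyway.
# All features and weights are invented for illustration.

def risk_score(prior_convictions, neighborhood_arrest_rate):
    # A simple linear score: identical criminal records can still
    # yield different scores if the proxy feature differs.
    return 2 * prior_convictions + 5 * neighborhood_arrest_rate

# Two defendants with identical criminal records...
defendant_a = risk_score(prior_convictions=1, neighborhood_arrest_rate=0.8)
defendant_b = risk_score(prior_convictions=1, neighborhood_arrest_rate=0.2)

print(defendant_a)  # 6.0 -- scored as higher risk
print(defendant_b)  # 3.0 -- scored as lower risk
```

The point of the sketch is only that a model can be "blind" to a protected attribute on paper while reproducing its effects through correlated inputs.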
Where does the allure of Algocracy come from? What Algocracy offers us is an “opportunity” to absolve ourselves of moral responsibility by outsourcing it to machines, a point raised multiple times by the philosopher Evan Selinger.
While much social progress has been made in the US since the end of the Jim Crow era and the civil rights movement, institutional racism is much harder to eradicate. With laws that are on the books one can point to the individual people or groups who drafted them, but with algorithms one can absolve oneself of responsibility and point to the alleged impartiality of algorithms. Even if we assume that the algorithms themselves can be unbiased, at the very least the data that is fed to the algorithms can introduce bias. I have argued elsewhere that the data fed into algorithms can make them take a certain political ‘stance.’
To drive home this point, consider what Google’s suggest function returns when one searches for information about different ethnic and religious groups. Notice that the terms associated with white people are neutral, but that is not the case when searching for black people or Muslims. Now, it is not that Google or other search engines are biased against certain groups; rather, the suggestions are based on what users search for. The bias shown by the search algorithms is actually the bias of the people who use the search engine.
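How a frequency-driven suggestion system inherits its users' biases can be sketched in a few lines. This is a toy model with an invented query log, not Google's actual (far more sophisticated) algorithm:

```python
from collections import Counter

# Toy query log: each entry is one past user search.
# The entries are invented for illustration.
query_log = [
    "why do cats purr",
    "why do cats sleep so much",
    "why do cats purr",
    "why do cats knead",
    "why do cats purr",
]

def suggest(prefix, log, k=2):
    # Rank completions of the prefix purely by how often
    # past users typed them.
    matches = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in matches.most_common(k)]

print(suggest("why do cats", query_log))
# Whatever users typed most often comes first: the "bias" of the
# suggestions is just the aggregated bias of past searches.
```

Swap the innocuous cat queries for loaded searches about an ethnic or religious group and the same neutral ranking mechanism will dutifully surface the loaded completions.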
What this small example shows is that stating that algorithms are always unbiased is problematic at best. If usage data can bias suggestions, then imagine what would happen if the data were ‘hand-crafted’ to bias the system. Thus there is one thing that Leibniz did not anticipate: algorithms are designed by people, who may be biased themselves. While Leibniz may have been the grandfather of this concept, its latest incarnation was brought to the fore by A. Aneesh in his 2006 book Virtual Migration, in which he observed the potential of computer-based systems to constrain human decision making.
Largely hidden from public consciousness, Algocracy has already penetrated large parts of our social infrastructure. Consider how advocacy organizations and lobbying groups routinely rank congressmen on their favorability based on their past voting records. Software can be buggy, and we already have instances where bugs or mistakes in code were responsible for wrongful foreclosures. However, reversing such decisions was difficult because nobody expects a computer to be wrong. The problem is that one can pick and choose data to ‘prove’ any point, and the same goes for the data that goes inside the machine. Moreover, the data being collected may be biased to begin with, without the people collecting it being aware of the fact.
Even if one argues that bias cannot be eliminated, we can at least agree that it can be greatly reduced. A case in point is Google's automatic image-tagging fiasco from last year, which mistakenly tagged black people as gorillas. While it may be that none of the programmers who worked on the system was racist, the net effect of using biased data was a situation with strong racist undertones.
Let's not lull ourselves into the misconception that it will just be governments availing themselves of the opportunity to automate. Corporations already have enough resources, and in some cases even more computing resources than most governments; the infamous episode where Target was able to predict that a teenager was pregnant is a case in point. Now imagine the Saudi Department of Propagation of Virtue and Elimination of Vice run by a “virtuous” computer: by monitoring all of your activities it could even devise a virtue score analogous to a credit rating. An even worse scenario would be an Algocracy modeled after the mind of the Dear Leader of North Korea, where punishment for thought crimes would indeed become a reality. These scenarios may sound far-fetched, but one only has to look at the People's Republic of China to see that it is already rolling out such a system to keep its people in line.
Once we start down this route, it may be next to impossible to draw the line of demarcation between human and machine governance. Why even have human lawmakers make decisions? After all, algorithms can always be more efficient than people. So let's have each congressional district code its own lawmaker bot and then let the bots decide laws on our behalf. Of course, reality will be different: just as Wikipedia spawned Conservapedia as its counterpoint, one can imagine Liberal and Conservative versions of Algocracy.
Monday, June 20, 2016
The Mesh of Civilizations in Cyberspace
by Jalees Rehman
"The great divisions among humankind and the dominating source of conflict will be cultural. Nation states will remain the most powerful actors in world affairs, but the principal conflicts of global politics will occur between nations and groups of different civilizations. The clash of civilizations will dominate global politics."
—Samuel P. Huntington (1927-2008), "The Clash of Civilizations"
In 1993, the Harvard political scientist Samuel Huntington published his now infamous paper The Clash of Civilizations in the journal Foreign Affairs. Huntington hypothesized that conflicts in the post-Cold War era would occur between civilizations or cultures and not between ideologies. He divided the world into eight key civilizations which reflected common cultural and religious heritages: Western, Confucian (also referred to as "Sinic"), Japanese, Islamic, Hindu, Slavic-Orthodox, Latin-American and African. In his subsequent book "The Clash of Civilizations and the Remaking of World Order", which presented a more detailed account of his ideas and how these divisions would fuel future conflicts, Huntington also included the Buddhist civilization as an additional entity. Huntington's idea of grouping the world into civilizational blocs has been heavily criticized for being overly simplistic and ignoring the diversity that exists within each "civilization". For example, the countries of Western Europe, the United States, Canada and Australia were all grouped together under "Western Civilization" whereas Turkey, Iran, Pakistan, Bangladesh and the Gulf states were all grouped as "Islamic Civilization", despite the fact that the member countries within these civilizations exhibit profound differences in terms of their cultures, languages, social structures and political systems. On the other hand, China's emergence as a world power that will likely challenge the economic dominance of Western Europe and the United States lends credence to a looming economic and political clash between the "Western" and "Confucian" civilizations. The Afghanistan war and the Iraq war, between military coalitions from the "Western Civilization" and nations ascribed to the "Islamic Civilization", both occurred long after Huntington's predictions were made and are used by some as examples of the hypothesized clash of civilizations.
It is difficult to assess the validity of Huntington's ideas because they refer to abstract notions of cultural and civilizational identities of nations and societies without providing any clear evidence on the individual level. Do political and economic treaties between the governments of countries – such as the European Union – mean that individuals in these countries share a common cultural identity?
Also, the concept of civilizational blocs was developed before the dramatic increase in the usage of the internet and social media which now facilitate unprecedented opportunities for individuals belonging to distinct "civilizations" to interact with each other. One could therefore surmise that civilizational blocs might have become relics of the past in a new culture of global connectivity. A team of researchers from Stanford University, Cornell University and Yahoo recently decided to evaluate the "connectedness" of the hypothesized Huntington civilizations in cyberspace and published their results in the article "The Mesh of Civilizations in the Global Network of Digital Communication".
The researchers examined Twitter users and the exchange of emails between Yahoo-Mail users in 90 countries with a minimum population of five million. In total, they analyzed "hundreds of millions of anonymized email and Twitter communications among tens of millions of worldwide users to map global patterns of transnational interpersonal communication". Twitter data is public and freely available for researchers to analyze, whereas emails had to be de-identified for the analysis. The researchers did not have any access to the content of the emails; they only analyzed whether users in any given country were emailing users in other countries. The researchers focused on bi-directional ties. This means that ties between Twitter users A and B were only counted as a "bi-directional" tie or link if A followed B and B followed A on Twitter. Similarly, for the email analysis, the researchers only considered email ties in which user X emailed user Y and there was at least one email showing that user Y had also emailed user X. This requirement for bi-directionality was necessary to exclude spam tweets or emails, in which one user may send out large numbers of messages to thousands of users without there being any true "tie" or "link" between the users that would suggest an active dialogue or communication.
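The bi-directionality filter the researchers describe amounts to keeping only those edges in a directed graph whose reverse edge also exists. A minimal sketch of that filtering step (the sample edges are invented; this is not the paper's actual pipeline):

```python
# Keep only mutual (bi-directional) ties from a directed edge list,
# as in the study's filtering step. Sample edges are invented:
# (x, y) means "x follows/emails y".
directed_ties = {
    ("A", "B"), ("B", "A"),   # mutual: A and B follow each other
    ("A", "C"),               # one-way, spam-like: discarded
    ("D", "E"), ("E", "D"),   # mutual
}

def mutual_ties(edges):
    # An undirected tie {x, y} survives only if both (x, y) and
    # (y, x) are present in the directed edge set. Using frozenset
    # collapses the two directions into a single undirected tie.
    return {frozenset((x, y)) for (x, y) in edges if (y, x) in edges}

print(mutual_ties(directed_ties))
# Two mutual ties survive: {A, B} and {D, E}; the one-way A->C edge is dropped.
```

Aggregating such surviving ties by the countries of the two endpoints yields the country-to-country link weights behind the cluster graph discussed next.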
The researchers then created a cluster graph, which is shown in the accompanying figure. Each circle represents a country, and the 1,000 strongest ties between countries are shown. The closer one circle is to another, the more email and Twitter links exist between individuals residing in the two countries. To keep the mathematical analysis unbiased, the researchers did not assign any countries to "civilizations" in advance, but they did observe key clusters of countries emerge which were very close to each other in the graph. They then colored the circles to reflect the civilization categories defined by Huntington, drew ties within a civilization in that civilization's color, and kept ties between countries of two distinct civilizations in gray.
At first glance, these data may appear to be a strong validation of the Huntington hypothesis because the circles of any given color (i.e. a Huntington civilization category) are, on average, far closer to each other than to circles of a different color. For example, countries belonging to the "Latin American Civilization" (pink) cluster strongly together, and some countries such as Chile (CL) and Peru (PE) have nearly exclusive intra-civilizational ties (pink). Some of the "Slavic-Orthodox Civilization" (brown) countries show strong intra-civilizational ties, but Greece (GR), Bulgaria (BG) and Romania (RO) are much closer to Western European countries than to other Slavic-Orthodox countries, likely because these three countries are part of the European Union and have shared a significant cultural heritage with what Huntington considers the "Western Civilization". "Islamic Civilization" (green) countries also cluster together, but they are far more spread out. Pakistan (PK) and Bangladesh (BD) are far closer to each other and to India (IN), which belongs to the "Hindu Civilization" (purple), than to Tunisia (TN) and Yemen (YE), which Huntington also assigned to the "Islamic Civilization".
One obvious explanation for increased email and Twitter exchanges between individuals belonging to the same civilization is the presence of a shared language. The researchers therefore analyzed the data by correcting for language and found that even though language did contribute to Twitter and email ties, the clustering according to civilization was present even when taking language into account. Interestingly, of the various factors that could account for the connectedness between users, religion (as defined by the World Religion Database) appeared to be one of the major factors, consistent with Huntington's focus on religion as a defining characteristic of a civilization. The researchers conclude that "contrary to the borderless portrayal of cyberspace, online social interactions do not appear to have erased the fault lines Huntington proposed over a decade before the emergence of social media." But they disagree with Huntington in that the closeness of countries within a civilization does not necessarily imply conflicts or clashes with other civilizations.
It is important not to over-interpret one study of Twitter and email links and make inferences about broader cultural or civilizational identities just because individuals in two countries follow each other on Twitter or write each other emails. The study did not investigate identities, and some of the emails could have been exchanged as part of online purchases without indicating any other personal ties. However, the data presented by the researchers do reveal some fascinating new insights about digital connectivity that are not discussed in much depth by the researchers. China (CN) and Great Britain (GB) emerge as some of the most highly connected countries at the center of the connectivity map, with strong extra-civilizational ties, including to countries in Africa and to India. Whether this connectivity reflects the economic growth and increasing global relevance of China, or a digital footprint of the British Empire even decades after its demise, would be a worthy topic of investigation. The public availability of Twitter data makes it a perfect tool to analyze the content of Twitter communications and thus define how social media is used to engage in dialogue between individuals across cultural, religious and political boundaries.
Huntington, S. P. (1993). The Clash of Civilizations? Foreign Affairs, 72(3), 22-49.
State, B., Park, P., Weber, I., & Macy, M. (2015). The mesh of civilizations in the global network of digital communication. PLoS ONE, 10(5), e0122543.
Monday, May 23, 2016
Kind Of Like A Metaphor
"I got my own pure little bangtail mind and
the confines of its binding please me yet."
~ Neal Cassady, letter to Jack Kerouac
One of the curious phenomena that computing in general, and artificial intelligence in particular, has emphasized is our inevitable commitment to metaphor as a way of understanding the world. Actually, it is even more ingrained than that: one could argue that metaphor, quite literally, is our way of being in the world. A mountain may or may not be a mountain before we name it - it may not even be a mountain until we name it (for example, at what point, either temporally or spatially, does it become, or cease to be, a mountain?). But it will inhabit its ‘mountain-ness' whether or not we choose to name it as such. The same goes for microbes, or the mating dance of a bird of paradise. In this sense, the material world existed, in some way or other, prior to our linguistic entrance, and these same things will continue to exist following our exit.
But what of the things that we make? Wouldn't these things somehow be more amenable to a more purely literal description? After all, we made them, so we should be able to say exactly what these things are or do, without having to resort to some external referents. Except we can't. And even more troubling (perhaps) is the fact that the more complex and representative these systems become, the more irrevocably entangled in metaphor do we find ourselves.
In a recent Aeon essay, Robert Epstein briefly guides us through a history of metaphors for how our brains allegedly work. The various models are rather diverse, ranging from hydraulics to mechanics to electricity to "information processing", whatever that is. However, there is a common theme, which I'll state with nearly the force and certainty of a theorem: the brain is really complicated, so take the most complicated thing that we can imagine, whether it is a product of our own ingenuity or not, and make that the model by which we explain the brain. For Epstein - and he is merely recording a fact here - this is why we have been laboring under the metaphor of brain-as-a-computer for the past half-century.
But there is a difference between using a metaphor as a shorthand description, and its broader, more pervasive use as a guide for understanding and action. In a 2013 talk, Hamid Ekbia of Indiana University gives the example of the term ‘fatigue' used in relation to materials. Strictly speaking, ‘fatigue' is "the weakening of a material caused by repeatedly applied loads. It is the progressive and localised structural damage that occurs when a material is subjected to cyclic loading." (I generally don't like linking to Wikipedia but in this instance the banality of the choice serves to underline the point). Now, for materials scientists and structural engineers, the term is an explicit, well-bounded shorthand. One doesn't have pity for the material in question; perhaps a poet would describe an old bridge's girders as ‘weary' but to an engineer those girders are either fatigued, or they are not. Once they are fatigued, no amount of beauty rest will assist them in recuperating their former, sturdy (let alone ‘well-rested' or ‘healthy') state.
The term ‘fatigue' is further instructive because it illustrates the process by which metaphor spills out into the world. If a group of engineers is having a discussion around an instance of ‘fatigue', their use of the term in conversation is precise and understood. This is a consequence of the consistency of their training just as much as its relevance to the physical phenomenon. After all, it's easier to say "the material is fatigued" than "the material has been weakened by the repeated application of loads, etc." But the integrity of a one-to-one relationship between a word and its explanation comes under pressure (so to speak) when this same group of experts presents its findings to a group of non-experts, such as politicians or citizens. Of course, taken by itself, the transition of a phrase such as ‘fatigue' does not have overly dramatic implications. What it does do, however, is invite the dissemination of other, adjacent metaphors into the conversation. Soon enough ‘fatigue', however rigorously defined, accumulates into declarations of the ‘exhausted' state of our nation's ‘ailing' infrastructure. There are no technical equivalents to these terms, which call us to action by insinuating that objects like roads and tunnels may be feeling pain, whereas at best we are the recipients of said suffering.
Intriguingly, the complexity of this semiotic opportunism ramps up quickly and considerably. Roads and bridges may be things that we have built, but they still exist in the world, and will continue to exist whether we fix them or not. They may remind us of our success or inadequacy, but their intended purpose is almost never unclear. On the other hand, there are other things that we have built, things that exist in a much more precarious sense - it may even be a stretch to call them objects - and whose success qua objects is also much more variable. This is where we find computation, software and artificial intelligence.
The purpose of computation, broadly speaking, is to perform an action - some kind of service, or analysis, that may or may not be regular (in the sense that it can be anticipated) and is rarely, if ever, regulated. In the world of infrastructure, you either make it across the bridge or you don't, and there are regulations meant to ensure a positive outcome. As Yoda advises, "Do or do not. There is no try." But computation is different. I am not talking about something linear, like programming a computer to add two numbers. With a search engine, for example, you may find the information or not; or what you find may be good enough, or you may think it's good enough but it's really not, and you'll never know. The service, or rather the experience of the service, becomes the object; the code, which is perhaps the true object, is obscured from your view. And we tend to be poor at processing this kind of ambiguity, and when faced with ambiguity we reach for metaphor as a sense-making bulwark against the messiness of the unknown.
As we expect more of our computing technologies, the ensuing purposes also shift temporally. Our software models the world around us, and the way in which we inhabit the world. As such, its utility is displaced into the future: we value it for its predictive nature. We want it to anticipate not simply what we need right now (let alone what we needed yesterday) but what we might want tomorrow, or six months from now. At this point we find ourselves squarely in a place of mind. That is, we expect our inventions to become extensions of ourselves, because we cannot seem to make the leap that something non-human can have any chance of assisting us at being better humans. Software (and specifically AI) is singularly pure in this regard, although traces already exist in previous technologies. So while we don't worry about making our bridges anything more than functional and, somewhat secondarily, aesthetically pleasing, we tend to additionally attribute human-like traits to ships, perhaps because we perceive our lives as much more committed to the latter's successful functioning. But while we may ascribe personality to ships, we go a step further and come to expect intelligence of the software that we make: witness the proliferation of chatbots and personal assistants, to the point that we can now consult articles about why chatbot etiquette may be important.
In the meantime, these technologies themselves are being generated via metaphor. After all, these are exceedingly complex pieces of software, designed, implemented and refined by hundreds of software engineers and other staff. It is inevitable that there should be philosophies that guide these efforts. According to Ekbia, every one of the ‘approaches' is fundamentally metaphorical in nature. That is, if you decide you're going to write software that will appear intelligent to its users, you have to put a stake in the ground as to what intelligence is, or at least how it is come by. And since we haven't really figured out how intelligence arises within ourselves to begin with, we wind up with a series of investments in a mutually exclusive array of metaphors.
Is intelligence symbolic, and therefore symbolically computable? People like Stephen Wolfram would say yes. Or perhaps intelligence arises if you have enough facts and ways to relate those facts; in which case Cyc and other expert systems are your ticket. Another approach to modeling intelligence has been getting the most press lately: the idea of reinforcement learning of neural networks. (Of course, this last one models how neurons work together within our own brains, so it is a double metaphor.)
The point is that all of these ‘approaches' are metaphorical in substance. We still have not been able to resolve the mind-body problem, or how consciousness somehow arises from the mass of neurons that are discrete, physical entities beholden to well-documented laws of nature. And even though lots of theories of mind have been disproven, the fact that we cannot agree on the nature of intelligence for ourselves implies that any idea of what a constructed intelligence may be is, by definition, a metaphor for something else. Science can avail itself of the luxury of not-knowing, of being able to say, "We are fairly certain that we know this much but no more, and these theories may or may not help us to push farther, but they also may fall apart and we'll have to start over". Technology, on the other hand, must deliver a solution - something that works from end to end. In the case of AI, where models must be robust, predictive and productive, the designers of a constructed intelligence cannot say, "Well, we know this much and the rest happens without us understanding it." Your respect for the truth results in no product, and a lot of angry investors. So metaphor in this sense is not a philosophical luxury, it's how you're able to ship any code at all.
Where things get really interesting in this kind of a world is when the metaphors start getting good at producing results. So now we find ourselves in a very weird situation. There are competing metaphors out there in the computational wild: symbolic, expert, neural network systems, as well as others. Increasingly, hybrid systems are also appearing. What if some or even all of these approaches succeed in functioning 'intelligently'? I have to put the word in quotes here, because it's pretty clear that, without a mutually agreed-upon anchoring definition, we have ventured into some very murky waters. These waters are made all the more turbulent because technology's need to solve problems for us (or perhaps to also create them) will continue to push what we consider as viably or usefully 'intelligent'.
The fact is that no AI outfit or its investors will sit around waiting for the scientific community to settle on a model for cognition and then proceed to build products consistent with that model. The truth is nice, but there are (market) demands that need to be met now. If science can supply industry with signposts on how to build better technology, great. At the same time, if the product solves the clients' or users' problems then who cares if it's really intelligent or not? Recall the old adage: Nothing succeeds like success. The tricky bit is that, with enough such success, our very definition of what is intelligent may be on the verge of shifting. Next month I'll look at the implications of living in a world awash in these kinds of feedback loops.
Should Biologists be Guided by Beauty?
by Jalees Rehman
Lingulodinium polyedrum is a unicellular marine organism which belongs to the dinoflagellate group of algae. Its genome is among the largest found in any species on this planet, estimated to contain around 165 billion DNA base pairs – roughly fifty times larger than the human genome. Encased in magnificent polyhedral shells, these bioluminescent algae became an important organism for the study of biological rhythms. Each Lingulodinium polyedrum cell contains not one but at least two internal clocks which keep track of time by oscillating at a frequency of approximately 24 hours. Algae maintained in continuous light for weeks continue to emit a bluish-green glow at what they perceive as night-time and swim up to the water surface during day-time hours – despite the absence of any external time cues. When I began studying how nutrients affect the circadian rhythms of these algae as a student at the University of Munich, I marveled at the intricacy and beauty of these complex time-keeping mechanisms that had evolved over hundreds of millions of years.
Over the course of a quarter of a century, I have worked in a variety of biological fields, from these initial experiments in marine algae to how stem cells help build human blood vessels and how mitochondria in a cell fragment and reconnect as cells divide. Each project required its own set of research methods and techniques, each project came with its own failures and successes. But with each project, my sense of awe for the beauty of nature has grown. Evolution has bestowed this planet with such an amazing diversity of life-forms and biological mechanisms, allowing organisms to cope with the unique challenges that they face in their respective habitats. But it is only recently that I have become aware of the fact that my sense of biological beauty was a post hoc phenomenon: Beauty was what I perceived after reviewing the experimental findings; I was not guided by a quest for beauty while designing experiments. In fact, I would have been worried that such an approach might bias the design and interpretation of experiments. Might a desire for seeing Beauty in cell biology lead one to consciously or subconsciously discard results that might seem too messy?
I was prompted to revisit the role of Beauty in biology while reading a masterpiece of scientific writing, "Dreams of a Final Theory" by the Nobel laureate Steven Weinberg in which he describes how the search for Beauty has guided him and many fellow theoretical physicists to search for an ultimate theory of the fundamental forces of nature. Weinberg explains that it is quite difficult to precisely define what constitutes Beauty in physics but a physicist would nevertheless recognize it when she sees it.
One such key characteristic of a beautiful scientific theory is the simplicity of the underlying concepts. According to Weinberg, Einstein's theory of gravitation is described in fourteen equations whereas Newton's theory can be expressed in three. Despite the appearance of greater complexity in Einstein's theory, Weinberg finds it more beautiful than Newton's theory because the Einsteinian approach rests on one elegant central principle – the equivalence of gravitation and inertia. Weinberg's second characteristic for beautiful scientific theories is their inevitability. Every major aspect of the theory seems so perfect that it cannot be tweaked or improved on. Any attempt to significantly modify Einstein's theory of general relativity would lead to undermining its fundamental concepts, just like any attempts to move around parts of Raphael's Holy Family would weaken the whole painting.
Can similar principles be applied to biology? I realized that when I give examples of beauty in biology, I focus on the complexity and diversity of life, not its simplicity or inevitability. Perhaps this is due to the fact that Weinberg was describing the search of fundamental laws of physics, laws which would explain the basis of all matter and energy – our universe. As cell biologists, we work several orders of magnitude removed from these fundamental laws. Our building blocks are organic molecules such as proteins and sugars. We find little evidence of inevitability in the molecular pathways we study – cells have an extraordinary ability to adapt. Mutations in genes or derangement in molecular signaling can often be compensated by alternate cellular pathways.
This also points to a fundamental difference in our approaches to the world. Physicists searching for the fundamental laws of nature balance the development of fundamental theories with experimental research, whereas biology in its current form has primarily become an experimental discipline. The latest technological developments in DNA and RNA sequencing, genome editing, optogenetics and high resolution imaging are allowing us to amass unimaginable quantities of experimental data. In fact, the development of technologies often drives the design of experiments. The availability of a genetically engineered mouse model that allows us to track the fate of individual cells that express fluorescent proteins, for example, will give rise to numerous experiments to study cell fate in various disease models and organs. Much of the current biomedical research funding focuses on studying organisms that provide technical convenience, such as genetically engineered mice, or fulfill a societal goal, such as curing human disease.
Uncovering fundamental concepts in biology requires comparative studies across biology and substantial investments in research involving a plethora of other species. In 1990, the National Institutes of Health (NIH – the primary government funding source for biomedical research in the United States) designated a handful of species as model organisms to study human disease, including mice, rats, zebrafish and fruit flies. A recent analysis of the species studied in scientific publications showed that in 1960, roughly half the papers studied what would subsequently be classified as model organisms whereas the other half of papers studied additional species. By 2010, over 80% of the scientific papers were now being published on model organisms and only 20% were devoted to other species, thus marking a significant dwindling of broader research goals in biology. More importantly, even among the model organisms, there has been a clear culling of research priorities with a disproportionately large growth in funding and publications for studies using mice. Thousands of scientific papers are published every month on the cell signaling pathways and molecular biology in mouse and human cells whereas only a minuscule fraction of research resources are devoted to studying signaling pathways in algae.
The question of whether or not biologists should be guided by conceptual Beauty leads us to the even more pressing question of whether we need to broaden biological research. If we want to mirror the dizzying success of fundamental physics during the past century and similarly advance fundamental biology, then we need to substantially step up investments in fundamental biological research that is not constrained by medical goals.
Dietrich, M. R., Ankeny, R. A., & Chen, P. M. (2014). Publication trends in model organism research. Genetics, 198(3), 787-794.
Weinberg, S. (1992). Dreams of a Final Theory. Vintage.
Monday, April 18, 2016
Open Your Mouth, Stick Out Your Tongue, and Say "Five"
by Carol A. Westbrook
In case you have never seen one, a Press Ganey survey is a multi-page questionnaire in which you are asked to rate your experiences during a hospital or outpatient clinic visit, from 0 (bad) to 5 (best). The completed questionnaire is mailed to Press Ganey, which compiles and analyzes the data, and reports the results to the hospital or health care system that ordered the survey.
The survey asks questions like, "Did you have to wait long to see your doctor? Was the staff pleasant? Was the waiting room clean? Did your doctor take enough time to explain things to you? Did your doctor smile and shake your hand? Did the valet parker return your car promptly?" It does not, however, ask questions that the health care organization does not want to hear, for example, "Was your doctor given enough time with you? Did you actually get to see the doctor instead of the nurse practitioner?" Press Ganey has been called an Angie's List for clinics and hospitals.
That is why administrators love Press Ganey surveys--because they know that good scores will bring in more business. They also have the side benefit of providing an outlet for unsatisfied or angry patients who otherwise would be pounding on their door. Giving a doctor a "0" makes a disgruntled customer feel that he is addressing a problem, without the manager ever having to do anything about it!
Most importantly, though, patient satisfaction scores provide "objective" data that can be used to manipulate physicians by lowering their salaries or even firing them if they do not maintain a high score.
Patients are frequently surprised to learn that the salaries of their doctors are tied to their survey scores, yet this is the reality for almost two-thirds of physicians employed by health care groups. For some doctors, 10% to 20% or more of their salary is at risk. Fortunately for me, when I was an employed physician it was only about 1%.
Why do administrators push physicians to drive up their patient satisfaction scores? For two simple reasons: higher scores bring in more business, and they also result in higher payments for that business, due to the "Pay for Performance" mandate.
To understand the "Pay for Performance" mandate, it helps to look at the history of the Press Ganey organization. The survey was created in 1985, when an anthropologist and a sociologist were asked to provide a tool for hospitals to determine whether patients were satisfied with the care that they received. The business expanded rapidly after 2002, when CMS, the Federal agency that administers Medicare, announced a program to survey patients and require public reporting of the results. This was the result of a Federal mandate to empower patients to make more informed decisions about health care by improving accountability and public disclosure. In 2003 Press Ganey went private for $100 million and was sold four years later for $673 million. Today, it typically reports over $200 million in yearly sales.
Press Ganey's business got another boost recently with the ObamaCare "Pay for Performance" initiative. Hospitals that perform poorly on quality measures forfeit 1% of their Medicare payments, a number that will double in 2017, putting some $2 billion at risk. Thirty percent of that determination will be based on hospital rankings from mandated patient surveys. Because so much is at stake, administrators push their physicians to generate higher scores. A Press Ganey survey for a large health care system such as the Cleveland Clinic could easily cost a half million dollars. Who pays for this? You, the patient. It is yet one more reason that a night in a hospital room costs more than a stay in a luxury New York hotel.
Yet it is hard to imagine that Press Ganey truly addresses the "pay for performance" mandate. A patient survey is not capable of measuring a doctor's competence or performance. What it measures, instead, is the patient's satisfaction with the visit to the doctor. And there is a serious downside to coercing a physician to accede to patients' desires, since those desires may be medically inappropriate or even harmful.
A recent study published by researchers at UC Davis (1), using data from nearly 52,000 adults, found that the most "satisfied" patients spent 9% more on health care and prescription drugs, were more likely to be admitted to hospital, and had higher death rates. It has been speculated that these patients were more satisfied because they were given what they requested--including extra tests and medications, which may lead to more harm or complications. Dr. Aleksandra Zgierska, an addiction specialist, believes that the epidemic rise in narcotic addiction is partly due to physicians' over-prescribing pain medication in order to improve their patient satisfaction scores. In an article for the Journal of the American Medical Association in 2012 (2) she wrote, "Patients can report dissatisfaction based on real or perceived problems including whether a clinician did or did not prescribe a desired medication. In some institutions, the first question on the patient satisfaction survey queries the extent of agreement with the statement: 'I was satisfied with the way my doctor treated my pain.'"
Pressuring physicians to drive up patient satisfaction scores in the name of better care may thus have the opposite effect of leading to worse care, while further driving up costs. In my opinion it would make more sense if the hospital administrators--who control scheduling, cleanliness, and ambiance--had their salaries tied to Press Ganey scores, while physicians were left to establish their own performance measures. There are a number of professional organizations that provide this service, such as The Joint Commission on Accreditation of Healthcare Organizations, whose surveys have led to improvements such as decreases in hospital-acquired infections, fewer falls, and fewer unnecessary blood transfusions. But Joint Commission surveys are performed by health care professionals rather than patients, while Press Ganey surveys are performed by a multi-million dollar company with great lobbying power. Perhaps it is time to rethink the "Pay for Performance" mandate.
I was inclined to tell him to throw it in the trash, given my dislike of patient satisfaction surveys. But then I had another thought.
"Answer every question with a 5," I suggested.
"But it's a survey. I can't give all fives!" he protested. Like many adults raised in our school system, he couldn't give all 5's any more than he could answer all his SAT questions with the same letter answer. Or could he?
"Why not?" I asked, "You like your doctor, and there was nothing negative about your experience."
He did as I suggested.
If we patients took a stand and agreed to score every survey item with a "5," then the patient satisfaction survey, and its unnecessary added costs, would become meaningless. With luck, it would be replaced by a system that truly measures a doctor's competence and performance, rather than office ambiance. This is unlikely to happen, but I can dream, can't I?
Monday, March 28, 2016
Nostalgia is a Muse
by Jalees Rehman
"Let others praise ancient times. I am glad that I was born in these."
- Ovid in "Ars Amatoria"
When I struggle with scientist's block, I play 1980s music with the hope that the music will inspire me. This blast from the past often works for me. After listening to the songs, I can sometimes perceive patterns between our various pieces of cell biology and molecular biology data that had previously eluded me and design new biological experiments. But I have to admit that I have never performed the proper music control studies. Before attributing inspirational power to songs such as "99 Luftballons", "Bruttosozialprodukt" or "Billie Jean", I ought to spend equal time listening to music from other decades and then compare the impact of these listening sessions. I have always assumed that there is nothing intrinsically superior or inspirational about these songs, they simply evoke memories of my childhood. Eating comfort foods or seeing images of Munich and Lagos that remind me of my childhood also seem to work their muse magic.
My personal interpretation has been that indulging nostalgia somehow liberates us from everyday issues and worries – some trivial, some more burdensome – which in turn allows us to approach our world with a fresh, creative perspective. It is difficult to make such sweeping general statements based on my own anecdotal experiences, and I have always felt a bit of apprehension about discussing this with others. My nostalgia makes me feel like an old fogey who is stuck in an ossified past. Nostalgia does not have a good reputation. The German expression "Früher war alles besser!" ("Back then, everything used to be better!") is used in contemporary culture to mock those who always speak of the romanticized past with whimsical fondness. In fact, the term nostalgia was coined in 1688 by the Swiss medical student Johannes Hofer. In his dissertation "Dissertatio Medica de Nostalgia oder Heimweh", Hofer used nostalgia as an equivalent of the German word Heimweh ("home-ache"), combining the Greek words nostos (homecoming) and algos (ache or pain), to describe a medical illness characterized by a "melancholy that originates from the desire to return to one's homeland". This view of nostalgia as an illness did not change much during the subsequent centuries, in which it was viewed as a neurological or psychiatric disorder.
This view has been challenged by the University of Southampton researchers Constantine Sedikides and Tim Wildschut, who have spent the past decade studying the benefits of nostalgia. Not only do they dispute its disease status, they have conducted numerous studies which suggest that nostalgia can make us more creative, open-minded and charitable. The definition of nostalgia used by Sedikides and Wildschut, a "sentimental longing for one's past", is based on contemporary usage by laypersons across many cultures. This time-based definition of nostalgia also represents a departure from its original geographical or cultural coinage by Hofer, who viewed it as a longing for one's homeland rather than one's personal past.
In one of their most recent experiments, Sedikides and Wildschut investigated nostalgia as a "mnemonic muse". The researchers first evoked nostalgic memories in participants with the following prompt:
"Please think of a nostalgic event in your life. Specifically, try to think of a past event that makes you feel most nostalgic. Bring this nostalgic experience to mind. Immerse yourself in the nostalgic experience. How does it make you feel?"
Importantly, each experiment also involved a control group of participants who were given a very different prompt:
"Please bring to mind an ordinary event in your life. Specifically, try to think of a past event that is ordinary. Bring this ordinary experience to mind. Immerse yourself in the ordinary experience. How does it make you feel?"
This allowed the researchers to compare whether specifically activating nostalgia had a distinct effect from merely activating a general memory.
After these interventions, participants in the nostalgia group and in the control group were asked to write a short story involving a princess, a cat and a race car. In an additional experiment, participants finished a story starting with the sentence: "One cold winter evening, a man and a woman were alarmed by a sound coming from a nearby house". After 30 minutes of writing, the stories were collected and scored for creativity by independent evaluators who had no knowledge of the experimental design or of the group that the participants belonged to. Participants who had experienced more nostalgia wrote more creative prose!
This is just one example of the dozens of studies conducted by Sedikides and Wildschut which show the benefits of nostalgia, such as providing inspiration, increasing trust towards outsiders and enhancing the willingness to donate to charities. What is the underlying mechanism for these benefits? Sedikides and Wildschut believe that our nostalgic memories provide a sense of belonging and support, which in turn helps our self-confidence and self-esteem. The comfort of our past gives us strength for our future.
Does this mean that this longing for the past is always a good thing? Not every form of nostalgia centers on personal childhood memories. For example, there is a form of ideological nostalgia expressed by groups who feel disenfranchised by recent progress and long for days of former power and privilege. The South African sociologists van der Waal and Robins recently described the popularity of a song about the Anglo-Boer war among white Afrikaans-speakers in the post-Apartheid era, which may have been rooted in a nostalgic affirmation of white Afrikaner identity. It is conceivable that similar forms of ideological nostalgia could be found in other cultures and states where privileged classes and races are losing ground to the increased empowerment of the general population.
It is important that we distinguish between these two forms of nostalgia – personal childhood nostalgia and ideological group nostalgia – before "rehabilitating" nostalgia's reputation. The research by Sedikides and Wildschut clearly demonstrates that nostalgia can be a powerful tool to inspire us, but we have to ensure that it is not misused as an ideological or political tool to manipulate us.
1. de Diego, F. F., & Ots, C. V. (2014). Nostalgia: a conceptual history. History of Psychiatry, 25(4), 404-411.
2. Sedikides, C., & Wildschut, T. (2016). Past forward: Nostalgia as a motivational force. Trends in Cognitive Sciences (published online Feb 18, 2016).
3. van Tilburg, W. A., Sedikides, C., & Wildschut, T. (2015). The mnemonic muse: Nostalgia fosters creativity through openness to experience. Journal of Experimental Social Psychology, 59, 1-7.
4. van der Waal, K., & Robins, S. (2011). 'De la Rey' and the revival of 'Boer heritage': Nostalgia in the post-apartheid Afrikaner culture industry. Journal of Southern African Studies, 37(4), 763-779.
Monday, February 29, 2016
Shame on You, Shame on Me: Shame as an Evolutionary Adaptation
by Jalees Rehman
Can shame be good for you? We often think of shame as a shackling emotion which thwarts our individuality and creativity. A sense of shame could prevent us from choosing a partner we truly love, speaking out against societal traditions which propagate injustice or pursuing a profession that is deemed unworthy by our peers. But if shame is so detrimental, why did we evolve with this emotion? A team of researchers led by Daniel Sznycer from the Center for Evolutionary Psychology at the University of California, Santa Barbara recently published a study in the Proceedings of the National Academy of Sciences which suggests that shame is an important evolutionary adaptation. According to their research which was conducted in the United States, Israel and India, the sense of shame helps humans avoid engaging in acts that could lead to them being devalued and ostracized by their community.
For their first experiment, the researchers enrolled participants in the USA (118 participants completed the study; mean age of 36; 53% female) and India (155 participants completed the study; mean age of 31; 38% female) using the online Amazon Mechanical Turk crowdsourcing platform, as well as 165 participants from a university in Israel (mean age of 23; 81% female). The participants were randomly assigned to two groups and presented with 29 scenarios: participants in the "shame group" were asked to rate how much shame they would experience if they lived through any given scenario, whereas participants in the "audience group" were asked how negatively they would rate a third-party person of the same age and gender in an analogous scenario.
Here is a specific scenario to illustrate the study design:
Male participants in the "shame group" were asked to rate "At the wedding of an acquaintance, you are discovered cheating on your wife with a food server" on a scale ranging from 1 (no shame at all) to 7 (a lot of shame).
Female participants in the "shame group" were asked to rate "At the wedding of an acquaintance, you are discovered cheating on your husband with a food server" on a scale ranging from 1 (no shame at all) to 7 (a lot of shame).
Male participants in the "audience group", on the other hand, were asked to rate "At the wedding of an acquaintance, he is discovered cheating on his wife with a food server" on a scale ranging from 1 (I wouldn't view him negatively at all) to 7 (I'd view him very negatively).
Female participants in the "audience group" rated "At the wedding of an acquaintance, she is discovered cheating on her husband with a food server" on a scale ranging from 1 (I wouldn't view her negatively at all) to 7 (I'd view her very negatively).
To give you a sense of the breadth of scenarios that the researchers used, here are some more examples:
You stole goods from a shop owned by your neighbor.
You cannot support your children economically.
You get into a fight in front of everybody and your opponent completely dominates you with punch after punch until you're knocked out.
You receive welfare money from the government because you cannot financially support your family.
You are not generous with others.
For each of the 29 scenarios, the researchers created gender-specific "shame" and "audience" versions. The "audience group" reveals how we rate the bad behavior of others (devaluation), whereas the "shame group" provides insight into how much shame we feel if we engage in that same behavior. By ensuring that each person participated in only one of the two groups, the researchers were able to get two independent scores – shame versus devaluation – for each scenario.
The key finding of this experiment was that the third-party devaluation scores were highly correlated with the shame scores in all three countries. For the wedding infidelity scenario, for example, the mean "shame scores" indicated that people in all three countries would have experienced a lot of shame, and the devaluation scores from the third-party "audience group" suggested that people viewed the behavior very negatively.
For nearly all the scenarios, the researchers found a surprisingly strong correlation between devaluation and shame and they also found that the correlation was similarly strong in each of the surveyed countries.
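Statistically, this kind of finding amounts to a correlation computed across per-scenario mean scores from the two independent groups. Here is a minimal sketch in Python of what that calculation looks like; the scores below are invented for illustration and are not data from the study.

```python
# Illustrative sketch: each scenario yields a mean "shame" score (from the
# shame group) and a mean "devaluation" score (from the audience group).
# The key result is a strong Pearson correlation between the two lists.
# All numbers here are hypothetical, invented for illustration only.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-scenario mean scores on the 1-7 scale, one pair per scenario.
shame =       [6.2, 5.8, 3.1, 4.4, 2.5, 5.0]
devaluation = [6.0, 5.5, 3.4, 4.1, 2.2, 4.8]

r = pearson_r(shame, devaluation)
print(round(r, 2))  # → 0.99: shame tracks devaluation almost perfectly
```

In the actual study this comparison was made across all 29 scenarios and separately within each country; the sketch only shows why two independently collected sets of ratings can still be compared scenario by scenario.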
The researchers then asked whether this correlation between personal shame and third-party devaluation was unique to the shame emotion, or whether other negative emotions such as anxiety or sadness would correlate equally well with devaluation. This experiment was only conducted with the participants in the USA and India. The researchers found that even though the fictitious scenarios elicited some degree of anxiety and sadness in the participants, the levels of anxiety or sadness were not significantly correlated with the extent of devaluation. The researchers interpreted these results as suggesting that there is something special about shame, because it tracks so closely with how bad behavior is perceived by others whereas sadness and anxiety do not.
How do these findings inform our view of the evolutionary role of shame? The researchers suggest that instead of being an "ugly" emotion, shame is an excellent predictor of how our peers would view our behaviors and thus deters us from making bad choices that could undermine our relationships with members of our community. The strong statistical correlations between shame and negative evaluation of the behaviors, as well as the universality of this link in the three countries, indeed support the researchers' conclusions. However, there are also some important limitations to these studies. As with many evolutionary psychology studies, it is not easy to ascribe a direct cause-effect relationship based on a correlation. Does devaluation lead to evolving a shame mechanism, or is it perhaps the other way around: does a sense of shame lead to a societal devaluation of certain behaviors such as dishonesty? It is also possible that the participants in the audience group responded with the concept of "shame" in the back of their minds even though they were not asked to directly comment on how shameful the act was. Perhaps their third-party assessments of how bad the behavior was were clouded by their own perceptions of how shameful the behavior would be if they themselves had engaged in it.
Another limitation of the study is that the participants represented a young subgroup of society. The mean ages of 23 (Israel), 31 (India) and 36 (USA) as well as the use of an online Amazon Mechanical Turk questionnaire means that the study results predominantly reflect the views of Millennials. The similarities of the shame and devaluation scores in three distinct cultures are among the most remarkable findings of these studies. However, perhaps they are more reflective of a global convergence of values among the Millennial generation than an underlying evolutionary conservation of an adaptive mechanism.
These limitations should not detract from the provocative questions raised by the studies. They force us to rethink how we view shame. Like all adaptive defense mechanisms, shame could go awry. Our immune function, for example, is an essential defense mechanism but an unfettered immune response can destroy the very body it is trying to protect. Perhaps shame acts in a similar fashion. A certain level of shame could help us function in society by promoting certain moral values such as justice, honesty or generosity. But an excess of shame may become a maladaptive prison which compromises our individuality.
Daniel Sznycer, John Tooby, Leda Cosmides, Roni Porat, Shaul Shalvi, and Eran Halperin. (2016). "Shame closely tracks the threat of devaluation by others, even across cultures" Proceedings of the National Academy of Sciences
Image Credit: The image of the mask was obtained via Wellcome Images.
Monday, February 08, 2016
Leadership lessons from The Walking Dead - (Donald Trump, take note!)
by Sarah Firisen
We've all known great leaders: people we'd walk through fire for. But what makes them such great leaders? As the Presidential primary season gets under way, perhaps it's worth considering what leadership really is. Because despite the inevitable primary bickering over whether a businessman, a senator or a governor makes a more effective President, what we're really looking for is leadership.
Are great leaders born or can these traits be developed? Or is it a combination of the two? People are born with certain natural abilities, but per Malcolm Gladwell's 10,000-hour rule in Outliers, it takes about 10,000 hours of practice to achieve mastery (this probably goes for most things). So does this mean that with enough conscious effort, anyone can be a great leader? I do think that motivation has a part to play. The key word here is "great": someone who wants to lead for reasons beyond personal aggrandizement, beyond pure power for power's sake. Maybe a person with those core attributes can work towards achieving mastery.
Having worked in the leadership development field for a number of years now, I would say that "leaders" can be divided into various camps. There's the "people love me and would follow me to the end of the earth" guy; if you're that sure it's true, it probably isn't. There's the "I'm tough but fair, and people respect that" type; yeah, I bet they don't. There's the total asshole who really doesn't care and thinks that as long as he/she is producing results, his/her leadership won't care how dissatisfied the staff are. Maybe he/she is right, in the short term. But how can that be anything but a short-term strategy? You need people with expertise to get results, and people with expertise always have choices. You can only get away with being an asshole for so long. Finally, there's the "leader" who says: "Yes, I'm a total asshole to work for. Don't care. I pay my people so well that they'll put up with anything."
Here’s the problem, there’s copious evidence that, if that ever worked, it’s working less and less well with millennials. They’re not motivated by the same things we were. They want work/life balance and work that provides them with a sense of purpose. Their main driver isn’t money or status. So even if this particular asshole dragged his people behind him in the past, odds are that’s an increasingly losing tactic.
So what do we all look for in a leader? And if you aspire to be a great leader, what traits should you be looking to develop in your 10,000 hours?
Two books that have had an impact on my thinking are "Why Should Anyone Be Led by You?" by Rob Goffee and Gareth Jones, and "Crucibles of Leadership" by Robert J. Thomas. Goffee and Jones, in both their book and the HBR article that preceded it, really challenge the notion that leadership is a right rather than a duty. What makes a person worthy of being followed? What are some of the requisite capabilities and how can those be developed? Anyone can be managed by you, but not anyone can be led by you.
In "Crucibles of Leadership" and its related HBR article, Robert J. Thomas works from the thesis that most great leaders, maybe all great leaders, have gone through a major crucible experience that has changed them: "one of the most reliable indicators and predictors of true leadership is an individual's ability to find meaning in negative events and to learn from even the most trying circumstances". This experience helped mold them into their best leadership selves: "the skills required to conquer adversity and emerge stronger and more committed than ever are the same ones that make for extraordinary leaders." We all experience major and often traumatic life events; what is different about some people is how they navigate their way through the crisis, then learn and grow from the experience. According to Thomas, "Great leaders don't see themselves as the victims of their circumstances, but instead accept their reality. Their crucible experience forges them into extraordinary leaders."
So that's some of the literature. But now let's look at an application of some of these theories. One of my favorite TV shows these days is The Walking Dead. Every time I try to get my boyfriend to watch it he gives me this look and says rather dismissively, "I don't watch shows about zombies". What I can't get him to see is that it's not a show about zombies. I mean, on one level it clearly is; the premise of the show is that a virus has afflicted humanity, turning the dead into zombies, or "walkers", and a bite will turn a living person into one. There are zombies everywhere. But on another level, it's not really a show about zombies, it's a show about people. About how people cope when their lives are turned upside down. When civilization as they've known it is in ruins. When lawlessness reigns and survival at all costs is a valid life choice.
Rick Grimes, our hero, was just a small town sheriff's deputy in rural Georgia with a wife and son, an everyman. A couple of episodes in, Rick is already emerging as a natural leader. By season 6, he’s developed into a great leader who people will follow into hell. Why? In all likelihood, if the virus hadn’t broken out, Rick would never have grown into the leader that he is by the current point in the series. He would have just been that guy. That good guy who went to work every day, went to his son’s football practice and tried to live the best life he could.
Rick has various crucible moments, in fact, a case could be made that every episode piles on a new one. But two of his early major ones are when he has to kill his best friend Shane for the good of the group of survivors and when his wife Lori dies. But everyone in The Walking Dead has lost people. Usually many people. Everyone has faced death and committed terrible acts they never would have thought themselves capable of. These experiences harden some, drive others mad. What’s different about Rick?
Well, to go back to "Why should anyone be led by you?", Rick exhibits authentic, whole leadership. He has great self-awareness; his values are clear to himself and to the people around him. He knows his blind spots, his strengths and his weaknesses, and he builds a team of people around him to compensate for his weaknesses rather than denying them. Which means he knows and acknowledges the strengths of his people.
Rick has a clarity of vision for himself and for his group. That vision is clearly and firmly articulated: his group knows he ALWAYS has their back and will never leave anyone behind. In this brutal, lawless world, that clarity of purpose binds his group together and to him.
Rick's not the "best man" in the group; that honor has been shared by Glenn and Hershel. Glenn has never killed a living person. Hershel was, until his death, the moral compass of the group. But Rick, while he has killed, has a very strict and clear moral code. This is encapsulated in the three questions he asks potential new group members: How many walkers have you killed? How many people have you killed? Why? The question isn't whether you have killed, but why you have killed. Asking these questions quickly gives Rick a sense of the choices the person has made and insight into their beliefs and morality.
Rick's not the smartest person in the group and he's definitely not the best survivor; that's Daryl. There are better, tougher people. But there's never been any real challenge to Rick's leadership, because there's more to true leadership than being the best at everything (Trump might reflect on this fact).
It’s often, maybe usually, the case that the most fearsome encounters the group has had have been with other groups of survivors. And these groups always have a leader, because most people need to follow someone. And in all cases, the seeds of the group’s destruction can be found in the flaws of the leader.
The charismatic leader of Woodbury, Georgia, The Governor is a man who, like Rick Grimes, led a wholly unremarkable life before the outbreak. He has had his own potential crucibles: his daughter was bitten and became a walker. Losing his daughter made him cold, severe and paranoid. The Governor reveals himself to be a brutal, irrational leader.
While initially, Woodbury seems to be a sanctuary, it quickly becomes clear that The Governor deals with potential threats to his community by executing most newcomers. Finally, after leading his group into a totally unnecessary and unsuccessful ambush, the Governor turns on his own people, slaughtering some, abandoning the rest.
Gareth is the leader of Terminus, seemingly the ultimate sanctuary, luring people in with signs posted for miles around: "Sanctuary for all. Community for all. Those who arrive survive." But it is actually a community of cannibals whose real motto is "You're the butcher or you're the cattle."
Gareth claims that there was a time when Terminus was a real sanctuary and he was a good, generous man who was willing to help others to survive. However, after a brutal attack on the community, his crucible, he became a cunning, brutal, cold blooded mass murderer.
Again, he's another charismatic, intelligent man who claims to be just taking extreme measures to stay alive and to keep his group alive. But in doing so, he's lost his humanity, and any capacity for empathy he ever had.
Dawn Lerner is the leader of a group of police officers residing at Grady Memorial Hospital. At least initially, she seems to have good intentions as she attempts to maintain peace in the brutalized and corrupt system she runs. She's strong, pragmatic, focused but stern. But she's revealed to be the essence of corrupt authority. Whatever safety she offers always comes with a price, and she believes that the ends always justify the means, as long as they're her ends. Any goodness is a façade masking an obsessive and violent need for control.
Deanna, a congresswoman before the outbreak, is the leader of Alexandria, a walled-off community that has been spared much engagement with walkers, for reasons that later become horrifyingly clear. Deanna is a caring, compassionate, insightful woman who is committed to her community.
She shares many of Rick’s best traits and they share an understanding and mutual respect from the beginning. But she encourages her community in the fantasy that they’re safe and resents any attempts by Rick to make them face reality. In the end, this is her community’s downfall when they’re utterly unprepared to face the horrors that inevitably finally catch up with them.
All these “leaders” had some of the necessary traits for great leadership:
They engage others in a shared meaning, even if it is a deeply morally flawed shared meaning.
They all have distinctive and compelling voices. None of them are lacking in charisma. Deanna has a very strong sense of integrity and values and Dawn could be said to have adaptive capacity.
But none of them have all of the traits of great leadership. Only Rick does. For him, it's not just survival at all costs; it's also about helping his people keep their humanity intact. Every other "leader" in this brutal world chooses one or the other; only Rick works to keep them in balance, however challenging that is.
But every leader should strive for continuous improvement. Rick's default mode up to this point has been reactive to the constant dangers around him. And this makes a lot of sense; there have been a lot of dangers, and he's scared to let his guard down, both personally and for his people. But at some point, he has to start being more forward-looking and strategic.
What does life look like if and when the danger lessens? Life in Alexandria showed that Rick isn’t comfortable standing down. When I was a kid, I was obsessed with the legend of Robin Hood. But I was also intrigued by the notion of what happens to Robin Hood and his Merry Men once good King Richard is back on the throne. Once you’ve spent too much time in reactive, hero mode, it’s often hard to adjust to peace and security.
Perhaps the answer is that Rick's not the leader for a future state of the group. Being the leader in a crisis is not the same as being the leader for stable growth. And that's often recognized by the leader and the group around them. I once worked for the greatest guy in the world, who acknowledged that he was great at starting companies, not so great at leading them once they grew to a certain size and stability. It could be seen as the ultimate sign of great leadership to have that level of self-awareness and knowledge.
But if Rick is to be that future leader, he has to develop into someone who doesn’t just react to the disruption around him, but can lead his group through disruption to a more sustainable future.
Monday, January 04, 2016
We Have Become Exhausted Slaves in a Culture of Positivity
by Jalees Rehman
We live in an era of exhaustion and fatigue, caused by an incessant compulsion to perform. This is one of the central tenets of the book "Müdigkeitsgesellschaft" (translatable as "The Fatigue Society" or "The Tiredness Society") by the German philosopher Byung-Chul Han. Han is a professor at the Berlin Universität der Künste (University of the Arts) and one of the most widely read contemporary philosophers in Germany. He was born in Seoul, where he studied metallurgy before he moved to Germany in the 1980s to pursue a career in philosophy. His doctoral thesis and some of his initial work in the 1990s focused on Heidegger, but during the past decade Han has written about a broad range of topics regarding contemporary culture and society. "Müdigkeitsgesellschaft" was first published in 2010 and helped him attain a bit of rock-star status in Germany despite his desire to avoid too much public attention – unlike some of his celebrity philosopher colleagues.
The book starts out with two biomedical metaphors to describe the 20th century and the emerging 21st century. For Han, the 20th century was an "immunological" era. He uses this expression because infections with viruses and bacteria, which provoked immune responses, were among the leading causes of disease and death, and because the emergence of vaccinations and antibiotics helped conquer these threats. He then extends the "immunological" metaphor to political and societal events. Just as the immune system recognizes bacteria and viruses as "foreign" agents that need to be eliminated to protect the "self", the World Wars and the Cold War were also characterized by a clear delineation of "Us" versus "Them". The 21st century, on the other hand, is a "neuronal" era characterized by neuropsychiatric diseases such as depression, attention deficit hyperactivity disorder (ADHD), burnout syndrome and borderline personality disorder. Unlike the diseases of the immunological era, where there was a clear distinction between the self and the foreign enemy microbes that needed to be eliminated, these "neuronal" diseases make it difficult to assign an enemy status. Who are the "enemies" in burnout syndrome or depression? Our environment? Our employers? Our own life decisions and choices? Are we at war with ourselves in these "neuronal" conditions? According to Han, this biomedical shift in diseases is mirrored by a political shift in a globalized world where it becomes increasingly difficult to define the "self" and the "foreign". We may try to assign "good guy" and "bad guy" status to navigate our 21st century, but we also realize that we are so interconnected that these 20th century approaches are no longer applicable.
The cell biologist in me cringed when I read Han's immunological and neuronal metaphors. Yes, it is true that successfully combatting infectious diseases constituted major biomedical victories in the 20th century, but these battles are far from over. The recent Ebola virus scare, the persistence of drug-resistant malaria, the under-treatment of HIV and the emergence of multi-drug resistant bacteria all indicate that immunology and infectious disease will play central roles in the biomedical enterprise of the 21st century. The view that the immune system clearly distinguishes between "self" and "foreign" is also overly simplistic, because it ignores autoimmune diseases, many of which are on the rise and for which we still have very limited treatment options; they are immunological examples of the "self" destroying itself. Even though I agree that neuroscience will likely be a focus of biomedical research, it seems like an odd choice to select a handful of psychiatric illnesses as representing the 21st century while ignoring major neuronal disorders such as Alzheimer's dementia, stroke or Parkinson's disease. He also conflates specific psychiatric illnesses with the generalized increase in perceived fatigue and exhaustion.
Once we move past these ill-chosen biomedical examples, Han's ideas become quite fascinating. He suggests that the reason why we so often feel exhausted and fatigued is because we are surrounded by a culture of positivity. At work, watching TV at home or surfing the web, we are inundated by not-so-subtle messages of what we can do. Han quotes the example of the "Yes We Can" slogan from the Obama campaign. "Yes We Can" exudes positivity by suggesting that all we need to do is try harder and that there may be no limits to what we could achieve. The same applies to the Nike "Just Do It" slogan and the thousands of self-help books published each year which reinforce the imperative of positive thinking and positive actions.
Here is the crux of Han's thesis. "Yes We Can" sounds like an empowering slogan, indicating our freedom and limitless potential. But according to Han, this is an illusory freedom, because the message enclosed within "Yes We Can" is "Yes We Should". Instead of living in a Disziplinargesellschaft (disciplinary society) of the past, where our behavior was clearly regulated by societal prohibitions and commandments, we now live in a Leistungsgesellschaft (achievement society) in which we voluntarily succumb to the pressure of achieving. The Leistungsgesellschaft is no less restrictive than the Disziplinargesellschaft. We are no longer subject to exogenous prohibitions, but we have internalized the mandates of achievement, always striving to do more. We have become slaves to the culture of positivity, subjugated by the imperative "Yes, We Should". Instead of carefully contemplating whether or not to pursue a goal, the mere knowledge that we could achieve it forces us to strive towards that goal. Buying into the "Yes We Can" culture chains us to a life of self-exploitation, and we are blinded by passion and determination until we collapse. Han uses the sad German alliteration "Erschöpfung, Ermüdung und Erstickung" ("exhaustion, fatigue and suffocation") to describe the impact that an excess of positivity has once we forgo our ability to say "No!" to the demands of the achievement society. We keep on going until our minds and bodies shut down, and this is why we live in a continuous state of exhaustion and fatigue. Han does not view multitasking as a sign of civilizational progress. Multitasking is an indicator of regression, because it results in a broad but rather superficial state of attention and thus prevents true contemplation.
It is quite easy for us to relate to Han's ideas at our workplace. Employees with a "can-do" attitude are praised but you will rarely see a plaque awarded to commemorate an employee's "can-contemplate" attitude. In an achievement society, employers no longer have to exploit us because we willingly take on more and more tasks to prove our own self-worth.
While reading Han's book, I was reminded of a passage in Bertrand Russell's essay "In Praise of Idleness" in which he extols the virtues of reducing our workload to just four hours a day:
In a world where no one is compelled to work more than four hours a day, every person possessed of scientific curiosity will be able to indulge it, and every painter will be able to paint without starving, however excellent his pictures may be. Young writers will not be obliged to draw attention to themselves by sensational pot-boilers, with a view to acquiring the economic independence needed for monumental works, for which, when the time at last comes, they will have lost the taste and capacity. Men who, in their professional work, have become interested in some phase of economics or government, will be able to develop their ideas without the academic detachment that makes the work of university economists often seem lacking in reality. Medical men will have the time to learn about the progress of medicine, teachers will not be exasperatedly struggling to teach by routine methods things which they learnt in their youth, which may, in the interval, have been proved to be untrue.
Above all, there will be happiness and joy of life, instead of frayed nerves, weariness, and dyspepsia. The work exacted will be enough to make leisure delightful, but not enough to produce exhaustion. Since men will not be tired in their spare time, they will not demand only such amusements as are passive and vapid. At least one per cent will probably devote the time not spent in professional work to pursuits of some public importance, and, since they will not depend upon these pursuits for their livelihood, their originality will be unhampered, and there will be no need to conform to the standards set by elderly pundits. But it is not only in these exceptional cases that the advantages of leisure will appear. Ordinary men and women, having the opportunity of a happy life, will become more kindly and less persecuting and less inclined to view others with suspicion.
While Russell's essay proposes a reduction of work hours as a solution, Han's critique of the achievement society and its impact on generalized fatigue and malaise is not limited to the workplace. By accepting the mandate of continuous achievement and hyperactivity, we apply this approach even to our leisure time. Whether it is counting the steps we walk with our fitness activity trackers or competitively racking up museum visits as tourists, our obsession with achievement permeates all aspects of our lives. Is there a way out of this vicious cycle of excess positivity and persistent exhaustion? We need to be mindful of our right to refuse. Instead of piling on tasks for ourselves during work and leisure, we need to recognize the value and strength of saying "No". Han introduces the concept of "heilende Müdigkeit" (healing tiredness), suggesting that there is a form of tiredness that we should welcome because it is an opportunity for rest and regeneration. Weekend days are often viewed as days reserved for chores and leisure tasks that we are unable to pursue during regular workdays. By resurrecting the weekend as a time for actual rest, idleness and contemplation, we can escape from the cycle of exhaustion. We have to learn not-doing in a world obsessed with doing.
Note: Müdigkeitsgesellschaft was translated into English in 2015 and is available as "The Burnout Society" by Stanford University Press.
Monday, December 07, 2015
The Dire State of Science in the Muslim World
by Jalees Rehman
Universities and the scientific infrastructures in Muslim-majority countries need to undergo radical reforms if they want to avoid falling by the wayside in a world characterized by major scientific and technological innovations. This is the conclusion reached by Nidhal Guessoum and Athar Osama in their recent commentary "Institutions: Revive universities of the Muslim world", published in the scientific journal Nature. The physics and astronomy professor Guessoum (American University of Sharjah, United Arab Emirates) and Osama, who is the founder of the Muslim World Science Initiative, use the commentary to summarize the key findings of the report "Science at Universities of the Muslim World" (PDF), which was released in October 2015 by a task force of policymakers, academic vice-chancellors, deans, professors and science communicators. This report is one of the most comprehensive analyses of the state of scientific education and research in the 57 countries with a Muslim-majority population, which are members of the Organisation of Islamic Cooperation (OIC).
Here are some of the key findings:
1. Lower scientific productivity in the Muslim world: The 57 Muslim-majority countries constitute 25% of the world's population, yet they only generate 6% of the world's scientific publications and 1.6% of the world's patents.
2. Lower scientific impact of papers published in the OIC countries: Not only are Muslim-majority countries severely under-represented in terms of the number of publications, but the papers which do get published are also cited far less than papers stemming from non-Muslim countries. One illustrative example is that of Iran and Switzerland. In the 2014 SCImago ranking of publications by country, Iran was the highest-ranked Muslim-majority country with nearly 40,000 publications, just slightly ahead of Switzerland with 38,000 publications - even though Iran's population of 77 million is nearly ten times larger than that of Switzerland. However, the average Swiss publication was more than twice as likely to garner a citation from scientific colleagues as an Iranian publication, indicating that the actual scientific impact of research in Switzerland was far greater than that of Iran.
To correct for economic differences between countries that may account for the quality or impact of the scientific work, the analysis also compared selected OIC countries to matched non-Muslim countries with similar per capita Gross Domestic Product (GDP) values (PDF). The per capita GDP in 2010 was $10,136 for Turkey, $8,754 for Malaysia and only $7,390 for South Africa. However, South Africa still outperformed both Turkey and Malaysia in terms of average citations per scientific paper in the years 2006-2015 (Turkey: 5.6; Malaysia: 5.0; South Africa: 9.7).
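The per-capita gap behind these rankings is easy to make concrete. The sketch below simply re-runs the arithmetic with the approximate figures quoted above (2014 SCImago publication counts and rough populations); it is illustrative back-of-the-envelope math, not a reanalysis of the report's data.

```python
# Back-of-the-envelope arithmetic using the approximate figures quoted above
# (2014 SCImago publication counts; populations in millions are rough values).
countries = {
    "Iran":        {"publications": 40_000, "population_m": 77},
    "Switzerland": {"publications": 38_000, "population_m": 8},
}

def pubs_per_million(stats):
    """Publications per million inhabitants."""
    return stats["publications"] / stats["population_m"]

# Despite near-identical totals, Switzerland's per-capita output is
# roughly nine times Iran's.
for name, stats in countries.items():
    print(f"{name}: {pubs_per_million(stats):.0f} publications per million people")
```

The same normalization logic (output divided by population, or citations divided by paper count) underlies the per-paper citation comparison of Turkey, Malaysia and South Africa above.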
3. Muslim-majority countries make minimal investments in research and development: The world average for investment in research and development is roughly 1.8% of GDP. Advanced developed countries invest 2-3% of their GDP, whereas the average for the OIC countries is only 0.5% - less than a third of the world average! One could perhaps understand why poverty-stricken Muslim countries such as Pakistan do not have the funds to invest in research, because their more immediate concern is to provide basic necessities to the population. However, one of the most dismaying findings of the report is the dismally low rate of research investment made by the members of the Gulf Cooperation Council (GCC, the economic union of the six oil-rich Gulf countries Saudi Arabia, Kuwait, Bahrain, Oman, the United Arab Emirates and Qatar, with a mean per capita GDP of over $30,000, comparable to that of the European Union). Saudi Arabia and Kuwait, for example, invest less than 0.1% of their GDP in research and development, far below the OIC average of 0.5%.
So how does one go about fixing this dire state of science in the Muslim world? Some fixes are rather obvious, such as increasing the investment in scientific research and education, especially in those OIC countries which have the financial means but currently lag far behind in the funds made available to improve their scientific infrastructures. Guessoum and Osama also highlight the importance of introducing key metrics to assess scientific productivity and the quality of science education. It is not easy to objectively measure scientific and educational impact, and one can argue about the significance or reliability of any given metric. But without any metrics, it will be very difficult for OIC universities to identify problems and weaknesses, build new research and educational programs and reward excellence in research and teaching. There is also a need to reform the curriculum so that it shifts its focus from the lecture-based teaching that is so prevalent at OIC universities to inquiry-based teaching, in which students learn science hands-on by experimentally testing hypotheses and are encouraged to ask questions.
In addition to these commonsense suggestions, the task force also put forward a rather intriguing proposition to strengthen scientific research and education: place a stronger emphasis on basic liberal arts in science education. I could not agree more because I strongly believe that exposing science students to the arts and humanities plays a key role in fostering the creativity and curiosity required for scientific excellence. Science is a multi-disciplinary enterprise, and scientists can benefit greatly from studying philosophy, history or literature. A course in philosophy, for example, can teach science students to question their basic assumptions about reality and objectivity, encourage them to examine their own biases, challenge authority and understand the importance of doubt and uncertainty, all of which will likely help them become critical thinkers and better scientists.
However, the specific examples provided by Guessoum and Osama do not necessarily indicate support for this kind of broad liberal arts education. They mention the example of the newly founded private Habib University in Karachi, which mandates that all science and engineering students also take classes in the humanities, including a two-semester course in "hikma" or "traditional wisdom". Upon reviewing the details of this philosophy course on the university's website, it seems that the course is a history of Islamic philosophy focused on antiquity and pre-modern texts which date back to the "Golden Age" of Islam. The task force also specifically applauds an online course developed by Ahmed Djebbar, an emeritus science historian at the University of Lille in France, which attempts to stimulate scientific curiosity in young pre-university students by relating scientific concepts to great discoveries from the Islamic "Golden Age". My concern is that this is a rather Islamocentric form of liberal arts education. Do students who have spent all their lives growing up in a Muslim society really need to revel in the glories of a bygone era in order to get excited about science? Does the Habib University philosophy course focus on Islamic philosophy because the university feels that students should be more aware of their cultural heritage, or are there concerns that exposing students to non-Islamic ideas could cause problems with students, parents, university administrators or other members of society who might perceive this as an attack on Islamic values? If the true purpose of a liberal arts education is to expand the minds of students by exposing them to new ideas, wouldn't it make more sense to focus on non-Islamic philosophy? It is definitely not a good idea to coddle Muslim students by adulating the "Golden Age" of Islam or to use kid gloves when discussing philosophy in order to avoid offending them.
This leads us to a question that is not directly addressed by Guessoum and Osama: How "liberal" is a liberal arts education in countries with governments and societies that curtail the free expression of ideas? The Saudi blogger Raif Badawi was sentenced to 1,000 lashes and 10 years in prison because of his liberal views that were perceived as an attack on religion. Faculty members at universities in Saudi Arabia who teach liberal arts courses are probably very aware of these occupational hazards. At first glance, professors who teach in the sciences may not seem to be as susceptible to the wrath of religious zealots and authoritarian governments. However, the above-mentioned interdisciplinary nature of science could easily spell trouble for free-thinking professors or students. Comments about evolutionary biology, the ethics of genome editing or discussing research on sexuality could all be construed as a violation of societal and religious norms.
The 2010 study Faculty perceptions of academic freedom at a GCC university surveyed professors at an anonymous GCC university (most likely Qatar University, since roughly 25% of the faculty members were Qatari nationals and the authors of the study were based in Qatar) regarding their views of academic freedom. The vast majority of faculty members (Arab and non-Arab) felt that academic freedom was important to them and that their university upheld academic freedom. However, in interviews with individual faculty members, the researchers found that the professors were engaging in self-censorship in order to avoid untoward repercussions. Here are some examples of the comments from the faculty at this GCC university:
"I am fully aware of our culture. So, when I suggest any topic in class, I don't need external censorship except mine."
"Yes. I avoid subjects that are culturally inappropriate."
"Yes, all the time. I avoid all references to Israel or the Jewish people despite their contributions to world culture. I also avoid any kind of questioning of their religious tradition. I do this out of respect."
This latter comment is especially painful for me because one of my heroes who inspired me to become a cell biologist was the Italian Jewish scientist Rita Levi-Montalcini. She revolutionized our understanding of how cells communicate with each other using growth factors. She was also forced to secretly conduct her experiments in her bedroom because the Fascists banned all "non-Aryans" from going to the university laboratory. Would faculty members who teach the discovery of growth factors at this GCC University downplay the role of the Nobel laureate Levi-Montalcini because she was Jewish? We do not know how prevalent this form of self-censorship is in other OIC countries because the research on academic freedom in Muslim-majority countries is understandably scant. Few faculty members would be willing to voice their concerns about government or university censorship and admitting to self-censorship is also not easy.
The task force report on science in the universities of Muslim-majority countries is an important first step towards reforming scientific research and education in the Muslim world. Increasing investments in research and development, using and appropriately acting on carefully selected metrics as well as introducing a core liberal arts curriculum for science students will probably all significantly improve the dire state of science in the Muslim world. However, the reform of the research and education programs needs to also include discussions about the importance of academic freedom. If Muslim societies are serious about nurturing scientific innovation, then they will need to also ensure that scientists, educators and students will be provided with the intellectual freedom that is the cornerstone of scientific creativity.
Guessoum, N., & Osama, A. (2015). Institutions: Revive universities of the Muslim world. Nature, 526(7575), 634-6.
Romanowski, M. H., & Nasser, R. (2010). Faculty perceptions of academic freedom at a GCC university. Prospects, 40(4), 481-497.
Monday, November 09, 2015
Blissful Ignorance: How Environmental Activists Shut Down Molecular Biology Labs in High Schools
by Jalees Rehman
Hearing about the HannoverGEN project made me feel envious and excited. Envious, because I wish my high school had offered the kind of hands-on molecular biology training provided to high school students in Hannover, the capital of the German state of Niedersachsen. Excited, because it reminded me of the joy I felt when I first isolated DNA and ran gels after restriction enzyme digests during my first year of university in Munich. I knew that many of the students at the HannoverGEN high schools would be thrilled by their laboratory experience and pursue careers as biologists or biochemists.
What did HannoverGEN entail? It was an optional pilot program initiated and funded by the state government of Niedersachsen at four high schools. Students enrolled in the HannoverGEN classes would learn to use molecular biology tools that are typically reserved for college-level or graduate school courses to study plant genetics. Some of the basic experiments involved isolating DNA from cabbage or demonstrating how bacteria transfer genes to plants; more advanced experiments enabled the students to analyze whether or not the genome of a provided maize sample had been genetically modified. Each experimental unit was accompanied by relevant theoretical instruction on the molecular mechanisms of gene expression and biotechnology, as well as by ethical discussions of the benefits and risks of generating genetically modified organisms ("GMOs"). You can now only check out the details of the HannoverGEN program in the Wayback Machine Internet archive, because the award-winning educational program and the associated website were shut down in 2013 at the behest of German anti-GMO activist groups, environmental activists, Greenpeace, the Niedersachsen Green Party and the German organic food industry.
Why did these activists and organic food industry lobbyists oppose a government-funded educational program which improved the molecular biology knowledge and expertise of high school students? A 2012 press release entitled "Keine Akzeptanzbeschaffung für Agro-Gentechnik an Schulen!" ("No Acceptance-Building for Agricultural Gene Technology at Schools!"), issued by an alliance representing farmers growing natural or organic crops and accompanied by a study of the same title (PDF) funded by this group and its anti-GMO partners, gives us some clues. They feared that the high school students might become too accepting of biotechnology in agriculture and that the curriculum did not sufficiently highlight all the potential dangers of GMOs. Because the ethical discussions in the HannoverGEN curriculum not only addressed the risks but also mentioned the benefits of genetically modified crops, students might walk away with the idea that GMOs could be a good thing. Taxpayer money, according to this group, should not be used to foster special interests such as those of the agricultural industry that may want to use GMOs.
A response by the University of Hannover (PDF), which had helped develop the curriculum and coordinated the classes for the high school students, carefully dissected the complaints of the anti-GMO activists. The author of the polemically titled "study" that criticized HannoverGEN for being too biased had neither visited the HannoverGEN laboratories nor interviewed the biology teachers or students enrolled in the classes. In fact, his critique was based on weblinks that were not even used by the HannoverGEN teachers or students, and his study ignored the fact that discussing the potential risks of genetic modification was a core curriculum topic in all the classes.
Unfortunately, this shoddily prepared "study" had a significant impact, in part because it was widely promoted by partner organizations. Its release in the autumn of 2012 came at an opportune time because Niedersachsen was about to hold an election, and campaigning against GMOs – which apparently included an educational program that would equip students to form a balanced view of GMO technology – seemed like a perfect cause for the Green Party. When the Social Democrats and the Green Party formed a coalition after winning the election in early 2013, nixing the HannoverGEN high school program was formally included in the so-called coalition contract, a document in which coalition partners outline their key goals for the upcoming four-year period. When one considers how many major issues and problems the government of a large German state has to face – healthcare, education, unemployment, etc. – it is mind-boggling that defunding a program involving only four high schools received so much attention that it needed to be anchored in the coalition contract. In fact, it is a testimony to the influence and zeal of the anti-GMO lobby.
Once the cancellation of HannoverGEN was announced, the Hannover branch of Greenpeace also took credit for campaigning against this high school program and celebrated its victory. A Greenpeace anti-GMO activist also highlighted that he felt the program was too cost intensive because equipping high school laboratories with state-of-the-art molecular biology equipment had already cost more than 1 million Euros and that the previous center-right government which had initiated the HannoverGEN project was planning on expanding the program to even more high schools, thus wasting more taxpayer money.
The scientific community was shaken up by the decision of the new Social Democrat-Green government in Niedersachsen. This was an attack on the academic freedom of schools under the guise of accusing them of promoting special interests, while ignoring that the anti-GMO activists were themselves representing special interests, including the lucrative organic food industry. Scientists and science writers such as Martin Ballaschk and Lars Fischer wrote excellent critical articles in which they asked how squashing high-quality, hands-on science programs could ever lead to better decision-making. How could ignorant students have a better grasp of GMO risks and benefits than those who receive formal education and could make truly informed decisions? Sadly, this outcry did not make much of a difference, and the media did not seem to feel this was much of a cause to fight for. I wonder if the media response would have been just as lackluster if the government had defunded a hands-on science lab studying the effects of climate change.
In 2014, the government of Niedersachsen then announced that they would resurrect an advanced biology laboratory program for high schools with the generic and vague title "Life Science Lab". By removing the word "Gen" from its title and also removing any discussion of GMOs in the curriculum, this new program would leave students in the dark about GMOs. One could thus avoid a scenario in which high school students might learn about benefits of GMOs. Ignorance is bliss from an anti-GMO activist perspective because the void of ignorance can be filled with fear.
From the very first day that I could vote in Germany, during the federal election of 1990, I have always viewed the Green Party as a party that represented my generation: a party of progressive ideas, concerned about our environment and social causes. However, the HannoverGEN incident is just one example of how the Green Party is caving in to ideology and thus losing its open-mindedness and progressive nature. In the United States, the anti-science movement, which attacks the teaching of climate change science or evolutionary biology at schools, tends to be rooted in the right wing of the political spectrum. Right-wingers or libertarians are the ones who typically complain about taxpayer dollars being wasted and used to promote agendas in schools and universities. But we should not forget that there is also a different anti-science movement, rooted in the leftist and pro-environmental political spectrum – and not just in Germany.
I worry about all anti-science movements, especially those which attack science education. There is nothing wrong with questioning special interests and ensuring that school and university science curricula are truly balanced. But they need to be balanced and founded on scientific principles, not on political ideologies. Science education has a natural bias – it is biased towards knowledge that is backed up by scientific evidence. We can hypothetically discuss the dangers of GMOs, but the science behind the purported dangers of GMO crops is very questionable. Just as environmental activists and leftists agree with scientists that we do not need to give climate change deniers and creationists "balanced" treatment in our science curricula, they should also accept that much of the current "anti-GMO science" is based more on ideology than on actual scientific data. Our job is to provide excellent science education so that our students can critically analyze and understand scientific research, independent of whether or not it supports our personal ideologies.
Monday, October 12, 2015
Feel Our Pain: Empathy and Moral Behavior
by Jalees Rehman
"It's empathy that makes us help other people. It's empathy that makes us moral." The economist Paul Zak casually makes this comment in his widely watched TED talk about the hormone oxytocin, which he dubs the "moral molecule". Zak quotes a number of behavioral studies to support his claim that oxytocin increases empathy and trust, which in turn increases moral behavior. If all humans regularly inhaled a few puffs of oxytocin through a nasal spray, we could become more compassionate and caring. It sounds too good to be true. And recent research now suggests that this overly simplistic view of oxytocin, empathy and morality is indeed too good to be true.
Many scientific studies support the idea that oxytocin is a major biological mechanism underlying the emotion of empathy and the formation of bonds between humans. However, inferring that these oxytocin effects in turn make us more moral is a much more controversial claim. In 2011, the researcher Carsten De Dreu and his colleagues at the University of Amsterdam in the Netherlands published the study Oxytocin promotes human ethnocentrism, in which indigenous Dutch male subjects, in a blinded fashion, self-administered either a nasal oxytocin spray or a placebo spray. The subjects then answered questions and performed word association tasks after seeing photographic images of Dutch males (the "in-group") or images of Arabs and Germans (the "out-group"), because prior surveys had shown that the Dutch public holds negative views of both Arabs/Muslims and Germans. To ensure that the subjects understood the distinct ethnic backgrounds of the people shown in the images, the depicted individuals were given typical Dutch male names, German names (such as Markus and Helmut) or Arab names (such as Ahmed and Youssef).
Oxytocin increased favorable views and word associations, but only towards the in-group images of fellow Dutch males. The oxytocin treatment even had the unexpected effect of worsening views of Arabs and Germans, although this latter effect was not quite statistically significant. Far from being a "moral molecule", oxytocin may actually increase ethnic bias in society because it selectively enhances certain emotional bonds. In a subsequent study, De Dreu then addressed another aspect of the purported link between oxytocin and morality by testing the honesty of subjects. The study Oxytocin promotes group-serving dishonesty showed that oxytocin increased cheating in study subjects if they were under the impression that their dishonesty would benefit their group. De Dreu concluded that oxytocin does make us less selfish and more concerned about the interests of the group we belong to – even when serving those interests requires dishonesty.
These recent oxytocin studies not only question the "moral molecule" status of oxytocin but raise the even broader question of whether more empathy necessarily leads to more moral behavior, independent of whether or not it is related to oxytocin. The researchers Jean Decety and Jason Cowell at the University of Chicago recently analyzed the scientific literature on the link between empathy and morality in their commentary Friends or Foes: Is Empathy Necessary for Moral Behavior? and found that the relationship is far more complicated than one would surmise. Judges, police officers and doctors who exhibit great empathy by sharing in the emotional upheaval experienced by the oppressed, the persecuted and the severely ill always end up making the right moral choices – in Hollywood movies. But empathy in the real world is a multi-faceted phenomenon, and we use the term loosely, as Decety and Cowell point out, without clarifying which aspect of empathy we are referring to.
Decety and Cowell distinguish at least three distinct aspects of empathy:
1. Emotional sharing, which refers to how one's emotions respond to the emotions of those around us. Empathy enables us to "feel" the pain of others and this phenomenon of emotional sharing is also commonly observed in non-human animals such as birds or mice.
2. Empathic concern, which describes how we care for the welfare of others. Whereas emotional sharing refers to how we experience the emotions of others, empathic concern motivates us to take actions that will improve their welfare. As with emotional sharing, empathic concern is not only present in humans but also conserved among many non-human species and likely constitutes a major evolutionary advantage.
3. Perspective taking, which - according to Decety and Cowell - is the ability to put oneself into the mind of another and thus imagine what they might be thinking or feeling. This is a more cognitive dimension of empathy and essential for our ability to interact with fellow human beings. Even if we cannot experience the pain of others, we may still be able to understand or envision how they might be feeling. One of the key features of psychopaths is their inability to experience the emotions of others. However, this does not necessarily mean that psychopaths are unable to cognitively imagine what others are thinking. Instead of labeling psychopaths as having no empathy, it is probably more appropriate to characterize them as having a reduced capacity to share in the emotions of others while maintaining an intact capacity for perspective taking.
In addition to the complexity of what we call "empathy", we also need to understand that empathy is usually directed towards specific individuals and groups. De Dreu's studies demonstrated that oxytocin can make us more pro-social as long as it benefits those who we feel belong to our group, but not necessarily those outside of our group. The study Do you feel my pain? Racial group membership modulates empathic neural responses by Xu and colleagues at Peking University used fMRI brain imaging in Chinese and Caucasian study subjects and measured their neural responses to watching painful images. The study subjects were shown images of either a Chinese or a Caucasian face. In the control condition, the depicted face was being touched with a cotton swab; in the pain condition, the face was being poked with a needle attached to a syringe. When the researchers measured the neural responses with fMRI, they found significant activation in the anterior cingulate cortex (ACC), which is part of the neural pain circuit, activated both by pain we experience ourselves and by the empathic pain we experience when we see others in pain. The key finding in Xu's study was that ACC activation in response to seeing the painful image was much more profound when the study subject and the person shown in the image belonged to the same race.
Once we realize that the neural circuits and hormones which form the biological basis of our empathy responses are so easily swayed by group membership, it becomes apparent why increased empathy does not necessarily result in behavior consistent with moral principles. In his essay "Against Empathy", the psychologist Paul Bloom likewise opposes the view that empathy should form the basis of morality and that we should unquestioningly elevate empathy to a universal virtue:
"But we know that a high level of empathy does not make one a good person and that a low level does not make one a bad person. Being a good person likely is more related to distanced feelings of compassion and kindness, along with intelligence, self-control, and a sense of justice. Being a bad person has more to do with a lack of regard for others and an inability to control one's appetites."
I do not think that we can dismiss empathy as a factor in our moral decision-making. Bloom makes a good case for distanced compassion and kindness that do not arise from the more visceral emotion of empathy. But when we see fellow humans and animals in pain, our initial biological responses are guided by empathy and anger, not by the more abstract concept of distanced compassion. What we need is a better scientific and philosophical understanding of what empathy is. Empathic perspective-taking may be a far more robust and reliable guide for moral decision-making than empathic emotions. Current scientific studies often measure empathy as an aggregate, without teasing out its various components. They also tend to overlook the fact that the relative contributions of the empathy components (emotion, concern, perspective-taking) can vary widely among cultures and age groups. We need to replace overly simplistic notions such as oxytocin = moral molecule or empathy = good with a more refined view of the complex morality-empathy relationship, guided by rigorous science and philosophy.
De Dreu, C. K., Greer, L. L., Van Kleef, G. A., Shalvi, S., & Handgraaf, M. J. (2011). Oxytocin promotes human ethnocentrism. Proceedings of the National Academy of Sciences, 108(4), 1262-1266.
Decety, J., & Cowell, J. M. (2014). Friends or Foes: Is Empathy Necessary for Moral Behavior?. Perspectives on Psychological Science, 9(5), 525-537.
Shalvi, S., & De Dreu, C. K. (2014). Oxytocin promotes group-serving dishonesty. Proceedings of the National Academy of Sciences, 111(15), 5503-5507.
Xu, X., Zuo, X., Wang, X., & Han, S. (2009). Do you feel my pain? Racial group membership modulates empathic neural responses. The Journal of Neuroscience, 29(26), 8525-8529.
Monday, July 20, 2015
"We are at home with situations of legal ambiguity.
And we create flexibility, in situations where it is required."
Consider a few hastily conceived scenarios from the near future. An android charged with performing elder care must deal with an uncooperative patient. A driverless car carrying passengers must decide between stopping suddenly and causing a pile-up behind it. A robot responding to a collapsed building must choose between two people to save. The question that unifies these scenarios is not just how to make the correct decision but, more fundamentally, how to treat the entities involved. Is it possible for a machine to be treated as an ethical subject – and, by extension, for an artificial entity to possess "robot rights"?
Of course, "robot rights" is a crude phrase that shoots us straight into a brambly thicket of anthropomorphisms; let's not quite go there yet. Perhaps it's more accurate to ask if a machine – something that people have designed, manufactured and deployed into the world – can have some sort of moral or ethical standing, whether as an agent or as a recipient of some action. What's really at stake here is the contention that a machine can act sufficiently independently in the world that it can be held responsible for its actions and, conversely, the question of whether a machine has any sort of standing such that, if it were harmed in any way, this standing would serve to protect its ongoing place and function in society.
You could, of course, dismiss all this as a bunch of nonsense: that machines are made by us exclusively for our use, and anything a robot or computer or AI does or does not do is the responsibility of its human owners. You don't sue the scalpel, rather you sue the surgeon. You don't take a database to court, but the corporation that built it – and in any case you are probably not concerned with the database itself, but with the consequence of how it was used, or maintained, or what have you. As far as the technology goes, if it's behaving badly you shut it off, wipe the drive, or throw it in the garbage, and that's the end of the story.
This is not an unreasonable point of departure, and it is rooted in what's known as the instrumentalist view of technology. For an instrumentalist, technology is only ever an extension of ourselves and does not possess any autonomy. But how do you control for the sort of complexity for which we are now designing our machines? Our instrumentalist proclivities whisper to us that there must be an elegant way of doing so. So let's begin with one famous attempt: Isaac Asimov's Three Laws of Robotics.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Some time later, Asimov added a fourth, which was intended to precede all the others, so it's really the ‘Zeroth' Law:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
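Purely as an illustration (my own sketch, not anything Asimov wrote), the deontological character of the Laws can be made concrete in a few lines of code: a strict priority ordering in which the First Law always outranks the Second, and the Second the Third. All of the names and boolean flags below are hypothetical simplifications.

```python
# Hypothetical sketch: Asimov's Laws as a lexicographic priority ordering.
# Lower-sorting tuples win, so Law 1 always dominates Law 2, which always
# dominates Law 3. Each attribute is an idealized boolean flag.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool = False    # Law 1a: action would injure a human
    prevents_harm: bool = False  # Law 1b: action stops harm from inaction
    obeys_order: bool = False    # Law 2: action follows a human command
    preserves_self: bool = True  # Law 3: action keeps the robot intact

def choose(actions):
    """Return the action a strictly law-abiding robot would take."""
    def rank(a):
        return (a.harms_human,         # never injure a human ...
                not a.prevents_harm,   # ... or allow harm through inaction
                not a.obeys_order,     # then: obey human orders
                not a.preserves_self)  # then: protect your own existence
    return min(actions, key=rank)

candidates = [
    Action("stand by", obeys_order=True),
    Action("push human out of danger", prevents_harm=True,
           preserves_self=False),
    Action("follow harmful order", harms_human=True, obeys_order=True),
]
print(choose(candidates).name)  # -> push human out of danger
```

Every critique that follows (imperfect knowledge, fuzzy definitions, self-modification) amounts to the observation that those neat boolean flags cannot actually be computed in the real world.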
The Laws, which made their first appearance in a 1942 story that is, fittingly enough, set in 2015, are what is known as a deontology: an ethical system expressed as a set of axioms. Basically, deontology provides the ethical ground for all further belief and action: the Ten Commandments are a classic example. But the difficulties with deontology become apparent when one examines the assumptions inherent in each axiom. For example, the First Commandment states, "Thou shalt have no other gods before me". Clearly, Yahweh is not saying that there are no other gods, but rather that any other gods must take a back seat to him, at least as far as the Israelites are concerned. The corollary is that non-Israelites can have whatever gods they like. Nevertheless, most adherents of Judeo-Christian theology would be loath to admit the possibility of polytheism. It takes a lot of effort to keep all those other gods at bay, especially if you're not an Israelite – it's much easier if there is only one. But you can't make that claim without fundamentally reinterpreting that crucial first axiom.
Asimov's axioms can be similarly poked and prodded. Most obviously, there is the presumption of perfect knowledge. How would a robot (or AI or whatever) know whether an action was harmful or not? A human might scheme to split a harmful plan into steps that are harmless by themselves, distribute those steps across several artificial entities, and then combine the results to produce harmful consequences. Sometimes knowledge is impossible for both humans and robots: in the case of a stock-trading AI, there is uncertainty about whether any given trade harms another human being. If the AI makes a profitable trade, does the other side lose money, and if so, does this constitute harm? How can the machine know if the entity on the other side is in fact losing money? Would it matter if that other entity were another machine and not a human? But don't machines ultimately represent humans in any case?
Better yet, consider a real life example:
A commercial toy robot called Nao was programmed to remind people to take medicine.
"On the face of it, this sounds simple," says Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford who did the work with her husband, computer scientist Michael Anderson of the University of Hartford in Connecticut. "But even in this kind of limited task, there are nontrivial ethics questions involved." For example, how should Nao proceed if a patient refuses her medication? Allowing her to skip a dose could cause harm. But insisting that she take it would impinge on her autonomy.
In this case, the Hippocratic ‘do no harm' has to be balanced against a more utilitarian ‘do some good'. Assuming it could, does the robot force the patient to take the medicine? Wouldn't that constitute potential harm (i.e., the possibility that the robot hurts the patient in the act)? Would that harm be greater than not taking the medicine, just this once? What about tomorrow? If we are designing machines to interact with us in such profound and nuanced ways, those machines are already ethical subjects. Our recognition of them as such is already playing catch-up with the facts on the ground.
As implied by the stock-trading example, another deontological shortcoming lies in the definitions themselves: what's a robot, and what's a human? As robots become more human-like, and humans become more engineered, the line will blur. And in many cases, a robot will have to make a snap judgment. What's binary for "quo vadis", and what do you do with a lying human? Because humans lie for the strangest reasons.
Finally, the kind of world that Asimov's Laws presuppose is one where robots run around among humans. It's a very specific sort of embodiment. In fact, it is a sort of Slavery 2.0, where robots clearly function for the benefit and in the service of humanity. The Laws are meant to facilitate a very material cohabitation, whereas the kind of broadly distributed, virtually placeless machine intelligence that we are currently developing by leveraging the Internet is much more slippery, and resembles the AI of Spike Jonze's ‘Her'. How do you tell things apart in such a dematerialized world?
The final nail in Asimov's deontological coffin is the assumption of ‘hard-wiring'. That is, Asimov claims that the Laws would be a non-negotiable part of the basic architecture of all robots. But it is wiser to prepare for the exact opposite: the idea that any machine of sufficient intelligence will be able to reprogram itself. The reasons why are pretty irrelevant – it doesn't have to be some variant of SkyNet suddenly deciding to destroy humanity. It may just sit there and not do anything. It may disappear, as the AIs did in ‘Her'. Or, as in William Gibson's Neuromancer, it may just want to become more of itself, and decide what to do with that later on. Gibson never really tells us why the two AIs – that function as the true protagonists of the novel – even wanted to do what they did.
This last thought indicates a fundamental marker in the machine ethics debate. A real difference is emerging here, and that is the notion of inscrutability. In order for the stance of instrumentality to hold up, you need a fairly straight line of causality. I saw this guy on the beach, I pulled the trigger, and now the guy is dead. It may be perplexing, I may not be sure why I pulled the trigger at that moment, but the chain of events is clear, and there is a system in place to handle it, however problematic. On the other hand, how or why a machine comes to a conclusion or engages in a course of action may be beyond our capacity to determine. I know this sounds a bit odd, since after all we built the things. But a record of a machine's internal decision-making would have to be a deliberate part of its architecture, and this is expensive and perhaps not commensurate with the agenda of its designers: for example, Diebold made both ATMs and voting machines. Only the former provided receipts, making it, in theory, fairly easy to steal an election.
If Congress is willing to condone digitally supervised elections without paper trails, imagine how far away we are from the possibility of regulating the Wild West of machine intelligence. And in fact AIs are being designed to produce results without any regard for how they get to a particular conclusion. One such deliberately opaque AI is Rita, mentioned in a previous essay. Rita's remit is to deliver state-of-the-art video compression technology, but how it arrives at its conclusions is immaterial to the fact that it manages to get there. In the comments to that piece, a friend added that "it is a regular occurrence here at Google where we try to figure out what our machine learning systems are doing and why. We provide them input and study the outputs, but the internals are now an inscrutable black box. Hard to tell if that's a sign of the future or an intermediate point along the way."
Nevertheless, we can try to hold on to the instrumentalist posture and maintain that a machine's black box nature still does not merit the treatment accorded to an ethical subject; that it is still the results or consequences that count, and that the owners of the machine retain ultimate responsibility for it, whether or not they understand it. Well, who are the owners, then?
Of course, ethics truly manifests itself in society via the law. And the law is a generally reactive entity. In the Anglo-American case law tradition, laws, codes and statutes are passed or modified (and less often, repealed) only after bad things happen, and usually only in response to those specific bad things. More importantly for the present discussion, recent history shows that the law (or, to be more precise, the people who draft, pass and enforce it) has not been nearly as eager to punish the actions of collectives and institutions as it has been to pursue individuals. Exhibit A in this regard is the number of banks found guilty of vast criminality following the 2008 financial crisis and, by corollary, the number of bankers thrown in jail for the same. Part of the reason for this is the way that the law already treats non-human entities. I am reminded of Mitt Romney on the Presidential campaign trail a few years ago, benignly musing that "corporations are people, my friend".
Corporate personhood is a complex topic but at its most essential it is a great way to offload risk. Sometimes this makes sense – entrepreneurs can try new ideas and go bankrupt but not lose their homes and possessions. Other times, as with the Citizens United decision, the results can be grotesque and impactful in equal measure. But we ought to look to the legal history of corporate personhood as a possible test case for how machines may become ethical subjects in the eyes of the law. Not only that, but corporations will likely be the owners of these ethical subjects – from a legal point of view, they will look to craft the legal representation of machines as much to their advantage as possible. To not be too cynical about it, I would imagine this would involve minimal liability and maximum profit. This is something I have not yet seen discussed in machine ethics circles, where the concern seems to be more about the instantiation of ethics within the machines themselves, or in highly localized human-machine interactions. Nevertheless, the transformation of the ethical machine-subject into the legislated machine-subject – put differently, the machines as subjects of a legislative gaze – will be of incredibly far-reaching consequence. It will all be in the fine print, and I daresay deliberately difficult to parse. When that day comes, I will be sure to hire an AI to help me make sense of it all.
How Viruses Feign Death to Survive and Thrive
by Jalees Rehman
Billions of cells die each day in the human body in a process called "apoptosis" or "programmed cell death". When cells encounter stress such as inflammation, toxins or pollutants, they initiate an internal repair program which gets rid of the damaged proteins and DNA molecules. But if the damage exceeds their capacity for repair, then cells are forced to activate the apoptosis program. Apoptotic cells do not suddenly die and vanish; instead, they execute a well-coordinated series of molecular and cellular signals which results in a gradual disintegration of the cell over a period of several hours.
What happens to the cellular debris that is generated when a cell dies via apoptosis? It consists of fragmented cellular compartments, proteins, and fat molecules released from the cellular corpse. This "trash" could cause even more damage to neighboring cells because it exposes them to molecules that normally reside inside a cell and could trigger harmful reactions on the outside. Other cells therefore have to clean up the mess as soon as possible. Macrophages are cells which act as professional garbage collectors, patrolling our tissues on the look-out for dead cells and cellular debris. The remains of the apoptotic cell act as an "Eat me!" signal to which macrophages respond by engulfing and gobbling up the debris ("phagocytosis") before it can cause any further harm. Macrophages aren't always around to clean up the debris, which is why other cells such as fibroblasts or epithelial cells can act as non-professional phagocytes and also ingest the dead cell's remains. Nobody likes to be surrounded by trash.
Clearance of apoptotic cells and their remains is thus crucial to maintain the health and function of a tissue. Conversely, if phagocytosis is inhibited or prevented, then the lingering debris can activate inflammatory signals and cause disease. Multiple autoimmune diseases, lung diseases and even neurologic diseases such as Alzheimer's disease are associated with reduced clearance. The cause and effect relationship is not always clear because these diseases can promote cell death. Are the diseases just killing so many cells that the phagocytosis capacity is overwhelmed, does the debris actually promote the diseased state, or is it a bit of both, resulting in a vicious cycle of apoptotic debris resulting in more cell death and more trash buildup? Researchers are currently investigating whether specifically tweaking phagocytosis could be used as a novel way to treat diseases with impaired clearance of debris.
During the past decade, multiple groups of researchers have come across a fascinating phenomenon by which viruses hijack the phagocytosis process in order to thrive. One of the "Eat me!" signals for phagocytes is that debris derived from an apoptotic cell is coated by a membrane enriched with phosphatidylserines, which are negatively charged molecules. Phosphatidylserines are present in all cells, but they are usually tucked away on the inside of cells and are not seen by other cells. When a cell undergoes apoptosis, phosphatidylserines are flipped to the outer surface of the membrane. When particles or cell fragments present high levels of phosphatidylserines on their outer membranes, a phagocyte knows that it is encountering the remains of a formerly functioning cell that needs to be cleared by phagocytosis.
However, it turns out that not all membranes rich in phosphatidylserines are remains of apoptotic cells. Recent research studies suggest that certain viruses invade cells, replicate within the cell and when they exit their diseased host cell, they cloak themselves in membranes rich in phosphatidylserines. How the viruses precisely appropriate the phosphatidylserines of a cell that is not yet apoptotic and then adorn their viral membranes with the cell's "Eat Me!" signal is not yet fully understood and a very exciting area of research at the interface of virology, immunology and the biology of cell death.
What happens when the newly synthesized viral particles leave the infected cell? Because these viral particles are coated in phosphatidylserine, professional phagocytes such as macrophages or non-professional phagocytes such as fibroblasts or epithelial cells will assume they are encountering phosphatidylserine-rich dead-cell debris and ingest it in their role as diligent garbage collectors. This ingestion of the viral particles has at least two great benefits for the virus. First and foremost, it allows the virus entry into a new host cell, which it can then convert into another virus-producing factory. Entering cells usually requires specific receptors by which viruses gain access to selected cell types; many viruses can only infect certain cell types because not all cells carry the receptors that allow for viral entry. However, when viruses hijack the apoptotic-debris phagocytosis mechanism, the phagocytic cell "invites" the viral particle inside, assuming that it is just dead debris. But there is perhaps an even more insidious advantage for the virus. During clearance of apoptotic cells, certain immune pathways are suppressed by the phagocytes in order to pre-emptively dampen excessive inflammation that might be caused by the debris. It is therefore possible that by pretending to be fragments of dead cells, viruses coated with phosphatidylserines may also suppress the immune response of the infected host, thus evading detection and destruction by the immune system.
Viruses for which this process of apoptotic mimicry has been described include the deadly Ebola virus and the Dengue virus, each using its own mechanism to create its fake mask of death. The Ebola virus buds directly from the fat-rich outer membrane of the infected host cell in the form of elongated, thread-like particles coated with the cell's phosphatidylserines. The Dengue virus, on the other hand, is synthesized and packaged inside the cell and appears to purloin the cell's phosphatidylserines during its synthesis, long before it even reaches the cell's outer membrane. As of now, it appears that viruses from at least nine distinct virus families use the apoptotic mimicry strategy, but the research on apoptotic mimicry is still fairly new, and it is likely that scientists will discover many more viruses which rely on this and similar evolutionary strategies to evade the infected host's immune response and spread throughout the body.
Uncovering the phenomenon of apoptotic mimicry gives new hope in the battle against viruses for which we have few targeted treatments. In order to develop feasible therapies, it is important to precisely understand the molecular mechanisms by which the hijacking occurs. One cannot block all apoptotic clearance in the body because that would have disastrous consequences due to the buildup of legitimate apoptotic debris that needs to be cleared. However, once scientists understand how viruses concentrate phosphatidylserines or other "Eat Me!" signals in their membranes, it may be possible to specifically uncloak these renegade viruses without compromising the much needed clearance of conventional cell debris.
Elliott, M. R., & Ravichandran, K. S. (2010). Clearance of apoptotic cells: implications in health and disease. The Journal of Cell Biology, 189(7), 1059-1070.
Amara, A., & Mercer, J. (2015). Viral apoptotic mimicry. Nature Reviews Microbiology.
Monday, June 22, 2015
The Long Shadow of Nazi Indoctrination: Persistence of Anti-Semitism in Germany
by Jalees Rehman
Anti-Semitism and the holocaust are among the central themes in the modern German secondary school curriculum. During history lessons in middle school, we learned about anti-Semitism and the persecution of Jews in Europe during the middle ages and early modernity. Our history curriculum in the ninth and tenth grades focused on the virulent growth of anti-Semitism in 20th century Europe, how Hitler and the Nazi party used anti-Semitism as a means to rally support and gain power, and how the Nazi apparatus implemented the systematic genocide of millions of Jews.
In grades 11 to 13, the educational focus shifts to a discussion of the broader moral and political context of anti-Semitism and Nazism. How could the Nazis enlist the active and passive help of millions of "upstanding" citizens to participate in this devastating genocide? Were all Germans who did not actively resist the Nazis morally culpable or at least morally responsible for the Nazi horrors? Did Germans born after the Second World War inherit some degree of moral responsibility for the crimes committed by the Nazis? How can German society ever redeem itself after being party to the atrocities of the Nazis? Anti-Semitism and Nazism were also important topics in our German literature and art classes because the Nazis persecuted and murdered German Jewish intellectuals and artists, and because the shame and guilt experienced by Germans after 1945 featured so prominently in German art and literature.
One purpose of extensively educating German schoolchildren about this dark and shameful period of German history is the hope that, if they are ever faced with the reemergence of prejudice directed against Jews or any other ethnic or religious group, they will have the courage to stand up for those who are being persecuted and make the right moral choices. As such, it is part of the broader Vergangenheitsbewältigung (wrestling with one's past) in post-war German society which takes place not only in schools but in various public venues. The good news, according to recent research published in the Proceedings of the National Academy of Sciences by Nico Voigtländer and Hans-Joachim Voth, is that Germans who attended school after the Second World War have shown a steady decline in anti-Semitism. The bad news: Vergangenheitsbewältigung is a bigger challenge for Germans who attended school under the Nazis because a significant proportion of them continue to exhibit high levels of anti-Semitic attitudes more than half a century after the defeat of Nazi Germany.
Voigtländer and Voth examined the results of the large General Social Survey for Germany (ALLBUS) in which several thousand Germans were asked about their values and beliefs. The survey took place in 1996 and 2006, and the researchers combined the results of both surveys for a total of 5,300 participants from 264 German towns and cities. The researchers were specifically interested in anti-Semitic attitudes and focused on three survey questions related to anti-Semitism. Survey participants were asked to respond on a scale of 1 to 7 and indicate whether they thought Jews had too much influence in the world, whether Jews were responsible for their own persecution, and whether Jews should have equal rights. The researchers categorized participants as "committed anti-Semites" if they revealed anti-Semitic attitudes in all three questions. The overall rate of committed anti-Semites was 4% in Germany, but there was significant variation depending on the geographical region and the age of the participants.
Among Germans born in the 1970s and 1980s, only 2%-3% were committed anti-Semites, whereas the rate was nearly double (6%) for Germans born in the 1920s. However, the researchers noted one exception: Germans born in the 1930s. Those citizens had the highest fraction of committed anti-Semites: 10%. The surveys were conducted in 1996 and 2006, when the participants born in the 1930s were 60-75 years old. In other words, one out of ten Germans of that generation did not think that Jews deserved equal rights!
The researchers attributed this to the fact that people born in the 1930s were exposed to the full force of systematic Nazi indoctrination with anti-Semitic views, which started as early as elementary school and also took place during extracurricular activities such as the Hitler Youth programs. The Nazis came to power in 1933 and immediately began implementing a full-scale propaganda program in all schools. A child born in 1932, for example, would have attended elementary school and middle school as well as Hitler Youth programs from age six onwards until the end of the war in 1945, becoming inculcated with anti-Semitic propaganda throughout.
The researchers also found that the large geographic variation in anti-Semitic prejudices today is in part due to the pre-Nazi history of anti-Semitism in any given town. The Nazis were neither the only nor the first openly anti-Semitic political movement in Germany. There were German political parties with primarily anti-Jewish agendas which ran for election in the late 19th and early 20th centuries. Voigtländer and Voth analyzed the votes that these anti-Semitic parties received more than a century ago, from 1890 to 1912. Towns and cities with the highest support for anti-Semitic parties in this pre-Nazi era are also the ones with the highest levels of anti-Semitic prejudice today. When children were exposed to anti-Semitic indoctrination in schools under the Nazis, the success of these hateful messages depended on how "fertile" the ground was. If the children were growing up in towns and cities where family members or public figures had supported anti-Jewish agendas during prior decades, then there was a much greater likelihood that the children would internalize the Nazi propaganda. The researchers cite the memoir of the former Hitler Youth member Alfons Heck:
"We who were born into Nazism never had a chance unless our parents were brave enough to resist the tide and transmit their opposition to their children. There were few of those."
- Alfons Heck in "The Burden of Hitler's Legacy"
The researchers then address the puzzlingly low levels of anti-Semitic prejudices among Germans born in the 1920s. If the researchers' theory were correct that anti-Semitic prejudices persist today because of Nazi school indoctrination, then why aren't Germans born in the 1920s more anti-Semitic? A child born in 1925 would have been exposed to Nazi propaganda throughout secondary school. Oddly enough, women born in the 1920s did show high levels of anti-Semitism when surveyed in 1996 and 2006, but men did not. Voigtländer and Voth solve this mystery by reviewing wartime fatality rates. The most zealous male Nazi supporters with strong anti-Semitic prejudices were more likely to volunteer for the Waffen-SS, the military wing of the Nazi party. Some SS divisions had an average age of 18, and these divisions had some of the highest fatality rates. This means that German men born in the 1920s weren't somehow immune to Nazi propaganda. Instead, many of them perished precisely because they bought into it, and this is why we now see lower-than-expected levels of anti-Semitism in Germans born during that decade.
A major limitation of this study is its correlational nature and the lack of data on individual exposure to Nazism. The researchers base their conclusions on birth years and on towns' historical votes for anti-Semitic parties, but they did not track how much individuals were exposed to anti-Semitic propaganda in their schools or families. Such a correlational study cannot establish a cause-effect relationship between propaganda and the persistence of prejudice today. One factor not considered by the researchers, for example, is that Germans born in the 1930s are also among those who grew up as children in post-war Germany, often under conditions of extreme poverty and even starvation.
Even without being able to establish a clear cause-effect relationship, the findings of the study raise important questions about the long-term effects of racial propaganda. It appears that a decade of indoctrination may give rise to a lifetime of hatred. Our world continues to be plagued by prejudice against fellow humans based on their race or ethnicity, religion, political views, gender or sexual orientation. Children today are not subject to the systematic indoctrination implemented by the Nazis but they are probably still exposed to more subtle forms of prejudice and we do not know much about its long-term effects. We need to recognize the important role of public education in shaping the moral character of individuals and ensure that our schools help our children become critical thinkers with intact moral reasoning, citizens who can resist indoctrination and prejudice.
Voigtländer N and Voth HJ. "Nazi indoctrination and anti-Semitic beliefs in Germany" Proceedings of the National Academy of Sciences (2015), doi: 10.1073/pnas.1414822112
Artificially Flavored Intelligence
"I see your infinite form in every direction,
with countless arms, stomachs, faces, and eyes."
~ Bhagavad-Gītā 11.16
About ten days ago, someone posted an image on Reddit, a sprawling site that is the Internet's version of a clown car that's just crashed into a junk shop. The image, appropriately uploaded to the 'Creepy' corner of the website, is kind of hard to describe, so, assuming that you are not at the moment on any strong psychotropic substances, or experiencing a flashback, please have a good, long look before reading on.
What the hell is that thing? Our sensemaking gear immediately kicks into overdrive. If Cthulhu had had a pet slug, this might be what it looked like. But as you look deeper into the picture, all sorts of other things begin to emerge. In the lower left-hand corner there are buildings and people, and people sitting on buildings which might themselves be on wheels. The bottom center of the picture seems to be occupied by some sort of lurid, lime-colored fish. In the upper right-hand corner, half-formed faces peer out of chalices. The background wallpaper evokes an unholy copulation of brain coral and astrakhan fur. And still there are more faces, or at least eyes. There are indeed more eyes than in an Alex Grey painting, and they hew to none of the neat symmetries that make for a safe world. In fact, the deeper you go into the picture, the less perspective seems to matter, as solid surfaces dissolve into further cascades of phantasmagoria. The same effect applies to the principal thing, which has an indeterminate number not just of eyes, ears and noses, but even of heads.
The title of the thread wasn't very helpful, either: "This image was generated by a computer on its own (from a friend working on AI)". For a few days, that was all anyone knew, but it was enough to incite another minor-scale freakout about the nature and impending arrival of Our Computer Overlords. Just as we are helpless not to over-interpret the initial picture, so we are all too willing to titillate ourselves with alarmist speculations concerning its provenance. This was presented as a glimpse into the psychedelic abyss of artificial intelligence; an unspeakable, inscrutable intellect had briefly shown us its cards, and it was disquieting, to put it mildly. Is that what AI thinks life looks like? Or, stated even more anxiously, is that what AI thinks life should look like?
Alas, our giddy Lovecraftian fantasies weren't allowed to run amok for more than a few days, since the boffins at Google tipped their hand with a blog post describing what was going on. The image, along with many others, was the result of a few engineers playing around with neural networks and seeing how far they could push them. In this case, a neural network is ‘trained' to recognize something by being fed thousands of instances of that thing. So if the engineers want to train a neural network to recognize images of dogs, they keep feeding it dog pictures until it acquires the ability to identify dogs in pictures it hasn't seen before. For the purposes of this essay, I'll just leave it at that, but here is a good explanation of how neural networks ‘learn'.
The networks in question were trained to recognize animals, people and architecture. But things got interesting when the Google engineers took a trained neural net and fed it a single input – over and over again. On each pass, the slightly modified image was re-submitted to the network. If it were possible to imagine the network having a conversation with itself, it might go something like this:
First pass: Ok, I'm pretty good at finding squirrels and dogs and fish. Does this picture have any of these things in it? Hmmm, no, although that little blob looks like it might be the eye of one of those animals. I'll make a note of that. Also that lighter bit looks like fur. Yeah. Fur.
Second pass: Hey, that blob definitely looks like an eye. I'll sharpen it up so that it's more eye-like, since that's obviously what it is. Also, that fur could look furrier.
Third pass: That eye looks like it might go with that other eye that's not that far off. That other dark bit in between might just be the nose that I'd need to make it a dog. Oh wow – it is a dog! Amazing.
The results are essentially thousands of such decisions made across dozens of layers of the network. Each layer of ‘neurons' hands over its interpretation to the next layer up the hierarchy, and a final decision of what to emphasize or de-emphasize is made by the last layer. The fact that half of a squirrel's face may be interpolated within the features of the dog's face is, in the end, irrelevant.
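For the technically curious, the feedback loop can be caricatured in a few lines of toy code. This is emphatically not Google's implementation – the ‘network' here is a single hard-wired feature detector (a dot product with a template), and every number is invented – but it captures the basic move: each pass nudges the image a little toward whatever the detector already thinks it sees.

```python
def response(image, template):
    """How strongly the toy 'eye detector' fires on this image."""
    return sum(p * t for p, t in zip(image, template))

def amplify(image, template, step=0.1, passes=50):
    """Repeatedly re-submit the image, slightly modified toward
    whatever the detector already sees in it."""
    for _ in range(passes):
        # the gradient of a dot product w.r.t. the image is the template
        # itself, so each pass pushes the image toward the template
        image = [p + step * t for p, t in zip(image, template)]
    return image

template = [0.0, 1.0, 1.0, 0.0]   # the detector's idea of an 'eye'
noisy = [0.2, 0.1, -0.1, 0.3]     # an image with only a faint hint of one
dreamed = amplify(noisy, template)

# the detector's response grows with every pass: the faint hint becomes vivid
assert response(dreamed, template) > response(noisy, template)
```

After fifty passes, the region where the template expects an ‘eye' has been sharpened into one, while the rest of the image is left alone – iteration and interpretation, with the weirdness arriving entirely tangentially.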
But I also feel very wary about having written this fantasy monologue, since framing the computational process as a narrative is something that makes sense to us, but in fact isn't necessarily true. By way of comparison, the philosopher Jacques Derrida was insanely careful about stating what he could claim in any given act of writing, and did so while he was writing. Much to the consternation of many of his readers, this act of deconstructing the text as he was writing it was nevertheless required for him to be accurate in making his claims. Similarly, while the anthropomorphic cheat is perhaps the most direct way of illustrating how AI ‘works', it is also very seductive and misleading. I offer up the above with the exhortation that there is no thinking going on. There is no goofy conversation. There is iteration, and interpretation, and ultimately but entirely tangentially, weirdness. The neural network doesn't think it's weird, however. The neural network doesn't think anything, at least not in the overly generous way in which we deploy that word.
So, echoing a deconstructionist approach, we would claim that the idea of ‘thinking' is really the problem. It is a sort of absent center, where we jam in all the unexamined assumptions that we need in order to keep the system intact. Once we really ask what we mean by ‘thinking', the whole idea of intelligence – whether our own human intelligence or another's – becomes strange and unwhole. So if we then try to avoid the word – and therefore the idea behind the word – ‘thinking' as ascribed to a computer program, then how ought we to think about this? Because – sorry – we really don't have a choice but to think about it.
I believe that there are more accurate metaphors to be had, ones that rely on narrower views of our subjectivity, not the AI's. For example, there is the children's game of telephone, where a phrase is whispered from one ear to the next. Given enough iterations, what emerges is a garbled, nonsensical mangling of the original, but one that is hopefully still entertaining. But if it amuses, this is precisely because it remains within the realm of language. The last person does not recite a random string of alphanumeric characters. Rather, our drive to recognize patterns, also known as apophenia, yields something that can still be spoken. It is just weird enough, which is a fine balance indeed.
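The telephone game itself can be caricatured in code. In this toy sketch – the tiny vocabulary, the garbling rule and the matching rule are all my own inventions, not a model of anything real – each whisper corrupts the message, but each listener snaps what they heard back to the nearest familiar phrase, so the output drifts yet never leaves the realm of language.

```python
import random

# a tiny 'vocabulary' of phrases a listener might plausibly hear
VOCAB = ["no way", "no brain", "rainbow", "window", "run away", "welcome"]

def garble(phrase, rng):
    """One whisper: corrupt a single random character."""
    chars = list(phrase)
    i = rng.randrange(len(chars))
    chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz ")
    return "".join(chars)

def snap_to_phrase(heard):
    """The listener's apophenia: pick the known phrase that best
    matches the garbled sounds, character by character."""
    def overlap(word):
        return sum(a == b for a, b in zip(heard, word))
    return max(VOCAB, key=overlap)

rng = random.Random(0)
message = "no way"
for _ in range(10):
    message = snap_to_phrase(garble(message, rng))

# whatever comes out, it is still language -- never alphanumeric static
assert message in VOCAB
```

The point of the sketch is the final assertion: because every listener forces the noise back into the vocabulary, the end product is garbled but speakable, which is exactly the "just weird enough" balance described above.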
What did you hear? To me, it sounds obvious that a female voice is repeating "no way" to oblivion. But other listeners have variously reported window, welcome, love me, run away, no brain, rainbow, raincoat, bueno, nombre, when oh when, mango, window pane, Broadway, Reno, melting, or Rogaine.
This illustrates the way that our expectations shape our perception…. We are expecting to hear words, and so our mind morphs the ambiguous input into something more recognisable. The power of expectation might also underlie those embarrassing situations where you mishear a mumbled comment, or even explain the spirit voices that sometimes leap out of the static on ghost hunting programmes.
Even more radical are Steve Reich's tape loop pieces, which explore the effects of a sound gradually going out of phase with itself. In fact, 2016 will be the 50th anniversary of "Come Out", one of the seminal explorations of this idea. While the initial phrase is easy to understand, as the gap in phase widens we struggle to maintain its legibility. Not long into the piece, the words are effectively erased, and we find ourselves swimming in waves of pure sound. Nevertheless, our mental apparatus still seeks to make some sort of sense of it all; it's just that the patterns don't persist long enough for a specific interpretation to take hold.
Of course, the list of contraptions meant to isolate and provoke our apophenic tendencies is substantial, and they are oftentimes touted as having therapeutic benefits. We slide into sensory deprivation tanks to gape at the universe within, and assemble mail-order DIY ‘brain machines' to ‘expand our brain's technical skills'. This is mostly bunk, but all are predicated on the idea that the brain will produce its own stimuli when external ones are absent, or when only a narrow band of stimulus is available. In the end, what we experience here is not so much an epiphany as an apophany.
In effect, what Google's engineers have fabricated is an apophenic doomsday machine. It does one thing – search for patterns in the ways it knows how – and it does that one thing very, very well. A neural network trained to identify animals will not suddenly begin to find architectural features in a given input image. It will, if given the picture of a building façade, find all sorts of animals that, in its judgment, already lurk there. The networks are even capable of teasing out the images with which they are familiar if given a completely random picture – the graphic equivalent of static. These are perhaps the most compelling images of all. It's the equivalent of putting a neural network in an isolation tank. But is it? The slide into anthropomorphism is so effortless.
And although the Google blog post isn't clear on this, I suspect that there is also no clear point at which the network is ‘finished'. An intrinsic part of thinking is knowing when to stop, whereas iteration needs some sort of condition wrapped around the loop, otherwise it will never end. You don't tell a computer to just keep adding numbers; you tell it to add only the first 100 numbers you give it. Otherwise the damned thing won't stop. The engineers ran the iterations up until a certain point, and it doesn't really matter whether that point was determined by a pre-existing test condition (e.g., ‘10,000 iterations') or a snap aesthetic judgment (e.g., ‘This is maximum weirdness!'). The fact is that human judgment is the wrapper around the process that creates these images. So if we consider that a fundamental feature of thinking is knowing when to stop doing so, then we find this trait lacking in this particular application of neural networks.
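The point about the conditional wrapper is easy to make concrete. Iteration has no native sense of ‘enough'; the stopping condition is something a human supplies from outside the loop:

```python
from itertools import count, islice

# count(1) is the computer told to "just keep adding numbers":
# 1, 2, 3, ... forever, with no notion of when to stop.
endless = count(1)

# islice is the wrapper -- the human judgment that says "only the first 100".
total = sum(islice(endless, 100))

assert total == 5050   # 1 + 2 + ... + 100
```

Remove the `islice` wrapper and the sum never terminates; the ‘knowing when to stop' lives entirely in that one human-chosen number.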
In addition to knowing when to stop, there is another critical aspect of thinking as we know it, and that is forgetting. In ‘Funes el memorioso', Jorge Luis Borges speculated on the crippling consequences of a memory so perfect that nothing was ever lost. Among other things, the protagonist Funes can only live a life immersed in an ocean of detail, "incapable of general, platonic ideas". In order to make patterns, we have to privilege one thing over another, and dismiss vast quantities of sensory information as irrelevant, if not outright distracting or even harmful.
Interestingly enough, this relates to a theory concerning the nature of the schizophrenic mind (in a further nod to the deconstructionist tendency, I concede that the term ‘schizophrenia' is not unproblematic, but allow me the assumption). The ‘hyperlearning hypothesis' claims that schizophrenic symptoms can arise from a surfeit of dopamine in the brain. As a key neurotransmitter, dopamine plays a crucial role in memory formation:
When the brain is rewarded unexpectedly, dopamine surges, prompting the limbic "reward system" to take note in order to remember how to replicate the positive experience. In contrast, negative encounters deplete dopamine as a signal to avoid repeating them. This is a key learning mechanism, which also involves memory formation and motivation. Scientists believe the brain establishes a new temporary neural network to process new stimuli. Each repetition of the same experience triggers the identical neural firing sequence along an identical neural journey, with every duplication strengthening the synaptic links among the neurons involved. Neuroscientists say, "Neurons that fire together wire together." If this occurs enough times, a secure neural network is established, as if imprinted, and the brain can reliably access the information over time.
The hyperlearning hypothesis posits that schizophrenics have too much dopamine in their brains, too much of the time. Take the process described above and multiply it by orders of magnitude. The result is a world that a schizophrenic cannot make sense of, because literally everything is important, or no one thing is less important than anything else. There is literally no end to thinking, no conditional wrapper to bring anything to a conclusion.
Unsurprisingly, the artificial neural networks discussed above are modeled on precisely this process of reinforcement, except that the dopamine is replaced by an algorithmic stand-in. In 2011, Uli Grasemann and Risto Miikkulainen took the logical next step: they took a neural network called DISCERN and cranked up its virtual dopamine.
Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information – not as distinct units, but as statistical relationships of words, sentences, scripts and stories.
In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.
After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.
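To get a feel for what ‘cranking up the virtual dopamine' might mean, here is a toy sketch – not DISCERN, just a single running memory trace in which the learning rate plays the role of dopamine, setting how strongly any one experience is imprinted. The numbers and the update rule are invented for illustration.

```python
def imprint(experiences, learning_rate):
    """Blend each new experience into a running memory trace.
    A high learning rate means nothing is ever discounted."""
    memory = 0.0
    trace = []
    for x in experiences:
        memory = (1 - learning_rate) * memory + learning_rate * x
        trace.append(memory)
    return trace

noise = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]   # meaningless fluctuation

calm = imprint(noise, learning_rate=0.1)    # forgets most of each blip
manic = imprint(noise, learning_rate=0.9)   # imprints every single blip

# with a high rate the memory lurches with every stray detail;
# with a low rate the fluctuations mostly cancel out
assert max(abs(m) for m in manic) > max(abs(m) for m in calm)
```

At the low rate, the noise washes out and the trace stays near zero; at the high rate, every fluctuation is treated as vitally important – a crude picture of a system in which "literally everything is important."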
Even though I find this infinitely more terrifying than a neural net's ability to create a picture of a multi-headed dog-slug-squirrel, I still contend that there is no thinking going on, as we would like to imagine it. And we would very much like to imagine it: even the article cited above has as its headline ‘Scientists Afflict Computers with Schizophrenia to Better Understand the Human Brain'. It's almost as if schizophrenia is something you can pack into a syringe, virtual or otherwise, and inject it into the neural network of your choice, virtual or otherwise. (The actual peer-reviewed article is more soberly titled ‘Using computational patients to evaluate illness mechanisms in schizophrenia'.) We would be much better off understanding these neural networks as tools that provide us with a snapshot of a particular and narrow process. They are no more anthropomorphic than the shapes that clouds may suggest to us on a summer's afternoon. But we seem incapable of forgetting this. If we cannot learn to restrain our relentless pattern-seeking, consider what awaits us on the other end of the spectrum: it is not coincidental that the term ‘apophenia' was coined in 1958 by Klaus Conrad in a monograph on the inception of schizophrenia.
Monday, May 25, 2015
The “Invisible Web” Undermines Health Information Privacy
by Jalees Rehman
"The goal of privacy is not to protect some stable self from erosion but to create boundaries where this self can emerge, mutate, and stabilize. What matters here is the framework— or the procedure— rather than the outcome or the substance. Limits and constraints, in other words, can be productive— even if the entire conceit of "the Internet" suggests otherwise.
Evgeny Morozov in "To Save Everything, Click Here: The Folly of Technological Solutionism"
We cherish privacy in health matters because our health has such a profound impact on how we interact with other humans. If you are diagnosed with an illness, it should be your right to decide when and with whom you share this piece of information. Perhaps you want to hold off on telling your loved ones because you are worried about how it might affect them. Maybe you do not want your employer to know about your diagnosis because it could get you fired. And if your bank finds out, they could deny you a mortgage loan. These and many other reasons have resulted in laws and regulations that protect our personal health information. Family members, employers and insurers have no access to your health data unless you specifically authorize it. Even healthcare providers from two different medical institutions cannot share your medical information unless they can document your consent.
The recent study "Privacy Implications of Health Information Seeking on the Web," conducted by Tim Libert at the Annenberg School for Communication (University of Pennsylvania), shows that we have a far more nonchalant attitude regarding health privacy when it comes to personal health information on the internet. Libert analyzed 80,142 health-related webpages that users might come across while performing online searches for common diseases. For example, if a user searches Google for information on HIV, the Centers for Disease Control and Prevention (CDC) webpage on HIV/AIDS (http://www.cdc.gov/hiv/) is one of the top hits, and users will likely click on it. The information provided by the CDC is likely to be solid advice based on scientific results, but Libert was more interested in investigating whether visits to the CDC website were being tracked. He found that by visiting the CDC website, information about the visit is relayed to third-party corporate entities such as Google, Facebook and Twitter. The webpage contains "Share" or "Like" buttons, which is why the URL of the visited webpage (which contains the word "HIV") is passed on to them – even if the user never explicitly clicks on the buttons.
Libert found that 91% of health-related pages relay the URL to third parties, often unbeknownst to the user, and in 70% of the cases the URL contains sensitive terms such as "HIV" or "cancer," which is sufficient to tip off these third parties that you have been searching for information related to a specific disease. Most users probably do not know that they are being tracked, which is why Libert refers to this form of tracking as the "Invisible Web" – one that can only be unveiled by analyzing the hidden HTTP requests between servers. Here are some of the most common (invisible) partners which participate in the third-party exchanges:
[Table: the most common third-party entities and the percentage of health-related pages that relay information to each]
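For readers who want the mechanics: when a health page embeds a third-party resource such as a "Like" button script, the browser's request for that resource typically carries the page's own URL along in the Referer header. The following is a minimal sketch of that leak, not real tracking code – the widget URL is invented, and real trackers use more channels than this one header.

```python
from urllib.parse import urlparse

def third_party_request(page_url, widget_url):
    """The request a browser would make to fetch an embedded widget."""
    return {
        "GET": widget_url,
        # the sensitive page URL travels along automatically
        "Referer": page_url,
    }

req = third_party_request(
    page_url="http://www.cdc.gov/hiv/",  # what the user is reading
    widget_url="https://example-social.invalid/like-button.js",  # hypothetical
)

# the third party now knows which disease page was visited,
# even though the user never clicked anything
assert "hiv" in req["Referer"].lower()
```

Nothing in this exchange requires the user's action or consent; merely loading the page triggers the request, which is why the tracking stays invisible.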
What do the third parties do with your data? We do not really know because the laws and regulations are rather fuzzy here. We do know that Google, Facebook and Twitter primarily make money by advertising so they could potentially use your info and customize the ads you see. Just because you visited a page on breast cancer does not mean that the "Invisible Web" knows your name and address but they do know that you have some interest in breast cancer. It would make financial sense to send breast cancer related ads your way: books about breast cancer, new herbal miracle cures for cancer or even ads by pharmaceutical companies. It would be illegal for your physician to pass on your diagnosis or inquiry about breast cancer to an advertiser without your consent but when it comes to the "Invisible Web" there is a continuous chatter going on in the background about your health interests without your knowledge.
Some users won't mind receiving targeted ads. "If I am interested in web pages related to breast cancer, I could benefit from a few book suggestions by Amazon," you might say. But we do not know what else the information is being used for. The appearance of the data broker Experian on the third-party request list should serve as a red flag. Experian's main source of revenue is not advertising but amassing personal data for reports, such as credit reports, which are then sold to clients. If Experian knows that you are checking out breast cancer pages, then you should not be surprised if this information ends up stored in some personal data file about you.
How do we contain this sharing of personal health information? One obvious approach is to demand accountability from the third parties regarding the fate of your browsing history. We need laws that regulate how the information can be used, whether it can be passed on to advertisers or data brokers, and how long it is stored. WebMD's own privacy policy gives a sense of how broadly such information can currently be used:
We may use information we collect about you to:
· Administer your account;
· Provide you with access to particular tools and services;
· Respond to your inquiries and send you administrative communications;
· Obtain your feedback on our sites and our offerings;
· Statistically analyze user behavior and activity;
· Provide you and people with similar demographic characteristics and interests with more relevant content and advertisements;
· Conduct research and measurement activities;
· Send you personalized emails or secure electronic messages pertaining to your health interests, including news, announcements, reminders and opportunities from WebMD; or
· Send you relevant offers and informational materials on behalf of our sponsors pertaining to your health interests.
Perhaps one of the most effective solutions would be to make the "Invisible Web" more visible. If health-related pages were mandated to disclose all third-party requests in real time, for example via pop-ups ("Information about your visit to this page is now being sent to Amazon"), and to ask for consent in each case, users would be far more aware of the threat to personal privacy posed by health-related pages. Awareness of health privacy and of potential threats to it is routinely fostered in the real world, and there is no reason why this awareness should not be extended to online information.
Libert, Tim. "Privacy implications of health information seeking on the Web" Communications of the ACM, Vol. 58 No. 3, Pages 68-77, March 2015, doi: 10.1145/2658983 (PDF)
Monday, April 27, 2015
Murder Your Darling Hypotheses But Do Not Bury Them
by Jalees Rehman
"Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings."
Sir Arthur Quiller-Couch (1863–1944). On the Art of Writing. 1916
Murder your darlings. The British writer Sir Arthur Quiller-Couch shared this piece of writerly wisdom when he gave his inaugural lecture series at Cambridge, asking writers to consider deleting words, phrases or even paragraphs that are especially dear to them. The minute writers fall in love with what they write, they are bound to lose their objectivity and may not be able to judge how their choice of words will be perceived by the reader. But writers aren't the only ones who can fall prey to the Pygmalion syndrome. Scientists often find themselves in a similar situation when they develop "pet" or "darling" hypotheses.
How do scientists decide when it is time to murder their darling hypotheses? The simple answer is that scientists ought to give up scientific hypotheses once the experimental data is unable to support them, no matter how "darling" they are. However, the problem with scientific hypotheses is that they aren't just generated based on subjective whims. A scientific hypothesis is usually put forward after analyzing substantial amounts of experimental data. The better a hypothesis is at explaining the existing data, the more "darling" it becomes. Therefore, scientists are reluctant to discard a hypothesis because of just one piece of experimental data that contradicts it.
In addition to experimental data, a number of additional factors can also play a major role in determining whether scientists will either discard or uphold their darling scientific hypotheses. Some scientific careers are built on specific scientific hypotheses which set apart certain scientists from competing rival groups. Research grants, which are essential to the survival of a scientific laboratory by providing salary funds for the senior researchers as well as the junior trainees and research staff, are written in a hypothesis-focused manner, outlining experiments that will lead to the acceptance or rejection of selected scientific hypotheses. Well-written research grants always consider the possibility that the core hypothesis may be rejected based on the future experimental data. But if the hypothesis has to be rejected, then the scientist has to explain the discrepancies between the preferred hypothesis that is now falling into disrepute and all the preliminary data that had led her to formulate the initial hypothesis. Such discrepancies could endanger the renewal of the grant funding and the future of the laboratory. Last but not least, it is very difficult to publish a scholarly paper describing a rejected scientific hypothesis without providing an in-depth mechanistic explanation for why the hypothesis was wrong and proposing alternate hypotheses.
For example, it is quite reasonable for a cell biologist to formulate the hypothesis that protein A improves the survival of neurons by activating pathway X based on prior scientific studies which have shown that protein A is an activator of pathway X in neurons and other studies which prove that pathway X improves cell survival in skin cells. If the data supports the hypothesis, publishing this result is fairly straightforward because it conforms to the general expectations. However, if the data does not support this hypothesis then the scientist has to explain why. Is it because protein A did not activate pathway X in her experiments? Is it because pathway X functions differently in neurons than in skin cells? Is it because neurons and skin cells have a different threshold for survival? Experimental results that do not conform to the predictions have the potential to uncover exciting new scientific mechanisms but chasing down these alternate explanations requires a lot of time and resources which are becoming increasingly scarce. Therefore, it shouldn't come as a surprise that some scientists may consciously or subconsciously ignore selected pieces of experimental data which contradict their darling hypotheses.
Let us move from these hypothetical situations to the real world of laboratories. There is surprisingly little data on how and when scientists reject hypotheses, but John Fugelsang and Kevin Dunbar at Dartmouth conducted a rather unique study, "Theory and data interactions of the scientific mind: Evidence from the molecular and the cognitive laboratory" (2004), in which they researched researchers. They sat in on the laboratory meetings of three renowned molecular biology laboratories and carefully recorded how scientists presented their laboratory data and how they handled results which contradicted the predictions of their hypotheses and models.
In their final analysis, Fugelsang and Dunbar included 417 scientific results that were presented at the meetings, of which roughly half (223 out of 417) were not consistent with the predictions. Only 12% of these inconsistencies led to a change of the scientific model (and thus a revision of hypotheses). In the vast majority of cases, the laboratories decided to follow up by repeating and modifying the experimental protocols, assuming that the fault lay not with the hypotheses but with the manner in which the experiments were conducted. In the follow-up experiments, 84 of the inconsistent findings were replicated, and this in turn resulted in a gradual modification of the underlying models and hypotheses in the majority of cases. However, even when the inconsistent results were replicated, only 61% of the models were revised, which means that 39% of the cases did not lead to any significant changes.
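For concreteness, the proportions are easy to work out from the numbers reported. The absolute counts below are my own back-of-the-envelope arithmetic based on the percentages as described, not figures quoted from the paper:

```python
# numbers as reported from Fugelsang and Dunbar's observations
total_results = 417
inconsistent = 223

# share of presented results that contradicted predictions
share_inconsistent = 100 * inconsistent / total_results
assert round(share_inconsistent) == 53   # indeed "roughly half"

# only 12% of those inconsistencies prompted a model revision,
# i.e. on the order of 27 results out of the original 417
model_changes = round(0.12 * inconsistent)
assert model_changes == 27
```

In other words, of more than four hundred results presented, only a couple of dozen immediately shook a model – a vivid measure of how sticky darling hypotheses are.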
The study did not provide much information on the long-term fate of the hypotheses and models and we obviously cannot generalize the results of three molecular biology laboratory meetings at one university to the whole scientific enterprise. Also, Fugelsang and Dunbar's study did not have a large enough sample size to clearly identify the reasons why some scientists were willing to revise their models and others weren't. Was it because of varying complexity of experiments and models? Was it because of the approach of the individuals who conducted the experiments or the laboratory heads? I wish there were more studies like this because it would help us understand the scientific process better and maybe improve the quality of scientific research if we learned how different scientists handle inconsistent results.
In my own experience, I have also struggled with results which defied my scientific hypotheses. In 2002, we found that stem cells in human fat tissue could help grow new blood vessels. Yes, you could obtain fat from a liposuction performed by a plastic surgeon and inject these fat-derived stem cells into animal models of low blood flow in the legs. Within a week or two, the injected cells helped restore the blood flow to near normal levels! The simplest hypothesis was that the stem cells converted into endothelial cells, the cell type which forms the lining of blood vessels. However, after several months of experiments, I found no consistent evidence of fat-derived stem cells transforming into endothelial cells. We ended up publishing a paper which proposed an alternative explanation that the stem cells were releasing growth factors that helped grow blood vessels. But this explanation was not as satisfying as I had hoped. It did not account for the fact that the stem cells had aligned themselves alongside blood vessel structures and behaved like blood vessel cells.
Even though I "murdered" my darling hypothesis of fat-derived stem cells converting into blood vessel endothelial cells at the time, I did not "bury" the hypothesis. It kept simmering in the back of my mind until roughly one decade later, when we were again studying how stem cells improve blood vessel growth. The difference was that this time, I had access to a live-imaging confocal laser microscope which allowed us to take images of cells labeled with red and green fluorescent dyes over long periods of time. Below, you can see a video of human bone marrow mesenchymal stem cells (labeled green) and human endothelial cells (labeled red) observed with the microscope overnight. The short movie compresses images obtained throughout the night and shows that the stem cells indeed do not convert into endothelial cells. Instead, they form a scaffold and guide the endothelial cells (red), allowing them to move along the green scaffold and thus construct their network. This work was published in 2013 in the Journal of Molecular and Cellular Cardiology, roughly a decade after I had been forced to give up on the initial hypothesis. Back in 2002, I had assumed that the stem cells were turning into blood vessel endothelial cells because they aligned themselves in blood-vessel-like structures. I had never considered the possibility that they were a scaffold for the endothelial cells.
This and other similar experiences have led me to reformulate the "murder your darlings" commandment to "murder your darling hypotheses but do not bury them". Instead of repeatedly trying to defend scientific hypotheses that cannot be supported by emerging experimental data, it is better to give up on them. But this does not mean that we should forget and bury those initial hypotheses. With newer technologies, resources or collaborations, we may find ways to explain inconsistent results years later that were not previously available to us. This is why I regularly peruse my cemetery of dead hypotheses on my hard drive to see if there are ways of perhaps resurrecting them, not in their original form but in a modification that I am now able to test.
Fugelsang, Jonathan A.; Stein, Courtney B.; Green, Adam E.; Dunbar, Kevin N. (2004) "Theory and data interactions of the scientific mind: Evidence from the molecular and the cognitive laboratory" Canadian Journal of Experimental Psychology Vol 58(2), 86-95. http://dx.doi.org/10.1037/h0085799
Monday, March 30, 2015
STEM Education Promotes Critical Thinking and Creativity: A Response to Fareed Zakaria
by Jalees Rehman
All obsessions can be dangerous. When I read the title "Why America's obsession with STEM education is dangerous" of Fareed Zakaria's article in the Washington Post, I assumed that he would call for more balance in education. An exclusive focus on STEM (science, technology, engineering and mathematics) is unhealthy because students miss out on the valuable knowledge that the arts and humanities teach us. I would wholeheartedly agree with such a call for balance because I believe that a comprehensive education makes us better human beings. This is the reason why I encourage discussions about literature and philosophy in my scientific laboratory. To my surprise and dismay, Zakaria did not analyze the respective strengths of liberal arts education and STEM education. Instead, his article is laced with odd clichés and misrepresentations of STEM.
Misrepresentation #1: STEM teaches technical skills instead of critical thinking and creativity
"If Americans are united in any conviction these days, it is that we urgently need to shift the country's education toward the teaching of specific, technical skills. Every month, it seems, we hear about our children's bad test scores in math and science — and about new initiatives from companies, universities or foundations to expand STEM courses (science, technology, engineering and math) and deemphasize the humanities.
"The United States has led the world in economic dynamism, innovation and entrepreneurship thanks to exactly the kind of teaching we are now told to defenestrate. A broad general education helps foster critical thinking and creativity."
Zakaria is correct when he states that a broad education fosters creativity and critical thinking but his article portrays STEM as being primarily focused on technical skills whereas liberal education focuses on critical thinking and creativity. Zakaria's view is at odds with the goals of STEM education. As a scientist who mentors Ph.D students in the life sciences and in engineering, my goal is to help our students become critical and creative thinkers.
Students learn technical skills such as how to culture cells in a dish, insert DNA into cells, use microscopes or quantify protein levels but these technical skills are not the focus of the educational program. Learning a few technical skills is easy but the real goal is for students to learn how to develop innovative scientific hypotheses, be creative in terms of designing experiments that test those hypotheses, learn how to be critical of their own results and use logic to analyze their experiments.
My own teaching and mentoring experience focuses on STEM graduate students but the STEM programs that I have attended at elementary and middle schools also emphasize teaching basic concepts and critical thinking instead of "technical skills". The United States needs to promote STEM education because of the prevailing science illiteracy in the country and not because it needs to train technically skilled worker bees. Here are some examples of science illiteracy in the US: Forty-two percent of Americans are creationists who believe that God created humans in their present form within the last 10,000 years or so. Fifty-two percent of Americans are unsure whether there is a link between vaccines and autism and six percent are convinced that vaccines can cause autism even though there is broad consensus among scientists from all over the world that vaccines do NOT cause autism. And only sixty-one percent are convinced that there is solid evidence for global warming.
A solid STEM education helps citizens apply critical thinking to distinguish quackery from true science, benefiting their own well-being as well as society.
Zakaria's criticism of obsessing about test scores is spot on. The subservience to test scores undermines the educational system because some teachers and school administrators may focus on teaching test-taking instead of critical thinking and creativity. But this applies to the arts and humanities as well as the STEM fields because language skills are also assessed by standardized tests. Just like the STEM fields, the arts and humanities have to find a balance between teaching required technical skills (i.e. grammar, punctuation, test-taking strategies, technical ability to play an instrument) and the more challenging tasks of teaching students how to be critical and creative.
Misrepresentation #2: Japanese aren't creative
Zakaria's views on Japan are laced with racist clichés:
"Asian countries like Japan and South Korea have benefitted enormously from having skilled workforces. But technical chops are just one ingredient needed for innovation and economic success. America overcomes its disadvantage — a less-technically-trained workforce — with other advantages such as creativity, critical thinking and an optimistic outlook. A country like Japan, by contrast, can't do as much with its well-trained workers because it lacks many of the factors that produce continuous innovation."
Some of the most innovative scientific work in my own field of scientific research – stem cell biology – is carried out in Japan. Referring to Japanese as "well-trained workers" does not do justice to the innovation and creativity in the STEM fields and it also conveniently ignores Japanese contributions to the arts and humanities. I doubt that the US movie directors who have re-made Kurosawa movies or the literary critics who each year expect that Haruki Murakami will receive the Nobel Prize in Literature would agree with Zakaria.
Misrepresentation #3: STEM does not value good writing
Writing well, good study habits and clear thinking are important. But Zakaria seems to suggest that these are not necessarily part of a good math and science education:
"No matter how strong your math and science skills are, you still need to know how to learn, think and even write. Jeff Bezos, the founder of Amazon (and the owner of this newspaper), insists that his senior executives write memos, often as long as six printed pages, and begins senior-management meetings with a period of quiet time, sometimes as long as 30 minutes, while everyone reads the "narratives" to themselves and makes notes on them. In an interview with Fortune's Adam Lashinsky, Bezos said: "Full sentences are harder to write. They have verbs. The paragraphs have topic sentences. There is no way to write a six-page, narratively structured memo and not have clear thinking."
Communicating science is an essential part of science. Until scientific work is reviewed by other scientists and published as a paper it is not considered complete. There is a substantial amount of variability in the quality of writing among scientists. Some scientists are great at logically structuring their papers and conveying the core ideas whereas other scientific papers leave the reader in a state of utter confusion. What Jeff Bezos proposes for his employees is already common practice in the STEM world. In preparation for scientific meetings and discussions, scientists structure their ideas into outlines for manuscripts or grant proposals using proper paragraphs and sentences. Well-written scientific manuscripts are highly valued but the overall quality of writing in the STEM fields could be greatly improved. However, the same probably also holds true for people with a liberal arts education. Not every philosopher is a great writer. Decoding the human genome is a breeze when compared to decoding certain postmodern philosophical texts.
Misrepresentation #4: We should study the humanities and arts because Silicon Valley wants us to.
In support of his arguments for a stronger liberal arts education, Zakaria primarily quotes Silicon Valley celebrities such as Steve Jobs, Mark Zuckerberg and Jeff Bezos. The article suggests that a liberal arts education will increase entrepreneurship and protect American jobs. Are these the main reasons why we need to reinvigorate liberal arts education? The importance of a general, balanced education makes a lot of sense to me, but is increased job security a convincing argument for pursuing a liberal arts degree? Instead of a handful of anecdotal comments by Silicon Valley prophets, I would prefer to see some actual data that supports Zakaria's assertion. But perhaps I am being too STEMy.
There is a lot of room to improve STEM education. We have to make sure that we strive to focus on the essence of STEM which is critical thinking and creativity. We should also make a stronger effort to integrate arts and humanities into STEM education. In the same vein, it would be good to incorporate more STEM education into liberal arts education in order to combat scientific illiteracy. Instead of invoking "Two Cultures" scenarios and creating straw man arguments, educators of all fields need to collaborate in order to improve the overall quality of education.
Monday, March 23, 2015
You're on the Air!
by Carol A. Westbrook
The excitement of a live TV broadcast...a breaking news story...a presidential announcement...an appearance of the Beatles on Ed Sullivan. These words conjure up a time when all America would tune in to the same show, and families would gather round their TV set to watch it together.
This is not how we watch TV anymore. It is watched at different times and on different devices: mobile phones, computers and tablets, previously recorded shows on a DVR, or streaming services such as Netflix and, soon, Apple. Live news can be viewed on the web, via cell phone apps, or as tweets. An increasing number of people are forgoing TV completely to get news and entertainment from other sources, with content that is never "on the air" (see the chart below, from the Nov 24, 2013 Business Insider). Many Americans don't even own a television set!
We take it for granted that we will have instant access to video content--whether digital or analog, television, cell phone or iPad. But video has its roots in television; the word "television" itself means "to view over a distance." The story of TV broadcasting is a fascinating one about technology development, entrepreneurship, engineering, and even space exploration. It is an American story, and it is a story worth telling.
At first, America was tuned in to radio. From the early 1920s through the 1940s, people would gather around their radios to listen to music and variety shows, serial dramas, news, and special announcements. Yet they dreamed of seeing moving pictures over the airwaves, like they did in newsreels and movies. A series of technical breakthroughs was needed to make this happen.
The first important breakthrough was the invention in 1927 of a way to send and view moving images electronically--Farnsworth's "television." There followed a series of patent wars, but at the end of the day, we had television sets which could be used to view moving pictures transmitted over the airwaves. In 1939, RCA televised the opening of the New York World's Fair, including a speech by the first President to appear on TV, President Franklin D. Roosevelt. There were few televisions to watch it on, though, until after the end of World War II, when America's demand for commercial television rapidly increased.
This led to the next big advance in television--network broadcasting. The big radio broadcast companies such as RCA (Radio Corporation of America) and CBS (Columbia Broadcasting System) naturally expanded into this medium, but their infrastructure was limited. Though the frequencies used for AM radio transmission, from 540 to 1700 kHz (kHz: thousands of cycles per second), can travel long distances from their transmitting stations, each channel can carry only a limited amount of information; in other words, it has a narrow bandwidth. Much higher frequencies, in the megahertz range (millions of cycles per second), are required for television so that a channel can carry the additional information needed for picture as well as sound. As a result there was a scramble for higher frequencies, which was mediated by the FCC (Federal Communications Commission), the entity that regulates broadcasting. In 1948 the FCC allocated the higher frequency bands, designating which ones would be reserved for radio and which for television, and assigned channel numbers to the TV bands. The VHF television channels were designated 2 - 13. Channel 1 was reallocated to public and emergency communications, which explains why your TV starts with Channel 2! Several higher frequencies, designated as UHF, were reserved for later TV use, including channels 32 to 70. The FCC also froze the number of station licenses at 108 in 1948.
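The bandwidth gap that drove this scramble for spectrum is easy to quantify. Here is a back-of-the-envelope sketch in Python using typical figures (a roughly 10 kHz AM radio channel and a 6 MHz analog NTSC television channel; these are standard values I am supplying for illustration, not numbers from the article):

```python
# Why television needed much higher frequencies: a single analog TV channel
# needs vastly more spectrum than an AM radio station.
# Assumed typical values (not from the article):
am_channel_hz = 10_000        # ~10 kHz occupied by one AM station
tv_channel_hz = 6_000_000     # 6 MHz occupied by one analog NTSC channel

# The entire historical AM broadcast band (540-1600 kHz) is ~1.06 MHz wide,
# which is not even enough room for a single television channel.
am_band_hz = 1_600_000 - 540_000

print(f"One TV channel = {tv_channel_hz // am_channel_hz} AM channels")   # 600
print(f"AM band / one TV channel = {am_band_hz / tv_channel_hz:.2f}")     # 0.18
```

One television channel swallows as much spectrum as roughly 600 AM stations, which is why TV had to move up into the megahertz range rather than squeeze into the existing radio band.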
Because the number of broadcast stations was limited, TV was available only if you lived within range of a broadcast network, primarily CBS, NBC or ABC. In other words, if you lived in a large city--New York, Chicago, Washington, Philadelphia, Boston, Los Angeles, Seattle or Salt Lake City. Outside of these areas, you might have a chance if you lived on a hill, put up a very high antenna, and prayed for a thermal inversion or a charged ionosphere to propagate the short-range signal to your television. My husband Rick, an electrical engineer and amateur radio buff, recounts that he watched the coronation of Queen Elizabeth in 1953 from his TV set in a small town in Pennsylvania, due to an environmental quirk (sunspots?), but everyone else had to wait for the films to cross the Atlantic and be shown on their local station.
Yet, for those of us who lived in a prime location, there was an ever-expanding number of programs to watch, such as the Texaco Star Theater, the Milton Berle Show, and a variety of news shows. Many of us grew up on Howdy Doody, or on shows created locally and televised live. I recall walking home from grade school for lunch as a child in Chicago, spending an hour watching "Lunchtime Little Theater" before returning to school to finish the afternoon's lessons! Many of these early shows have been lost, as they were never recorded, and videotape had not yet been invented.
Television broadcasting eventually went nationwide, thanks to microwave transmission, which developed out of WWII radar. This technology was used to relay television broadcasts to local affiliate stations, which could then broadcast them on their regular channels in the local area. Microwaves use point-to-point transmission, from one microwave tower to the next, and microwave towers were constructed to span the continent. The FCC increased the number of television station licenses, and the broadcast companies truly became "networks." Finally, everyone could watch the same shows at the same time.
But TV was still limited geographically--it could not cross the ocean. This problem was not solved until the third important technology was developed, that of satellite broadcasting. Sputnik, the first space satellite, was launched in 1957. Five years later, July 23, 1962, the first satellite-based transatlantic broadcast took place using the Telstar satellite to relay TV signals from the US ground station in Andover, Maine, to the receiving stations in Goonhilly Downs, England and Pleumeur-Bodou, France.
It's fun to watch this broadcast, which was introduced by Walter Cronkite and began with a split screen showing the Statue of Liberty on the left and the Eiffel Tower on the right. The satellite transmission was followed by a live broadcast of an ongoing baseball game in Chicago's Wrigley Field between the Philadelphia Phillies and the Chicago Cubs, and also included live remarks from President Kennedy, as well as footage from Cape Canaveral, Florida, Seattle, and Canada. I've included a short clip of the Kennedy broadcast.
If you looked up at the night sky in 1962, you might have seen the Telstar satellite zoom across your backyard sky. It took about 20 minutes to cross the sky, passing overhead every 2.5 hours. Broadcast signals could be relayed through Telstar between land stations on either side of the Atlantic only during this 20-minute transit window, so the tracking satellite dishes had to be fast-moving; they also had to be very large to capture such a weak signal. It is impressive to see the massive size of the dishes in these satellite ground stations, and to imagine how quickly they had to move to sweep the sky. This picture of Goonhilly Downs gives you an idea of their size.
Although Telstar demonstrated that satellite transmission was possible for long-range broadcasting, the equipment and precision needed for tracking a rapidly-moving low-earth satellite were onerous. So the space scientists at NASA and Bell Labs launched the next generation of satellites, named "Syncom," into high earth orbit at just the right distance from the earth so that their orbital period matched the earth's rotation. When orbiting directly above the equator, the Syncom satellites appeared to be stationary over a single geographic location. Thus, the geostationary (or geosynchronous) satellite was born.
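That "just the right distance" can be derived from Kepler's third law: find the orbital radius at which a satellite's period equals one rotation of the Earth. A short Python sketch, using standard physical constants (my own illustration, not a calculation from the article):

```python
import math

# Geostationary condition: orbital period T equals Earth's rotation period.
# Kepler's third law: T = 2*pi*sqrt(r^3 / mu)  =>  r = (mu * (T / 2pi)^2)^(1/3)
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137         # Earth's equatorial radius, m
T_SIDEREAL = 86_164         # sidereal day (Earth's true rotation period), s

r = (MU_EARTH * (T_SIDEREAL / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(f"Geostationary altitude: ~{altitude_km:,.0f} km")  # ~35,786 km
```

At roughly 36,000 km up, a Syncom-style satellite hangs over one spot on the equator, so a ground antenna can simply point at it and stay put, instead of chasing a low-orbit satellite like Telstar across the sky every 2.5 hours.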
Stationary satellites paved the way for a tremendous expansion in telecommunications, and are still in widespread use. Satellites enabled the rise of cable TV networks such as HBO and CNN in the 1970s, which broadcast without having to go through FCC-regulated television transmitting stations. Instead, their programming was sent via satellite to the cable service, and from there selected programs went by cable to the TVs of paying subscribers. These stations could also be accessed through a satellite TV subscription, such as Galaxy, which broadcast them directly to their customers' satellite dishes. Because early satellites could only carry a limited number of cable channels, multiple satellites had to be accessed to provide the purchased programming. Moveable satellite dishes of about four to twelve feet in diameter were positioned in subscribers' yards or on their roofs. Satellite TV further expanded Americans' access to television, reaching rural communities that had limited (or no) cable service and poor antenna reception; it also provided special paid programming, such as sports events watched at bars. This picture shows a 10-foot moveable dish in my yard in Indiana.
Stationary TV dishes--such as DirecTV antennas--were not feasible until satellites could carry more programming, so that the dish could stay parked on a single geosynchronous satellite. The technical advance that allowed this was the development of digital video, in the late 1990s. Digital video would eventually displace analog--remember when the DVD was introduced, rendering VCRs obsolete in just a few years' time? Each geosynchronous satellite could now carry many more simultaneous channels than before, since each digital channel takes up only a small fraction of the bandwidth of an analog signal. Digital signals also increased the capacity of traditional TV, broadcast from ground towers, which eventually transitioned to the HDTV standard, broadcast on the high-capacity UHF frequencies. The transition to HDTV was completed in June 2009, when the TV networks abandoned analog transmission on the old VHF channels, though many stations kept the old channel numbers (2 - 13). TV viewers are often surprised to learn that they can watch their favorite channels on newer HDTV sets using only a simple indoor antenna, and many are giving up their pricey cable services. Digital video signals were also ready for growth in other media, as they could theoretically be transmitted over the internet or by cell phone, and could be stored easily for re-broadcast.
Yet one more step was needed before widespread internet and cellular-based video could occur, allowing us to watch television programs as we do now. This was not a technical advance but an economic one--the sharp drop in the price of computer memory, which happened around 2009. Prior to that, computers had far less memory and storage capacity. Perhaps you remember the agony of trying to watch a YouTube video in its early years? Or of waiting for your browser to load? Now we take it for granted that we can view digitized images, create them, share them, watch pre-recorded programs, and record on our TiVo from multiple sources. There seems to be no limit to the ways that we can enjoy television, truly viewing "pictures at a distance." It is a far cry from the early years of television that many of us still remember, when we all gathered around a small, black-and-white screen with poor sound to watch John, Paul, George and Ringo sing "She Loves You." Now those were the days!
Thanks to my husband Rick Rikoski, for his patient and helpful explanations of the technology of television and its early development.
Monday, March 02, 2015
Does Thinking About God Increase Our Willingness to Make Risky Decisions?
by Jalees Rehman
There are at least two ways in which the topic of trust in God is broached in the Friday sermons I have attended in the United States. Some imams lament the decline of trust in God in the age of modernity. Instead of trusting that God is looking out for the believers, modern-day Muslims believe that they can control their destiny on their own, without any Divine assistance. These imams see this lack of trust in God as a sign of weakening faith and an overall demise in piety. But in recent years, I have also heard an increasing number of sermons mentioning an important story from the Muslim tradition. In this story, Prophet Muhammad asked a Bedouin why he was leaving his camel untied, thus taking the risk that this valuable animal might wander off and disappear. When the Bedouin responded that he placed his trust in God, who would ensure that the animal stayed put, the Prophet told him that he still needed to first tie up his camel and then place his trust in God. Sermons referring to this story admonish their audience to avoid the trap of fatalism: trusting God does not obviate the need for rational and responsible action by each individual.
It is much easier for me to identify with the camel-tying camp because I find it rather challenging to take risks based solely on trust in an inscrutable and minimally communicative entity. Both believers and non-believers take risks in personal matters such as finance or health. However, in my experience, many believers who make a risky financial decision, or who take a health risk by rejecting a medical treatment backed by strong scientific evidence, tend to invoke the name of God when explaining why they took the risk. There is a sense that God is there to back them up and provide some security if the risky decision leads to a detrimental outcome. It would therefore not be far-fetched to conclude that invoking the name of God may increase risk-taking behavior, especially in people with firm religious beliefs. Nevertheless, psychological research in past decades has suggested the opposite: religiosity and reminders of God seem to be associated with a reduction in risk-taking behavior.
Daniella Kupor and her colleagues at Stanford University have recently published the paper "Anticipating Divine Protection? Reminders of God Can Increase Nonmoral Risk Taking", which takes a new look at the link between invoking the name of God and risky behaviors. The researchers hypothesized that reminders of God may have opposite effects on different types of risk-taking behavior. For example, risk-taking behavior that is deemed 'immoral', such as taking sexual risks or cheating, may be suppressed by invoking God, whereas non-moral risk-taking, such as making risky investments or skydiving, might be increased because reminders of God provide a sense of security. According to Kupor and colleagues, it is important to classify the type of risky behavior in relation to how society perceives God's approval or disapproval of the behavior. The researchers conducted a variety of experiments to test this hypothesis using online study participants.
One of the experiments involved running ads on a social media network and then assessing how often the social media users clicked on slightly different wordings of the ad texts. The researchers ran the ads 452,051 times on accounts registered to users over the age of 18 residing in the United States. The participants saw ads for either a non-moral risk-taking behavior (skydiving), a moral risk-taking behavior (bribery) or a control behavior (playing video games), and each ad came in either a 'God version' or a standard version.
Here are the two versions of the skydiving ad (both versions had a picture of a person skydiving):
God knows what you are missing! Find skydiving near you. Click here, feel the thrill!
You don't know what you are missing! Find skydiving near you. Click here, feel the thrill!
The percentage of users who clicked on the skydiving ad in the 'God version' was twice as high as in the group which saw the standard "You don't know what you are missing" phrasing! One explanation for the significantly higher ad success rate is that "God knows…" might have struck the ad viewers as rather unusual and piqued their curiosity. Instead of reflecting an increased propensity to take risks, perhaps the viewers just wanted to find out what was meant by "God knows…". However, the response to the bribery ad suggests that it isn't mere curiosity. These are the two versions of the bribery ad (both versions had an image of two hands exchanging money):
Learn How to Bribe!
God knows what you are missing! Learn how to bribe with little risk of getting caught!
Learn How to Bribe!
You don't know what you are missing! Learn how to bribe with little risk of getting caught!
In this case, the ‘God version' cut down the percentage of clicks to less than half of the standard version. The researchers concluded that invoking the name of God prevented the users from wanting to find out more about bribery because they consciously or subconsciously associated bribery with being immoral and rejected by God.
These findings are quite remarkable because they suggest that a single mention of the word 'God' in an ad can have opposite effects on two different types of risk-taking: the non-moral thrill of skydiving versus the immoral risk of taking bribes.
Clicking on an ad for a potentially risky behavior is not quite the same as actually engaging in that behavior. This is why the researchers also conducted a separate study in which participants were asked to answer a set of questions after viewing certain colors. Participants could choose between Option 1 (a short 2 minute survey and receiving an additional 25 cents as a reward) or Option 2 (four minute survey, no additional financial incentive). The participants were also informed that Option 1 was more risky with the following label:
Eye Hazard: Option 1 not for individuals under 18. The bright colors in this task may damage the retina and cornea in the eyes. In extreme cases it can also cause macular degeneration.
In reality, neither of the two options was damaging to the eyes of the participants but the participants did not know this. This set-up allowed the researchers to assess the likelihood of the participants taking the risk of potentially injurious light exposure to their eyes. To test the impact of God reminders, the researchers assigned the participants to read one of two texts, both of which were adapted from Wikipedia, before deciding on Option 1 or Option 2:
Text used for participants in the control group:
"In 2006, the International Astronomers' Union passed a resolution outlining three conditions for an object to be called a planet. First, the object must orbit the sun; second, the object must be a sphere; and third, it must have cleared the neighborhood around its orbit. Pluto does not meet the third condition, and is thus not a planet."
Text used for the participants in the ‘God reminder' group:
"God is often thought of as a supreme being. Theologians have described God as having many attributes, including omniscience (infinite knowledge), omnipotence (unlimited power), omnipresence (present everywhere), and omnibenevolence (perfect goodness). God has also been conceived as being incorporeal (immaterial), a personal being, and the "greatest conceivable existent."
As hypothesized by the researchers, a significantly higher proportion of participants chose the supposedly harmful Option 1 in the ‘God reminder' group (96%) than in the control group (84%). Reading a single paragraph about God's attributes was apparently sufficient to lull more participants into the risk of exposing their eyes to potential harm. The overall high percentage of participants choosing Option 1 even in the control condition is probably due to the fact that it offered a greater financial reward (although it seems a bit odd that participants were willing to sell out their retinas for a quarter, but maybe they did not really take the risk very seriously).
A limitation of the study is that it does not provide any information on whether the impact of mentioning God depended on the religious beliefs of the participants. Do 'God reminders' affect believers as well as atheists and agnostics, or do they only work in people who clearly identify with a religious tradition? Another limitation is that even though many of the observed differences between the 'God condition' and the control conditions were statistically significant, the actual differences in numbers were less impressive. For example, in the skydiving ad experiment, the click-through rate was about 0.03% for the standard ad and 0.06% in the 'God condition'. This is a doubling, but how meaningful is a doubling when the overall click rates are so low? Even the difference between the two groups who read the Wikipedia texts and chose Option 1 (96% vs. 84%) does not seem very impressive. However, one has to bear in mind that all of these interventions were very subtle: inserting a single mention of God into a social media ad or asking participants to read a single paragraph about God.
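The tension between statistical significance and a small absolute effect can be made concrete with a standard two-proportion z-test. The sketch below uses hypothetical click counts chosen only to match the quoted rates (roughly 0.03% vs. 0.06%); the per-group impression numbers are my assumptions, not figures from the paper:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled normal approximation."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (x2 / n2 - x1 / n1) / se

# Hypothetical counts consistent with the reported click rates,
# assuming ~75,000 impressions per ad version (an assumption):
z = two_proportion_z(x1=22, n1=75_000,   # standard ad:  ~0.03% clicks
                     x2=45, n2=75_000)   # 'God' ad:     ~0.06% clicks
print(f"z = {z:.2f}")
# |z| > 1.96 is significant at the 5% level, even though the absolute
# difference is only about 0.03 percentage points.
```

With sample sizes this large, even a tiny absolute difference in proportions clears the conventional significance threshold, which is exactly why a doubling of a very low click rate can be statistically real yet modest in practical terms.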
People who live in societies which are suffused with religion such as the United States or Pakistan are continuously reminded of God, whether they glance at their banknotes, turn on the TV or take a pledge of allegiance in school. If the mere mention of God in an ad can already sway some of us to increase our willingness to take risks, what impact does the continuous barrage of God mentions have on our overall risk-taking behavior? Despite its limitations, the work by Kupor and colleagues provides a fascinating new insight on the link between reminders of God and risk-taking behavior. By demonstrating the need to replace blanket statements regarding the relationship between God, religiosity and risk-taking with a more subtle distinction between moral and non-moral risky behaviors, the researchers are paving the way for fascinating future studies on how religion and mentions of God influence human behavior and decision-making.
Kupor DM, Laurin K, Levav J. "Anticipating Divine Protection? Reminders of God Can Increase Nonmoral Risk Taking" Psychological Science (2015) doi: 10.1177/0956797614563108
Monday, February 02, 2015
Literature and Philosophy in the Laboratory Meeting
by Jalees Rehman
Research institutions in the life sciences engage in two types of regular scientific meet-ups: scientific seminars and lab meetings. The structure of scientific seminars is fairly standard. Speakers give Powerpoint presentations (typically 45 to 55 minutes long) which provide the necessary scientific background, summarize their group's recent published scientific work and then (hopefully) present newer, unpublished data. Lab meetings are a rather different affair. The purpose of a lab meeting is to share the scientific work-in-progress with one's peers within a research group and also to update the laboratory heads. Lab meetings are usually less formal than seminars, and all members of a research group are encouraged to critique the presented scientific data and work-in-progress. There is no need to provide much background information because the audience of peers is already well-acquainted with the subject and it is not uncommon to show raw, unprocessed data and images in order to solicit constructive criticism and guidance from lab members and mentors on how to interpret the data. This enables peer review in real-time, so that, hopefully, major errors and flaws can be averted and newer ideas incorporated into the ongoing experiments.
During the past two decades that I have actively participated in biological, psychological and medical research, I have observed very different styles of lab meetings. Some involve brief 5-10 minute updates from each group member; others develop a rotation system in which one lab member has to present the progress of their ongoing work in a seminar-like, polished format with publication-quality images. Some labs have two hour meetings twice a week, other labs meet only every two weeks for an hour. Some groups bring snacks or coffee to lab meetings, others spend a lot of time discussing logistics such as obtaining and sharing biological reagents or establishing timelines for submitting manuscripts and grants. During the first decade of my work as a researcher, I was a trainee and followed the format of whatever group I belonged to. During the past decade, I have been heading my own research group and it has become my responsibility to structure our lab meetings. I do not know which format works best, so I approach lab meetings like our experiments. Developing a good lab meeting structure is a work-in-progress which requires continuous exploration and testing of new approaches. During the current academic year, I decided to try out a new twist: incorporating literature and philosophy into the weekly lab meetings.
My research group studies stem cells and tissue engineering, cellular metabolism in cancer cells and stem cells, and the inflammation of blood vessels. Most of our work focuses on identifying molecular and cellular pathways in cells, and we then test our findings in animal models. Over the years, I have noticed that the increasing complexity of the molecular and cellular signaling pathways and the technologies we employ makes it easy to forget the "big picture" of why we are even conducting the experiments. Determining whether protein A is required for phenomenon X, and whether protein B is a necessary co-activator which acts in concert with protein A, becomes such a central focus of our work that we may not always remember what it is that compels us to study phenomenon X in the first place. Some of our research has direct medical relevance, but at other times we primarily want to unravel the awe-inspiring complexity of cellular processes. But the question of whether our work is establishing a definitive cause-effect relationship or whether we are uncovering yet another mechanism within an intricate web of causes and effects sometimes falls by the wayside. When asked to explain the purpose or goals of our research, we have become so used to directing a laser pointer onto a slide of a cellular model that it becomes challenging to explain the nature of our work without visual aids.
This fall, I introduced a new component into our weekly lab meetings. After our usual round-up of new experimental data and progress, I suggested that each week one lab member should give a brief 15-minute overview of a book they had recently finished or were still reading. The overview was meant to be a "teaser" without spoilers, explaining why they had started reading the book, what they liked about it, and whether they would recommend it to others. One major condition was to speak about the book without any PowerPoint slides! But there weren't any major restrictions when it came to the book; it could be fiction or non-fiction and published in any language of the world (but ideally also available in an English translation). If lab members were interested and wanted to talk more about the book, then we would continue to discuss it; otherwise we would disband and return to our usual work. If nobody in my lab wanted to talk about a book, then I would give an impromptu mini-talk (without PowerPoint) about a topic relating to the philosophy or culture of science. I use the term "culture of science" broadly to encompass topics such as the peer review process and post-publication peer review, the question of reproducibility of scientific findings, retractions of scientific papers, science communication and science policy – topics which have not been traditionally considered philosophy of science issues but still relate to the process of scientific discovery and the dissemination of scientific findings.
One member of our group introduced us to "For Whom the Bell Tolls" by Ernest Hemingway. He had also recently lived in Spain as a postdoctoral research fellow and shared some of his own personal experiences about how his Spanish friends and colleagues talked about the Spanish Civil War. At another lab meeting, we heard about "Sycamore Row" by John Grisham, and the ensuing discussion revolved around race relations in Mississippi. I spoke about "A Tale for the Time Being" by Ruth Ozeki and the difficulties that the book's protagonist faced as an outsider when her family returned to Japan after living in Silicon Valley. I think that the book which got nearly everyone in the group talking was "Far From the Tree: Parents, Children and the Search for Identity" by Andrew Solomon. The book describes how families grapple with profound physical or cognitive differences between parents and children. The PhD student who discussed the book focused on the "Deafness" chapter of this nearly 1000-page tome, but she also placed it in the broader context of parenting, love and the stigma of disability. We stayed in the conference room long after the planned 15 minutes, talking about being "disabled" or being "differently abled" and the challenges that parents and children face.
On the weeks when nobody had a book they wanted to present, we used the time to touch on the cultural and philosophical aspects of science, such as Thomas Kuhn's concept of paradigm shifts in "The Structure of Scientific Revolutions", Karl Popper's principles of falsifiability of scientific statements, the challenge of reproducibility of scientific results in stem cell biology and cancer research, or the emergence of Pubpeer as a post-publication peer review website. Some of the lab members had heard of Thomas Kuhn's or Karl Popper's ideas before, but by coupling them to a lab meeting, we were able to illustrate these ideas using our own work. A lot of 20th-century philosophy of science arose from ideas rooted in physics. When undergraduate or graduate students take courses on philosophy of science, it isn't always easy for them to apply these abstract principles to their own lab work, especially if they pursue a research career in the life sciences. Thomas Kuhn saw Newtonian and Einsteinian theories as distinct paradigms, but what constitutes a paradigm shift in stem cell biology? Is the ability to generate induced pluripotent stem cells from mature adult cells a paradigm shift or "just" a technological advance?
It is difficult for me to know whether the members of my research group enjoy or benefit from these humanities blurbs at the end of our lab meetings. Perhaps they are just tolerating them as eccentricities of the management and maybe they will tire of them. I personally find these sessions valuable because I believe they help ground us in reality. They remind us that it is important to think and read outside of the box. As scientists, we all read numerous scientific articles every week just to stay up-to-date in our area(s) of expertise, but that does not exempt us from also thinking and reading about important issues facing society and the world we live in. I do not know whether discussing literature and philosophy makes us better scientists but I hope that it makes us better people.
Monday, January 05, 2015
Typical Dreams: A Comparison of Dreams Across Cultures
by Jalees Rehman
But I, being poor, have only my dreams;
I have spread my dreams under your feet;
Tread softly because you tread on my dreams.
William Butler Yeats – from "Aedh Wishes for the Cloths of Heaven"
Have you ever wondered how the content of your dreams differs from that of your friends? How about the dreams of people raised in different countries and cultures? It is not always easy to compare dreams of distinct individuals because the content of dreams depends on our personal experiences. This is why dream researchers have developed standardized dream questionnaires in which common thematic elements are grouped together. These questionnaires can be translated into various languages and used to survey and scientifically analyze the content of dreams. Open-ended questions about dreams might elicit free-form, subjective answers which are difficult to categorize and analyze. Therefore, standardized dream questionnaires ask study subjects "Have you ever dreamed of . . ." and provide research subjects with a list of defined dream themes such as being chased, flying or falling.
Dream researchers can also modify the questionnaires to include additional questions about the frequency or intensity of each dream theme and specify the time frame that the study subjects should take into account. For example, instead of asking "Have you ever dreamed of…", one can prompt subjects to focus on the dreams of the last month or the first memory of ever dreaming about a certain theme. Any such subjective assessment of one's dreams with a questionnaire has its pitfalls. We routinely forget most of our dreams, and we tend to remember the dreams that are either the most vivid or frequent, as well as the dreams which we may have discussed with friends or written down in a journal. The answers to dream questionnaires may therefore be a reflection of our dream memory and not necessarily the actual frequency or prevalence of certain dream themes. Furthermore, standardized dream questionnaires are ideal for research purposes but may not capture the complex and subjective nature of dreams. Despite these pitfalls, research studies using dream questionnaires provide a fascinating insight into the dream world of large groups of people and identify commonalities or differences in the thematic content of dreams across cultures.
The researcher Calvin Kai-Ching Yu from the Hong Kong Shue Yan University used a Chinese translation of a standardized dream questionnaire and surveyed 384 students at the University of Hong Kong (mostly psychology students; 69% female, 31% male; mean age 21). Here are the results:
Ten most prevalent dream themes in a sample of Chinese students according to Yu (2008):
- Schools, teachers, studying (95%)
- Being chased or pursued (92%)
- Falling (87%)
- Arriving too late, e.g., missing a train (81%)
- Failing an examination (79%)
- A person now alive as dead (75%)
- Trying again and again to do something (74%)
- Flying or soaring through the air (74%)
- Being frozen with fright (71%)
- Sexual experiences (70%)
The most prevalent theme was "Schools, teachers, studying". This means that 95% of the study subjects recalled having had dreams related to studying, school or teachers at some point in their lives, whereas only 70% of the subjects recalled dreams about sexual experiences. The subjects were also asked to rank the frequency of the dreams on a 5-point scale (0 = never, 1 = seldom, 2 = sometimes, 3 = frequently, 4 = very frequently). For the most part, the most prevalent dreams were also the most frequent ones. Not only did nearly every subject recall dreams about schools, teachers or studying, but this theme also received an average frequency score of 2.3, indicating that for most individuals this was a recurrent dream theme – not a big surprise in university students. On the other hand, even though the majority of subjects (57%) recalled dreams of "being smothered, unable to breathe", its average frequency rating was low (0.9), indicating that this was a rare (but probably rather memorable) dream.
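The distinction between prevalence (has a subject ever dreamed a theme) and mean frequency (the average 0–4 rating) can be illustrated with a toy calculation. The ratings below are invented for illustration and are not data from any of the studies discussed here:

```python
# Hypothetical responses for one dream theme on the 5-point scale
# (0 = never, 1 = seldom, 2 = sometimes, 3 = frequently, 4 = very frequently).
# These ten ratings are made up, purely to show how the two measures differ.
ratings = [0, 2, 3, 1, 4, 2, 0, 3, 2, 3]

# Prevalence: fraction of subjects who report the theme at all (rating > 0).
prevalence = sum(r > 0 for r in ratings) / len(ratings)

# Mean frequency: average rating across all subjects, including the zeros.
mean_frequency = sum(ratings) / len(ratings)

print(f"prevalence: {prevalence:.0%}, mean frequency: {mean_frequency:.1f}")
# → prevalence: 80%, mean frequency: 2.0
```

A theme can thus be highly prevalent yet low in mean frequency, as with the "being smothered" dreams in Yu's sample: most subjects had experienced it at least once, but rarely.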
How do the dreams of the Chinese students compare to their counterparts in other countries?
Michael Schredl and his colleagues used a similar questionnaire to study the dreams of German university students (nearly all psychology students; 85% female, 15% male; mean age 24) with the following results:
Ten most prevalent dream themes in a sample of German students according to Schredl and colleagues (2004):
- Schools, teachers, studying (89%)
- Being chased or pursued (89%)
- Sexual experiences (87%)
- Falling (74%)
- Arriving too late, e.g., missing a train (69%)
- A person now alive as dead (68%)
- Flying or soaring through the air (64%)
- Failing an examination (61%)
- Being on the verge of falling (57%)
- Being frozen with fright (56%)
There is a remarkable overlap in the top ten list of dream themes among Chinese and German students. Dreams about school and about being chased are the two most prevalent themes for Chinese and German students. One key difference is that dreams about sexual experiences are recalled more commonly among German students.
Tore Nielsen and his colleagues administered a dream questionnaire to students at three Canadian universities, thus obtaining data on an even larger study population (over 1,000 students).
Ten most prevalent dream themes in a sample of Canadian students according to Nielsen and colleagues (2003):
- Being chased or pursued (82%)
- Sexual experiences (77%)
- Falling (74%)
- Schools, teachers, studying (67%)
- Arriving too late, e.g., missing a train (60%)
- Being on the verge of falling (58%)
- Trying again and again to do something (54%)
- A person now alive as dead (54%)
- Flying or soaring through the air (48%)
- Vividly sensing . . . a presence in the room (48%)
It is interesting that dreams about school or studying were the most common theme among Chinese and German students but do not even make the top-three list among Canadian students. This finding is perhaps also mirrored in the result that dreams about failing exams are comparatively common in Chinese and German students, but are not found in the top-ten list among Canadian students.
At first glance, the dream content of German students seems to be a hybrid between that of Chinese and Canadian students. Chinese and German students share a higher prevalence of academia-related dreams, whereas sexual dreams are among the most prevalent dreams for both Canadians and Germans. However, I did notice an interesting anomaly. Chinese and Canadian students dream about "Trying again and again to do something" – a theme which is quite rare among German students. I have a simple explanation for this (possibly influenced by the fact that I am German): Germans get it right the first time, which is why they do not dream about repeatedly attempting the same task.
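The degree of overlap among the three published top-ten lists can be tallied directly. A minimal Python sketch, using my own shorthand labels for the questionnaire themes (the labels are not the studies' wording, but the list memberships follow the tables above):

```python
# Top-ten dream themes per sample, abbreviated from the published tables.
chinese = {"school", "chased", "falling", "late", "exam", "person dead",
           "trying again", "flying", "frozen", "sex"}
german = {"school", "chased", "sex", "falling", "late", "person dead",
          "flying", "exam", "verge of falling", "frozen"}
canadian = {"chased", "sex", "falling", "school", "late", "verge of falling",
            "trying again", "person dead", "flying", "presence"}

# Count shared themes for each pair of samples using set intersection.
pairs = {"Chinese/German": (chinese, german),
         "Chinese/Canadian": (chinese, canadian),
         "German/Canadian": (german, canadian)}

for label, (a, b) in pairs.items():
    print(f"{label}: {len(a & b)} of 10 themes shared")
```

All three pairings share at least eight of their ten themes, which supports the closing point that the lists differ more in ranking than in content.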
The strength of these three studies is that they used similar techniques to assess dream content and evaluated study subjects with very comparable backgrounds: Psychology students in their early twenties. This approach provides us with the unique opportunity to directly compare and contrast the dreams of people who were raised on three continents and immersed in distinct cultures and languages. However, this approach also comes with a major limitation. We cannot easily extrapolate these results to the general population. Dreams about studying and school may be common among students but they are probably rare among subjects who are currently holding a full-time job or are retired. University students are an easily accessible study population but they are not necessarily representative of the society they grow up in. Future studies which want to establish a more comprehensive cross-cultural comparison of dream content should probably attempt to enroll study subjects of varying ages, professions, educational and socio-economic backgrounds.
Despite this limitation, the currently available data on dream content comparisons across countries does suggest one important message: people all over the world have similar dreams.
Yu, Calvin Kai-Ching. "Typical Dreams Experienced by Chinese People." Dreaming 18.1 (2008): 1-10.
Nielsen, Tore A., et al. "The Typical Dreams of Canadian University Students." Dreaming 13.4 (2003): 211-235.
Schredl, Michael, et al. "Typical Dreams: Stability and Gender Differences." The Journal of Psychology 138.6 (2004): 485-494.