



Monday, April 23, 2018


The Psychology of Collective Memory

by Jalees Rehman

Do you still remember the first day of school when you started first grade? If you were fortunate enough (or, in some cases, unfortunate enough) to run into your classmates from way back when, you might sit down and exchange stories about that first day in school. There is a good chance that you and your former classmates differ in your narratives, especially regarding some details, but you are bound to also find many common memories. This phenomenon is an example of "collective memory", a term used to describe the shared memories of a group, which can be as small as a family or a class of students and as large as a nation. The collective memory of your first day in school refers to a time that you personally experienced, but the collective memory of a group can also include vicarious memories consisting of narratives that present-day group members may not have lived through. For example, the collective memory of a family could contain harrowing details of suffering experienced by ancestors who were persecuted and had to abandon their homes. These stories are then passed down from generation to generation and become part of a family's defining shared narrative. This especially holds true for larger groups such as nations. In Germany, the collective memory of the horrors of the Holocaust and the Third Reich has a profound impact on how Germans perceive themselves and their identity, even if they were born after 1945.

The German scholar Aleida Assmann is an expert on how collective and cultural memory influences society and recently wrote about the importance of collective memory in her essay "Transformation of the Modern Time Regime" (PDF):

All cultures depend upon an ability to bring their past into the present through acts of remembering and remembrancing in order to recover not only acquired experience and valuable knowledge, exemplary models and unsurpassable achievements, but also negative events and a sense of accountability. Without the past there can be no identity, no responsibility, no orientation. In its multiple applications cultural memory greatly enlarges the stock of the creative imagination of a society.

Assmann uses the German word Erinnerungskultur (culture of remembrance) to describe how the collective memory of a society is kept alive and what impact the act of remembrance has on our lives. The Erinnerungskultur differs widely among nations, and even within a given nation or society it may vary over time. It is quite possible that memories of the British Empire evoke nostalgia and romanticized images of a benevolent empire in older British citizens, whereas younger Brits may be more likely to focus on the atrocities committed by British troops against colonial subjects or the devastating famines in India under British rule.

Much of the research on collective memory has been rooted in the humanities. Historians and sociologists have studied how historical events enter into the collective memory and how the Erinnerungskultur then preserves and publicly interacts with it. More recently, however, cognitive scientists and psychologists have begun exploring the cognitive mechanisms that govern the formation of collective memory. The cognitive sciences have made substantial advances in researching individual memory – such as how we remember, mis-remember or forget events – but much less is known about how these processes apply to collective memory. The cognitive scientists William Hirst, Jeremy Yamashiro and Alin Coman recently reviewed the present psychological approaches to studying how collective memories are formed and retained, and they divided the research approaches into two broad categories: top-down research and bottom-up research.

Top-down research identifies historical or cultural memories that persist in a society and tries to understand the underlying principles. Why do some historical events become part of the collective memory whereas others do not? Why do some societies update their collective memories based on new data whereas others do not? Hirst and his colleagues cite a study of how people updated their beliefs after the media issued retractions and corrections following the 2003 Iraq war. The claim that Iraqi forces had executed coalition prisoners of war after they surrendered and the initial reports about the discovery of weapons of mass destruction were both retracted, but Americans were less likely to remember the retractions, whereas Germans were more likely to remember both the retractions and the corrected version of the information.

Bottom-up research of collective memory, on the other hand, focuses on how individuals perceive events and then communicate these to their peers so that they become part of a shared memory canon. Researchers using this approach focus on the transmission of memory from individuals to a larger group network and on how the transmission or communication between individuals is affected by the environment. In one fascinating study of autobiographical memory, researchers examined how individuals from various nations dated autobiographical events. Turks who had experienced the 1999 earthquake frequently referenced it, as did Bosnians who used the civil war to date personal events. However, Americans rarely referenced the September 11, 2001 attacks to date personal events. This suggests that even though some events such as the September 11 attacks had great historical and political significance, they may not have had as profound an impact on the individual lives of Americans as the civil war had on the lives of Bosnians.

Hirst and his colleagues point out that cognitive research of collective memory is still in its infancy but the questions raised at the interface of psychology, neuroscience, history and sociology are so fascinating that this area will likely blossom in the decades to come. The many research questions that will emerge in the near future will not only integrate cutting-edge cognitive research but will likely also address the important phenomenon of the increased flow of information – both by migration of individuals as well as by digital connectedness. This research could have a profound impact on how we define ourselves and what we learn from our past to shape our future.

Reference

Hirst, W., et al. (2018). "Collective Memory from a Psychological Perspective." Trends in Cognitive Sciences, 22(5): 438-451.

Posted by Jalees Rehman at 12:30 AM | Permalink | Comments (0)


Monday, March 26, 2018


The Science of Tomato Flavors

by Jalees Rehman

Don't judge a tomato by its appearance. You may salivate when thinking about the luscious large red tomatoes you just purchased in your grocery store, only to find out that they are extremely bland and lack flavor once you actually bite into them after preparing the salad you had been looking forward to all day. You are not alone. Many consumers complain about the growing blandness of fruits. Up until a few decades ago, it was rather challenging to understand the scientific basis of fruit flavors. Recent biochemical and molecular studies of fruits now provide a window into fruit flavors and allow us to understand the rise of blandness.

In a recent article, the scientists Harry Klee and Denise Tieman at the University of Florida summarize some of the most important recent research on the molecular biology of fruit flavors, with a special emphasis on tomatoes. Our perception of "flavor" relies primarily on two senses – taste and smell. Taste is perceived by taste receptors in our mouth, located primarily on the tongue, which discriminate between sweet, sour, salty, bitter and savory. The sensation of smell (also referred to as "olfaction"), on the other hand, has a much broader catalog of perceptions. There are at least 400 different olfactory receptors present in the olfactory epithelium – the cells in the nasal passages which perceive smells – and the combined activation of various receptors can allow humans to distinguish up to 1 trillion smells. These receptors are activated by so-called volatile organic compounds, or volatiles, a term which refers to organic molecules that vaporize in the mouth when we chew food and enter our nasal passages to activate the olfactory epithelium. The tremendous diversity of the olfactory receptors thus allows us to perceive a wide range of flavors. Anybody who eats food while having a cold and a stuffy nose will notice how bland food becomes, even though the taste receptors on the tongue remain fully functional.
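To get a feel for why a repertoire of roughly 400 receptor types can, in principle, support such an enormous number of distinguishable smells, here is a back-of-envelope sketch of combinatorial coding in Python. It is purely illustrative – it assumes a smell is defined simply by which subset of receptor types it activates, which is a simplification, and it is not the method behind the one-trillion estimate.

    from math import comb

    # Illustrative assumption: a smell is encoded by the subset of receptor
    # types it activates. Count how many subsets of size k exist among ~400
    # receptor types.
    receptor_types = 400
    for k in (5, 6, 10):
        print(k, comb(receptor_types, k))
    # k=5  -> roughly 8.3e10 possible activation patterns
    # k=6  -> roughly 5.5e12 (already past a trillion)
    # k=10 -> on the order of 1e19

Even under this crude counting, combinations of just six receptor types already outnumber the estimated one trillion distinguishable smells.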

When it comes to tomato flavors, research has shown that consumers clearly prefer "sweetness". One obvious determinant of sweetness is the presence of sugars such as glucose or fructose in tomatoes which are sensed by the taste receptors in the mouth. But it turns out that several volatiles are critical for the perception of "sweetness" even though they are not sugars but instead activate the smell receptors in the olfactory epithelium. 6-Methyl-5-hepten-2-one, 1-Nitro-2-phenylethane, Benzaldehyde and 2-Phenylethanol are examples of volatiles that enhance the positive flavor perceived by consumers, whereas volatiles such as Eugenol and Isobutyl acetate are perceived to contribute negatively towards flavor. Interestingly, the same volatiles can have no effect or even the opposite effect on flavor perception when present in other fruits. Therefore, it appears that for each fruit, the sweetness flavor is created by the basic taste receptors which sense sugar levels as well as a symphony of smell sensations activated by a unique pattern of volatiles. But just like instruments play defined yet interacting roles in an orchestra, the effect of volatiles on flavor depends on the presence of other volatiles.

This complexity of flavor perception explains why it is so difficult to define flavor. The story becomes even more complicated because individuals have different thresholds for olfactory receptor activation. Furthermore, even the volatiles linked with a positive flavor perception – either by enhancing flavor intensity or by letting the consumer sense a greater "sweetness" than is actually present based on sugar levels – may have varying effects when they reach higher levels. Thus, it is very difficult to breed the ideal tomato that will satisfy all consumers. But why is there this growing sense that fruits such as tomatoes are becoming blander? Have we simply not tried enough tomato cultivars? A cultivar is a plant variety that has been bred over time to create specific characteristics, and one could surmise that with hundreds or even thousands of tomato cultivars available, each of us might identify a distinct cultivar that we find most flavorful. The volatiles are generated by metabolic enzymes encoded by genes, and the differences in flavor between distinct cultivars are likely a reflection of differences in gene expression for the enzymes that regulate sugar metabolism or volatile generation.

The problem, according to Klee and Tieman, is that the customers of tomato breeders are tomato growers, not the consumers who garnish their salads or create tomato-based masalas. The goal of growers is to maximize shelf-life, appearance, disease-resistance, yield and uniformity, and breeders focus on genetically manipulating tomato strains to maximize these characteristics. The expression GMO (genetically modified organism) describes the use of modern genetic technology to modify individual genes in crops and often provokes a litany of attacks and criticisms from anti-GMO activists who fear the potential risks of such genetic interventions. However, the genetic breeding and manipulation of cultivars has been occurring for centuries or even millennia using traditional low-tech methods, and these do not seem to provoke much criticism from anti-GMO activists. Even though there is a theoretical risk that modern genetic engineering tools could pose a health risk, there is no scientific evidence that this is actually the case. Instead, one could argue that targeted genetic intervention may be more precise using modern technologies than the low-tech breeding manipulations that have led to the creation of numerous cultivars, many of which carry the "organic, non-GMO" label.

Klee and Tieman argue that consumers value flavor, variety and nutrition over the traditional goals of growers. The genetic and biochemical analysis of tomato cultivars now offers us a unique insight into the molecular components of flavor and nutrition. Scientists can now analyze each cultivar that has been generated over the past centuries through the low-tech genetic manipulation of selective breeding and inform consumers about its flavor footprint. Alternatively, one could use modern genetic tools such as genome editing to specifically modify flavor components while maintaining the disease-resistance and high nutritional value of crops such as tomatoes. The key to making informed, rational decisions is to provide consumers with comprehensive information, based on scientific evidence, about the nutritional value and flavor of fruits, as well as the actual risks of genetically modifying crops using traditional low-tech methods such as selective breeding and grafting or newer methods which involve genome editing.

Reference

Klee, H. J., & Tieman, D. M. (2018). The genetics of fruit flavour preferences. Nature Reviews Genetics (published online March 2018).

Posted by Jalees Rehman at 12:35 AM | Permalink | Comments (0)


Monday, March 19, 2018


Dreams of a technocrat

by Ashutosh Jogalekar

Technocrats have had a mixed record in guiding major policies of the United States government. Perhaps the most famous technocrat of the postwar years was Robert McNamara, the longest-serving secretary of defense, who worked for both John Kennedy and Lyndon Johnson. Before joining Kennedy’s cabinet, McNamara was the president of Ford Motor Company, the first person from outside the Ford family to occupy that position. Before coming to Ford, McNamara had done statistical analysis of the bombing campaign over Japan during the Second World War. Working under the famously ruthless General Curtis LeMay, McNamara worked out the most efficient ways to destroy the maximum amount of Japanese war infrastructure. On March 9, 1945, this kind of analysis contributed to the virtual destruction of Tokyo through bombing and the deaths of a hundred thousand civilians in a firestorm. While McNamara later expressed some regrets about the large-scale destruction of cities, he generally subscribed to LeMay’s philosophy, which was simple: once a war has started, you need to end it as soon as possible, and if this involves killing large numbers of civilians, so be it.

The Second World War was a transformational conflict in terms of applying the techniques of statistics and engineering to war problems. In many ways the war belonged to technocrats like McNamara and Vannevar Bush, who was one of the leaders of the Manhattan Project. The success that these technocrats achieved through inventions like radar, the atomic bomb and the development of the computer was self-evident, so it was not surprising that scientists became a highly sought-after voice in the corridors of power after the war. Some, like Richard Feynman, wanted nothing to do with weapons research after the war ended. Others, like Robert Oppenheimer, embraced this power. Unfortunately, Oppenheimer’s naiveté, combined with the paranoia generated by the beginnings of the Cold War, resulted in a disgraceful public hearing that stripped him of his security clearance.

After McNamara was appointed to the position by Kennedy, he began a tight restructuring of the defense forces by adopting the same kinds of statistical research techniques that he had used at Ford. Some of these techniques go by the name of operations research. McNamara’s policies led to cost reduction and consolidation of weapons systems. He brought a much more scientific approach to thinking about defense problems. One of his important successes was to change official US nuclear posture from the massive retaliation adopted by the Eisenhower administration to a strategy of more proportionate response adopted by the Kennedy administration. At this point in time McNamara was playing the role of the good technocrat. Then Kennedy was assassinated and the Vietnam War started. Lyndon Johnson put pressure on McNamara and his other advisors to expand American military presence in Vietnam.

To obey Johnson’s wishes, McNamara used the same techniques as he had before, but this time to increase the number of American troops and firepower in a remote country halfway around the world. Just like he had during the Second World War, he organized a series of bombing campaigns that laid waste not just to North Vietnamese military installations but to their dams and rice fields. Just like it had during the previous war, the bombing killed a large number of civilians without having a measurable impact on the morale or determination of Ho Chi Minh’s troops. The lessons of the Second World War should have told McNamara that bombing by itself couldn’t end a war. The man who had studied moral philosophy at Berkeley before he got ensnared by the trappings of power failed to realize that you cannot win over a nation through technology and military action. You can only do that by winning over the hearts and minds of its citizens and understanding their culture and history. Not just McNamara but most of Kennedy and Johnson’s other advisors also failed to understand this. They had reached the limits of technocratic problem solving.

William Perry seems to have avoided many of the problems that beset technocrats like McNamara. Perry was secretary of defense under Bill Clinton, and his memoir is titled “My Journey at the Nuclear Brink”. As the memoir makes clear, this journey is one the entire world shares. The book is essentially a brisk and personal ride through that journey, but there is little historical detail that puts some of the stories in context; for this, readers will have to look at some of the references cited at the back. Perry came from a bona fide technical background. After serving at the end of the war and seeing the destruction in Tokyo and Okinawa, he returned to college and obtained bachelor’s and graduate degrees in mathematics. He then took the then-unusual step of going to California, at a time when Silicon Valley did not exist and the transistor had just been invented. Perry joined an electronics company called Sylvania whose products started getting traction with the defense department. By this time the Cold War was in full swing, and the Eisenhower and Kennedy administrations wanted to harness the full potential of science and technology in the fight against communism. To provide advice to the government, Eisenhower set up the President’s Science Advisory Committee (PSAC), which included accomplished scientists like Hans Bethe and George Kistiakowsky, both of whom had held senior positions in the Manhattan Project.

One of the most important uses of technology was in the reconnaissance of enemy planes and missiles. Perry’s company developed some of the first sensors for detecting the radar signatures of Soviet ICBMs and their transmitters. He also contributed to some of the first communication satellites and played an important role in deciphering the images of medium-range nuclear missiles installed in Cuba during the Cuban Missile Crisis. Perry understood well the great contribution technology could make not just to offense but also to defense. He recognized early that electronic technology was moving from analog to digital with the invention of the integrated circuit, and he decided to start his own company to exploit its potential. His new company built sophisticated systems for detecting enemy weapons. It was successful and ultimately employed more than a thousand people, making Perry a wealthy man. It was while heading this company that Perry was invited to serve in the administration of Jimmy Carter as undersecretary of defense for research and engineering. He had to make a significant personal financial sacrifice in divesting himself of the shares of his and other companies in order to be eligible for government service.

Perry’s background was ideal for this position, and it was in this capacity that he made what I think was his greatest contribution. At this point in history, the Soviet Union had achieved nuclear parity with the United States. It achieved this parity by building missiles called MIRVs, which could house multiple nuclear warheads on one missile and target them independently against multiple cities. The introduction of MIRVs was not banned by the ABM treaty which Nixon had signed in the early 70s. Because of MIRVs, the Soviets could now field many more nuclear weapons than they could before. The US already possessed tens of thousands of nuclear weapons, most of them on hair-trigger alert. Perry wisely recognized that the response to the Soviet buildup was not a blind increase in the US nuclear arsenal but rather a buildup of conventional forces. Over the next few years Perry oversaw the development of some of the most important conventional weapons systems in the armamentarium. This included the Blackbird stealth fighter, which had a very small radar signature, as well as smart sensors and smart bombs which could target enemy installations with pinpoint accuracy. These weapons proved very useful in the first Iraq War, fought two decades later. Perry’s contribution here endures: the strength of the US military’s conventional weapons is vast, and this fact remains one of the best arguments for drastically reducing America’s nuclear weapons.

When Ronald Reagan became president he adopted a much tougher stance against the Soviets. His famous ‘Evil Empire’ speech cast the Soviet Union in a fundamentally irreconcilable light, while his ‘Star Wars’ speech promised the American people a system of ballistic missile defense against Soviet ICBMs. Both these announcements were deeply flawed – the first politically, the second technically. On the political side, the Soviets would only construe Reagan’s stand as an excuse to build more offensive weapons. On the technical side, it had been shown comprehensively that any defense system could be cheaply overwhelmed using decoys and countermeasures, and that it would take only a fraction of the launched missiles getting through to cause terrible destruction. Standing on the outside, Perry could not do much, but because of his years of experience both in weapons development and in talking to leaders and scientists from other countries, he initiated what he called ‘Track 2 diplomacy’, that is, diplomacy outside official channels. He established good relationships with Soviet and Chinese generals and politicians and made many trips to these two and other nations. Like others before and after him, Perry understood that some of the most important geopolitical problem solving happens at the personal level. This fact was driven home especially when Perry spent much of his time as secretary of defense advocating for better living conditions for American troops.

In his second term Reagan completely reversed his stand and sought reconciliation with the Soviets. This change was driven partly by his own thinking about the catastrophic consequences of nuclear war and largely by the ascendancy of Mikhail Gorbachev. As Freeman Dyson has pointed out, it is worth noting that the largest arms reductions in history were carried out by supposedly hawkish right-wing Republicans. Reagan, George H. W. Bush and Gorbachev dismantled an entire class of nuclear weapons. Before that, Republican president Richard Nixon had unilaterally gotten rid of chemical and biological weapons. Republican presidents can do this when Democratic presidents cannot, because they cannot easily be accused of being doves by their own party. I believe that even in the future it is Republicans rather than Democrats who stand the best chance of getting rid of nuclear weapons. And because people like William Perry have strengthened the conventional military forces of the US so well, the country can now afford to forgo nuclear weapons for deterrence.

When Bill Clinton became president Perry again stepped into the limelight. The Soviet Union was collapsing and it suddenly presented a problem of very serious magnitude. The former Soviet republics of Ukraine, Belarus and Kazakhstan suddenly found themselves with thousands of nuclear weapons without centralized Soviet authority. Many of these weapons were unsecured and loose, and rogue terrorists or states could have easily obtained access to them. Two American senators from opposing parties, Sam Nunn and Richard Lugar, proposed a plan through which the US could help the Soviets dismantle their weapons and buy the nuclear material from them. Nunn and Lugar worked with Perry and weapons expert Ash Carter to secure this material from thousands of warheads, blending it down from weapons-grade to reactor-grade. In return the US destroyed several of its own missile silos and weapons. In one of the most poignant facts of history, a sizable fraction of US electricity today comes from uranium and plutonium from Russian nuclear bombs which had been targeted on New York, Washington DC and San Francisco. The Nunn-Lugar program of denuclearizing Russia is one of the greatest and most important bipartisan triumphs in American history. It has undoubtedly made the world a safer place, and Nunn and Lugar perhaps along with Perry and his Russian counterparts surely deserve a Nobel Peace Prize for their efforts.

When Perry became secretary of defense under Clinton, much of his time was occupied with North Korea, an issue that continues to confront the world today. North Korea has technically been at war with the United States and South Korea since the 1950s, because the Korean War ended only in a truce. In the 90s the North Koreans announced that they would start reprocessing plutonium from their nuclear reactors, the first step toward quickly building a plutonium bomb. Both South Korea and the US had serious concerns about this. Perry engaged in a series of diplomatic talks, some involving former president Jimmy Carter, at the end of which the North Koreans decided to forgo reprocessing in return for fuel to help their impoverished country. Perry’s account of North Korea contains amusing facts, such as the New York Philharmonic organizing a concert in Pyongyang and Perry entertaining a top North Korean general in Silicon Valley. Today the problem of North Korea seems serious, but it is worth remembering that someone like Kim Jong Un, who relishes such total control over his people, would be reluctant to lose that control by initiating a nuclear war in which his country would be completely destroyed.

The greatest problem, however, was Russia, and today many of Perry’s thoughts and actions about Russia from the nineties sound prescient. After the Cold War ended, US-Russia relations were for a time at an all-time high. The main bone of contention was NATO. Many former Soviet-controlled countries like Poland and Ukraine wanted to join NATO to enjoy the same security that other NATO members had. Perry was in favor of letting these countries join, but he wisely understood that too rapid an assimilation of too many nations into NATO would make Russia uneasy and lead it to start seeing the US as a threat again. He proposed inviting these nations to join NATO along a more leisurely timeline. Against his opinion, Clinton provided immediate support for NATO membership for these countries. A few years later, after George W. Bush became president, and partly because of US actions and partly because of Russia’s, Perry’s fears turned out to be true. The US withdrew from the ABM treaty because it wanted to place ballistic missile defense in Eastern Europe, ostensibly against Iranian ICBMs. Notwithstanding the technical flaws still inherent in missile defense, the Russians unsurprisingly questioned why the US needed this defense against a country which was still years away from building ICBMs, and construed it as a bulwark against Russia. The Russians therefore started working on their own missile defense, on new MIRVed missiles, and on new tactical nuclear weapons. Unlike high-yield strategic weapons which can wipe out cities, low-yield tactical weapons ironically increase the probability of nuclear war since they can be used locally on battlefields. When Obama became president of the United States and Medvedev became president of Russia, there was a small window of hope for a reduction of nuclear weapons on both sides, but the elections of Putin and Trump have dimmed the chances of reaching an agreement in the near future. North Korea, meanwhile, has also gone nuclear, conducting its first nuclear test in 2006.

Perry’s greatest concern throughout his career has been to reduce the risk of nuclear war. He thinks that nuclear war is quite low on the list of public concerns, and this is a strange fact indeed. Even a small nuclear bomb used in a major city would lead to hundreds of thousands of deaths and severe social and economic disruption. It would be a catastrophe unlike any we have faced until now and would make 9/11 look like child’s play. With so many countries having nuclear weapons, even the small risk of a rogue terrorist stealing a weapon is greatly amplified by the horrific consequences. If nuclear weapons are such a serious problem, why are they largely absent from the public consciousness?

It seems that nuclear weapons don’t enter the public consciousness because of a confluence of factors. Firstly, most of us take deterrence for granted. We think that as long as most countries have nuclear weapons, mutually assured destruction and rationality will keep us safe. But this is little more than a false sense of security; mutually assured destruction is not a rational strategy, it is simply an unfortunate reality that emerged from our collective actions. We are very lucky that no nuclear attack has taken place since Nagasaki, but there have been scores of nuclear accidents that almost led to bombs being exploded, some near American cities. The book “Command and Control” by Eric Schlosser describes dozens of such frightening accidents. Just a few years ago there was an incident in which American military planes flew from North Dakota to Louisiana without the crews realizing that there were nuclear bombs onboard. In addition, during events like the Cuban Missile Crisis the world came very close to nuclear war, and a slight misunderstanding could have triggered a nuclear launch: in fact it is now widely acknowledged that dumb luck played as big a role as any rational action in keeping the crisis from escalating. There are also false alarms, one of which Perry recollects: the accidental playing of a training exercise tape led a general to the erroneous conclusion that two hundred nuclear-tipped missiles were heading from the Soviet Union toward the US. Fortunately it was discovered within seconds that this was a false alarm, but if it had not been, protocol dictated that American ICBMs be launched against Russia within minutes, and the Russians would have retaliated massively. The problem with nuclear weapons is that the window of prevention is very small, and therefore accidents are quite likely. The reason the American public does not fear nuclear weapons as much as it should is that it sees that the red line has never been crossed and believes that it never will be, but it does not see how close we have already come to crossing it.

Secondly, the media is much more concerned with reporting on the latest political or celebrity scandal, or on important but much less precipitous problems like climate change, than on nuclear weapons. Of the two major problems confronting humanity – nuclear war and climate change – I believe nuclear war is the more urgent. The impacts of climate change are mixed, longer term and more unpredictable; the impacts of nuclear war are unambiguously bad, immediate and more predictable. Unfortunately climate change has become an obsession for both the media and the public in spite of its uncertainties, whereas the certain consequences of a nuclear attack have been ignored by both. The supposed dangers of climate change have been widely publicized by self-proclaimed prophets like Al Gore, but there are no such prophets publicizing the dangers of nuclear weapons. For one reason or another, both the public and the media consider nuclear weapons a low priority because no nuclear catastrophe has occurred during the last fifty years, while ignoring the very high costs of even a low-probability attack. If nuclear weapons received the kind of massive publicity that global warming has received, there is no doubt that they too would loom large in everyone’s mind.

Changing attitudes is hard, although Perry has certainly tried. Nuclear weapons were born of science, but their solution is not technical. With his colleagues Sam Nunn, George Shultz, Henry Kissinger and Sidney Drell, Perry started an initiative whose goal is the reduction of nuclear weapons through both official and unofficial diplomacy. All of these men have had deep experience with both nuclear weapons and diplomacy. Encouraging economic and trade relationships between traditional rivals like India and Pakistan, for instance, would be a key strategy in reducing the risk of nuclear conflict between such nations: one reason why an actual war between the US and China is highly unlikely is that both countries depend heavily on each other for economic benefits. The key objective in caging the nuclear genie is to remind nations of their common security and of the fact that individual lives are precious on all sides. During the Cold War, it was only when the US and the Soviet Union recognized that even a “win” for one country in a nuclear war would involve large-scale destruction of both countries that they finally realized how important it was to cooperate.

Finally, Perry has made it his life’s goal to educate young people about these dangers, both through his classes at Stanford University as well as through his website. The future is in these young people’s hands, and as much of the world including Russia seems to be reverting to the old ways of thinking, it’s young people whose minds are unspoiled by preconceived notions who give us our best chance of ridding the world of the nuclear menace.

Posted by Ashutosh Jogalekar at 01:40 AM | Permalink | Comments (0)


Monday, February 26, 2018


Scheming Like A State

by Misha Lepetic

"A grandes problemas, 
¡grandes soluciones!
"
 ~ Nicolás Maduro

 

One area where proponents of technology customarily get trounced concerns the consideration of unintended consequences. (This is also regrettably true for most commentators.) It's not that people don't take them into account, but rather that when they do, those consequences are extrapolated out to such hyperbolic extremes as to make the scenarios essentially useless. It's much more appealing and click-friendly to sound the alarm that artificial intelligence will turn the entire planet into an ocean of paper clips than it is to think deeply about how AI already influences decision-making within our existing social systems. By the same token, it's easier to be terrified of sudden, wholesale unemployment wrought by automation, when, as I noted recently, the far likelier outcome is that we will coexist with technology for a good while yet, with automation eroding work in a gradual, almost invisible fashion. And this is notwithstanding the fact that there is plenty of room for a well-informed skepticism that questions whether technological unemployment is happening in any appreciable way at all.

Similarly, when proponents of bitcoin, blockchain and distributed technologies advocate for a wave of technologically-driven decentralization, it's rarely described in terms less than messianic. Reckoning with unintended consequences seems to have gotten only as far as admitting an overenthusiastic consumption of electricity. Now, one of the principal targets of this revolution is the perceived tyranny of the nation-state itself. Janina Lowisz, the (alleged) first holder of an ID written into the blockchain, said in an interview with Vice:

The technology allows for a lot of new possibilities for replacing what the state provides—like, one option would be to offer government services in packages so people can pay for whatever services they're going to use. That's how government should work: Instead of paying taxes that get wasted on things you don't even want, this way you can have a free choice and see exactly where your money is going.

I'll graciously pass over the heart-stopping naïveté of this sentiment, but it's worth noting that this model of 'government as a cable TV package subscription' has deep roots. It is a direct ideological descendant of the ur-text of techno-libertarianism, John Perry Barlow's A Declaration of the Independence of Cyberspace, the charismatic and hopelessly romantic 1996 manifesto that Barlow delivered in Davos, of all places. For the purposes of this essay, I have pulled out the following bits:

Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours...You have not engaged in our great and gathering conversation, nor did you create the wealth of our marketplaces...We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.

Blockchain technologies pretty much take dead aim at fulfilling this mandate. Bitcoin's fair face, and the 1,518 ships that it has thus far launched, revel in the notion of decentralized authority. Political vicissitudes cannot devalue these aspiring currencies, and, in their purest forms, transacting in them means immunity from censorship or any other regulatory or even geopolitical restrictions. Except, no one has really thought to ask the nation-states what they think about this.

*

Unsurprisingly, regulatory forces have begun massing at the frontiers of Cryptolandia. During 2016 and especially 2017, the blockchain sector was seized by the fever-dream known as Initial Coin Offerings (ICOs). Functioning as a nudge-nudge-wink-wink imitation of the more staid Initial Public Offering, an ICO involves a startup issuing 'tokens' that award participation in that venture's activities. You can think of it as a form of scrip: just as passengers receive frequent flier miles that then allow them to purchase discounted tickets for that airline (but not, for example, towards groceries), tokens quantify the possibility of transacting within the network of that particular blockchain. Such activities may involve governance (voting with your tokens), trading services (the number of tokens I have may advertise the amount of hard drive space I can rent out), or pretty much anything else you can think of. This may or may not turn out to be fantastically useful, but in 2016 and 2017 the primary driver of interest in ICOs was rampant speculation, and there was plenty of it.

Following the catastrophic 2016 hack of the DAO, a virtual investment organization whose decisions were meant to be driven entirely by blockchain technology, the US government's Securities and Exchange Commission decided it was time to throw down some law. According to the SEC, any token issued by a blockchain venture must be subjected to the Howey Test, to determine whether the token is in fact a security. Contrary to popular belief, the Howey Test exemplifies the axiom that brevity is the soul of regulation (see also the original text of the Glass-Steagall Act). Thus, an instrument is a security if:

  1. It is an investment of money;
  2. There is an expectation of profits from the investment;
  3. The investment of money is in a common enterprise; and
  4. Any profit comes from the efforts of a promoter or third party

That's pretty much it. The genius of the Howey Test is that it's concerned with what an instrument does, and not how it describes itself. It also means that most tokens will fall under this rubric. This is, all in all, a very good thing, as there has been quite a lot of snake-oil selling going on, and punters have been regularly fleeced, BitConnect being a recent and typical example. Thus, an ICO whose token is deemed a security must be registered as such with the SEC, or be granted an exemption; otherwise it is not available to US investors. On the other hand, it sets up an immediate opportunity for regulatory arbitrage, where other nation-states can seek to take advantage of the SEC's rectitude, and allow ICOs to proceed in their jurisdictions with little or no oversight. Thus begins another grand game of capital versus regulation, but very much along the lines of all the other grand games that have preceded it.
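For readers who like to see how mechanically the four prongs combine, here is a minimal sketch in Python. The field names and the example are hypothetical illustrations of the test's structure, not an encoding of any actual SEC guidance.

    from dataclasses import dataclass

    @dataclass
    class Instrument:
        # The four Howey prongs, answered for a given offering (hypothetical fields)
        investment_of_money: bool          # money (or other assets) is put in
        expectation_of_profits: bool       # buyers expect gains
        common_enterprise: bool            # funds are pooled in a shared venture
        profits_from_others_efforts: bool  # gains depend on a promoter or third party

    def is_security(i: Instrument) -> bool:
        """An instrument counts as a security only if all four prongs are satisfied."""
        return (i.investment_of_money
                and i.expectation_of_profits
                and i.common_enterprise
                and i.profits_from_others_efforts)

    # A typical speculative ICO token ticks every box:
    print(is_security(Instrument(True, True, True, True)))  # True

The point of the sketch is simply that the test looks at what the instrument does; nothing in it asks what the token calls itself.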

*

Of course, this is not the entire story. The code that underpins the blockchain is the result of a massively collective effort, which seems somewhat at odds with the libertarian ethos. Nevertheless, this open availability is a major factor in its success so far, at least from the point of view of generating massive network effects with breathtaking speed. The notion that anyone can improve or stress-test code and contribute to its security and speed is key to ensuring the integrity and ongoing innovativeness of the entire ecosystem. At the same time, this radical openness has led to instances where entire platforms such as Ethereum have been cloned (or 'forked') by banks for use in private blockchains. It's important to note that this doesn't contravene any of the principles set out by purists, but it is perhaps less salutary than they might prefer. Still, as Mao taught us, we ought to let a hundred flowers bloom, if only for a short while.

More interestingly, state actors themselves have taken notice. If any two-bit hack with a white paper can raise millions of dollars in the blink of an eye, why shouldn't they? Unsurprisingly, it is the states that find themselves at the margins of the global capital community that have found the notion of cryptocurrency most irresistible. Of that cohort, Venezuela is first out of the gate, floating the petro, a cryptocurrency that is putatively tied to its oil reserves. Immediately following the start of the pre-sale on Tuesday, Venezuelan President Nicolás Maduro triumphantly tweeted that the state had raised "4.77 billion yuan, or 735 million dollars" (due to sanctions, Venezuela denominates its oil in yuan, but it's still a nice little dig, putting the dollar in second place like that).

Still, a little digging into this sale raises some uncomfortable questions. One of the characteristics of nearly all blockchains is that transactions are visible. You may not know the names of the parties, but you will see that X tokens were bought with Y currency and went from account A to account B. It seems that there hasn't been much movement in this particular chain. In fact, according to this article, there hasn't been any at all, and all the tokens are still sitting in the original pre-sale account. That's like saying you sold out your entire inventory while it's still stacked up right behind the counter. An ambiguous purchasing process – as detailed in this thread – has also contributed to a rotten smell, but perhaps the most damning detail is about the asset itself.
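Because the ledger is public, anyone can in principle run this check themselves. Here is a minimal sketch, with entirely invented addresses and amounts, of what "no movement out of the pre-sale account" means in terms of the visible transaction records:

    # Hypothetical ledger entries: each records an amount moving between addresses.
    ledger = [
        {"from": "issuer", "to": "presale_account", "amount": 100_000_000},  # made-up initial allocation
        # ...no further entries involving the pre-sale account
    ]

    def outbound_transfers(ledger, address):
        """Return all transactions that move tokens out of the given address."""
        return [tx for tx in ledger if tx["from"] == address]

    print(outbound_transfers(ledger, "presale_account"))  # [] -> the tokens have not moved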

As noted above, the core idea behind the sale was to tie each token to a barrel of Venezuelan oil, but this is not a futures contract. One does not stroll up to an exchange and redeem a token for said commodity. Rather, the token can be redeemed for the value of a barrel, and furthermore that value is in bolivares, a currency so whipped by hyperinflation that 18 months ago it passed the in-game gold of World of Warcraft on its way down and hasn't stopped cratering since. So it shouldn't come as a surprise to Venezuelans that if they only have bolivares, they can't buy petros. 

This is perhaps the first case, but certainly not the last, of a nation-state using crypto in a desperate attempt to raise capital in the face of robust international sanctions; indeed, a fully subscribed sale would only raise about 10% of what Venezuela owes its creditors. Thus Maduro has also announced that the government would soon be launching another cryptocurrency: this one will be gold-backed, although he didn't specify whose gold would be doing the backing. The pattern of state-sponsored chicanery is only reinforced by news that Venezuela is forcing local bitcoin miners to register with the state, and extorting and/or arresting those citizens who attempt to run mining rigs on the sly. In a country where the currency is worthless, mining bitcoin is for some people the only way to feed their families. For some libertarians this is perhaps proof positive that the state is in its death throes, but it's also clear that it's sure not going out without a fight.

* 

As tragic as it may be for its citizenry, Venezuela's incompetence and/or cynicism would be easier to dismiss if it weren't for another detail that points to a much larger panorama. Exchanges are another feature of the blockchain landscape; after all, once you manage to buy tokens, you need to be able to trade them for other tokens or even (gasp) fiat currency. So far, hundreds of exchanges have sprouted up all around the world. This has led to a fragmented landscape, with a dearth of liquidity and all sorts of hacks and other malarkey as accounts get compromised. In the case of the petro, rumor has it that exchange services will be provided by Zeus Exchange. Who are these guys?

Russian startup Zeus Exchange, has registered in Singapore and become licensed in Cyprus to trade shares using the smart asset blockchain—the world's first—developed by Singapore-based foundation, NEM. The not-for-profit foundation is developing its systems in China. The framework that Zeus Exchange will use could be particularly successful in China due to it allowing access to exchanges all around the world via Cyprus. Plus anonymously, and at small scale. [sic]

So: a dodgy Venezuelan cryptocurrency is set to trade on a Russian-run exchange licensed in Cyprus, targeting the retail Chinese market. The hubs that we are accustomed to hearing about when we talk about global capital flows – New York, London, Frankfurt – are nowhere in sight, and the regulatory hammer has yet to drop in any substantial way. The implication is that cryptocurrency and its underlying technology are on the verge of enabling a global shadow financial system.

There are two reasons why this is a big deal. Consider that nation-based capital controls have been largely removed from the global economy since the 1990s. Much as climate change has led to more frequent and more intense hurricanes, the removal of these capital controls has been accompanied by an increase in the frequency and intensity of financial crises. Would crypto – traded 24 hours a day, across a mosaic of incompatible regulatory regimes – do anything but catalyze even faster capital flight from crisis situations, or capital influx into speculative bubbles?

Secondly, this article in Foreign Policy asks us to consider the implications of blockchain on sanctions as a tool for foreign policy: 

Since Sept. 11, 2001, the United States has relied heavily on financial sanctions to rein in bad actors. Whether targeting terrorists, Iran's and North Korea's quest for nuclear weapons, Russia's annexation of the Crimean peninsula, or Venezuela's rights abuses, financial sanctions are often the first resort for U.S. policymakers. And while other actors often help reinforce U.S. financial sanctions — the European Union leaned on Iran, and China is pressuring North Korea — the fact that New York and the U.S. dollar sit at the epicenter of global finance gives the United States outsized leverage.

Even if cryptocurrencies like bitcoin don't replace dollars, the 'trustless' aspect of blockchain technology allows actors who do not know or trust one another to still engage in transactions with each other. It is one of the keys that makes a truly decentralized platform viable. At the same time, obviating a trusted third party removes the choke point that makes much enforcement of regulation possible. So it's not difficult to envision the nations mentioned above using this to cobble together a system of payments that would be not just independent of the current regime (known as SWIFT) but also essentially inaccessible to US and European financial authorities.

Within the context of radical decentralization, money laundering, the financing of terrorism, and the like become that much more difficult to track and stop. Already, it's quite likely that North Korea has been mining bitcoin in an attempt to bolster its access to hard currency. And just last week Iran announced that its central bank was exploring the possibility of launching a 'cloud-based cryptocurrency'. Looking back on the interview with Janina Lowisz, when she says "I really can't think of a good government anywhere," the only response I can come up with is, "Yeah, and?"

Posted by Misha Lepetic at 12:30 AM | Permalink | Comments (0)

“Hype” or Uncertainty: The Reporting of Initial Scientific Findings in Newspapers

by Jalees Rehman

One of the cornerstones of scientific research is the reproducibility of findings. Novel scientific observations need to be validated by subsequent studies in order to be considered robust. This has proven to be somewhat of a challenge for many biomedical research areas, including high-impact studies in cancer research and stem cell research. The fact that an initial scientific finding of a research group cannot be confirmed by other researchers does not mean that the initial finding was wrong or that there was any foul play involved. The most likely explanation in biomedical research is that there is tremendous biological variability. Human subjects and patients examined in one research study may differ substantially from those in follow-up studies. Biological cell lines and tools used in basic science studies can vary widely, depending on details such as the medium in which cells are kept in a culture dish. The variability in findings is not a weakness of biomedical research; in fact, it is a testimony to the complexity of biological systems. Therefore, initial findings always need to be treated with caution and presented with their inherent uncertainty. Once subsequent studies – often with larger sample sizes – confirm the initial observations, they are viewed as more robust and gradually become accepted by the wider scientific community.

Even though most scientists become aware of the scientific uncertainty associated with an initial observation as their career progresses, non-scientists may be puzzled by shifting scientific narratives. People often complain that "scientists cannot make up their minds" – citing examples of newspaper reports such as those which state drinking coffee may be harmful only to be subsequently contradicted by reports which laud the beneficial health effects of coffee drinking. Accurately communicating scientific findings as well as the inherent uncertainty of such initial findings is a hallmark of critical science journalism.

A group of researchers led by Dr. Estelle Dumas-Mallet at the University of Bordeaux recently studied the extent of uncertainty communicated to the public by newspapers reporting initial medical research findings, in their paper "Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings". Dumas-Mallet and her colleagues examined 426 English-language newspaper articles published between 1988 and 2009 which described 40 initial biomedical research studies. They focused on scientific studies in which a risk factor such as smoking or old age had been newly associated with a disease such as schizophrenia, autism, Alzheimer's disease or breast cancer (12 diseases in total). The researchers only included scientific studies which had subsequently been re-evaluated by follow-up research, and they found that less than one third of the initial studies had been confirmed by subsequent research. Dumas-Mallet and her colleagues were therefore interested in whether the newspaper articles, which were published shortly after the release of the initial research paper, adequately conveyed the uncertainty surrounding the initial findings and thus adequately prepared their readers for subsequent research that might confirm or invalidate the initial work.

The University of Bordeaux researchers specifically examined whether the headlines of the newspaper articles were "hyped" or "factual", whether the articles mentioned that this was an initial study, and whether they clearly indicated the need for replication or validation by subsequent studies. Roughly 35% of the headlines were "hyped". One example of a "hyped" headline was "Magic key to breast cancer fight" instead of a more factual headline such as "Scientists pinpoint genes that raise your breast cancer risk". Dumas-Mallet and her colleagues found that even though 57% of the newspaper articles mentioned that these medical research studies were initial findings, only 21% of newspaper articles included explicit "replication statements" such as "Tests on larger populations of adults must be performed" or "More work is needed to confirm the findings".

The researchers next examined the key characteristics of the newspaper articles which were more likely to convey the uncertainty or preliminary nature of the initial scientific findings. Newspaper articles with "hyped" headlines were less likely to mention the need for replicating and validating the results in subsequent studies. On the other hand, newspaper articles which included a direct quote from one of the research study authors were three times more likely to include a replication statement. In fact, approximately half of all the replication statements mentioned in the newspaper articles were found in author quotes, suggesting that many scientists who conducted the research readily emphasize the preliminary nature of their work. Another interesting finding was a gradual shift over time in conveying scientific uncertainty. "Hyped" headlines were rare before 2000 (only 15%) and became more frequent during the 2000s (43%). On the other hand, replication statements were more common before 2000 (35%) than after 2000 (16%). This suggests that there was a trend towards conveying less uncertainty after 2000, which is surprising because debate about scientific replicability in the biomedical research community seems to have become much more widespread in the past decade.

As with all scientific studies, we need to be aware of the limitations of the analysis performed by Dumas-Mallet and her colleagues. They focused on a very narrow area of biomedical research – newly identified risk factors for selected diseases. It remains to be seen whether other areas of biomedical research, such as treatment of diseases or basic science discoveries of new molecular pathways, are also reported with "hyped" headlines and without replication statements. In other words, this research on "replication statements" in newspaper articles also needs to be replicated. It is also not clear whether the worrisome trend of over-selling the robustness of initial research findings after the year 2000 still persists, since Dumas-Mallet and colleagues did not analyze studies published after 2009. One would hope that the recent discussions among scientists about replicability issues would reverse this trend. Even though the findings of the University of Bordeaux researchers need to be replicated by others, science journalists and readers of newspapers can glean some important information from this study: one needs to be wary of "hyped" headlines, and it can be very useful to interview the authors of scientific studies when reporting about new research, especially asking them about the limitations of their work. "Hyped" newspaper headlines and an exaggerated sense of certainty in initial scientific findings may erode the long-term trust of the public in scientific research, especially if subsequent studies fail to replicate the initial results. Critical and comprehensive reporting of biomedical research studies – including their limitations and uncertainty – by science journalists is therefore a very important service to society, one which contributes to science literacy and science-based decision making.

Reference

Dumas-Mallet, E., Smith, A., Boraud, T., & Gonon, F. (2018). Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings. Science Communication, 40(1), 124-141.

Posted by Jalees Rehman at 12:20 AM | Permalink | Comments (0)


Monday, February 19, 2018


Bridging the gaps: Einstein on education

by Ashutosh Jogalekar

The crossing of disciplinary boundaries in science has brought with it a peculiar and ironic contradiction. On one hand, fields like computational biology, medical informatics and nuclear astrophysics have encouraged cross-pollination between disciplines and required the biologist to learn programming, the computer scientist to learn biology and the doctor to know statistics. On the other hand, increasing specialization has actually shored up the silos between these territories because each territory has become so dense with its own facts and ideas.

We are now supposed to be generalists, but we are generalists only in a collective sense. In an organization like a biotechnology company for instance, while the organization itself chugs along on the track of interdisciplinary understanding across departments like chemistry, biophysics and clinical investigations, the effort required for understanding all the nuts and bolts of each discipline has meant that individual scientists now have neither the time nor the inclination to actually drill down into whatever their colleagues are doing. They appreciate the importance of various fields of inquiry, but only as reservoirs into which they pipe their results, which then get piped into other reservoirs. In a metaphor evoked in a different context - the collective alienation that technology has brought upon us - by the philosopher Sherry Turkle, we are ‘alone together’.

The need to bridge disciplinary boundaries without getting tangled in the web of your own specialization has raised new challenges for education. How do we train the men and women who will stake out new frontiers tomorrow in the study of the brain, the early universe, gender studies or artificial intelligence? As old-fashioned as it sounds, to me the solution seems to go back to the age-old tradition of a classical liberal education, which places more emphasis on general thinking and skills than on the mere acquisition of diverse specialized knowledge and techniques. In my ideal scenario, this education would emphasize a good grounding in mathematics, philosophy (including philosophy of science), basic computational thinking, statistics and literature as primary goals, with an appreciation of the rudiments of evolution and psychology or neuroscience as preferred secondary goals.

This kind of thinking was on my mind as I happened to read a piece on education and training written by a man who was generally known to have thought-provoking ideas on a variety of subjects. If there was one distinguishing characteristic in Albert Einstein, it was the quality of rebellion.

In his early days Einstein rebelled against the rigid education and rules of the German Gymnasium system. In his young and middle years he rebelled against the traditional scientific wisdom of the day, leading to his revolutionary contributions to relativity and quantum theory. In his old age he rebelled against both an increasingly jingoistic world as well as against the mainstream scientific establishment.

Not surprisingly, then, Einstein had some original and bold thoughts on what an education should be like. He held forth on some of these in an address on October 15, 1931 delivered at the State University of New York at Albany. 1931 was a good year to discuss these issues. The US stock market had crashed two years before, leading to the Great Depression and mass unemployment. And while Hitler had not become chancellor and dictator yet, he would do so only two years later; the rise of fascism in Europe was already evident.

Some of these issues must have been on Einstein's mind as he first emphasized what he had already learnt from his own bitter Gymnasium experience: the erosion of individuality in the face of a system of mass education, much as individuality was then being eroded in the face of authoritarian ideas.

“Sometimes one sees in the school simply the instrument for transferring a certain maximum quantity of knowledge to the growing generation. But that’s not right. Knowledge is dead; the school, however, serves the living. It should develop in the young individuals those qualities and capabilities which are of value for the welfare of the commonwealth. But that does not mean that individuality should be destroyed and the individual becomes a mere tool of the community, like a bee or an ant. For a community of standardized individuals without personal originality and personal aims would be a poor community without possibilities for development. On the contrary, the aim must be the training of independently thinking and acting individuals, who, however, see in the service of the community their highest life problem…To me the worst thing seems to be for a school principally to work with methods of fear, force, and artificial authority. Such treatment destroys the sound sentiments, the sincerity, and the self-confidence of the pupil. It produces the submissive subject. It is not so hard to keep the school free from the worst of all evils. Give into the power of the teacher the fewest possible coercive measures, so that the only source of the pupil’s respect for the teacher is the human and intellectual qualities of the latter.”

Einstein also talks about what we can learn from Darwin's theory. In 1931 eugenics was still quite popular, and Darwin's ideas were seen even by many social progressives as essentially advocating the ruthless culling of ‘inferior' individuals and the perpetuation of superior ones. Where Einstein came from, this kind of thinking was on flagrant display right on the doorstep, even if it hadn't yet morphed into the unspeakable horror it would become a decade later. Einstein clearly rejects this warlike philosophy and encourages cooperation over competition. Both cooperation and competition are important for human progress, but the times clearly demanded that one not forget the former.

“Darwin’s theory of the struggle for existence and the selectivity connected with it has by many people been cited as authorization of the encouragement of the spirit of competition. Some people also in such a way have tried to prove pseudo-scientifically the necessity of the destructive economic struggle of competition between individuals. But this is wrong, because man owes his strength in the struggle for existence to the fact that he is a socially living animal. As little as a battle between single ants of an ant hill is essential for survival, just so little is this the case with the individual members of a human community…Therefore, one should guard against preaching to the young man success in the customary sense as the aim of life. For a successful man is he who receives a great deal from his fellow men, usually incomparably more than corresponds to his service to them. The value of a man, however, should be seen in what he gives and not what he is able to receive.”

In other words, with malice toward none, with charity toward all.

And what about the teachers themselves? What kinds of characters need to populate the kind of school which imparts a liberal and charitable education? Certainly not the benevolent dictators that filled up German schools in Einstein’s time or which still hold court in many schools across the world which emphasize personal authority over actual teaching.

“What can be done that this spirit be gained in the school? For this there is just as little a universal remedy as there is for an individual to remain well. But there are certain necessary conditions which can be met. First, teachers should grow up in such schools. Second, the teacher should be given extensive liberty in the selection of the material to be taught and the methods of teaching employed by him. For it is true also of him that pleasure in the shaping of his work is killed by force and exterior pressure.”

If Einstein's words have indeed been accurately transcribed, it is interesting to hear him use the words "grow up" rather than just "grow" when applied to teachers. I have myself come across stentorian autocrats whose behavior inadvertently suggested that their charges, the students, were in fact the adults in the room. Such teachers definitely need to grow up. Flexibility in the selection of the teaching material is a different matter. Here it's not just important to offer as many electives as possible; it's more important to give teachers wide latitude within their own classes rather than requiring them to constantly subscribe to a strictly defined curriculum. Some of the best teachers I had were ones who spent most of their time on material other than what was required. They might wax philosophical about the bigger picture, they might tell us stories from the history of science, and one of them even took us out for walks where the topics of discussion consisted of everything except what he was ‘supposed' to teach. It is this kind of flexibility in teaching that imparts the most enriching experience, but it's important for the institution to support it.

What about the distinction between natural science and the humanities? Germany already had a fine tradition in imparting a classical education steeped in Latin and Greek, mathematics and natural science, so not surprisingly Einstein was on the right side of the debate when it came to acquiring a balanced education.

“If a young man has trained his muscles and physical endurance by gymnastics and walking, then he will later be fitted for every physical work. This is also analogous to the training of the mind and the exercising of the mental and manual skill. Thus the wit was not wrong who defined education in this way: “Education is that which remains, if one has forgotten everything he has learned in school.” For this reason I am not at all anxious to take sides in the struggle between the followers of the classical philologic-historical education and the education more devoted to natural science.”

The icing on this cake really is Einstein’s views on the emphasis on general ability rather than specialized knowledge, a distinction which is more important than ever in our age of narrow specialization.

“I want to oppose the idea that the school has to teach directly that special knowledge and those accomplishments which one has to use later directly in life. The demands of life are much too manifold to let such a specialized training in school appear possible. Apart from that, it seems to me, moreover, objectionable to treat the individual like a dead tool. The school should always have as its aim that the young man leave it as a harmonious personality, not as a specialist. This in my opinion is true in a certain sense even for technical schools, whose students will devote themselves to a quite definite profession. The development of general ability for independent thinking and judgement should always be placed foremost, not the acquisition of special knowledge. If a person masters the fundamentals of his subject and has learned to think and work independently, he will surely find his way and besides will better be able to adapt himself to progress and changes than the person whose training principally consists in the acquiring of detailed knowledge.”

One might argue that it’s the failure to let young people leave college as ‘harmonious personalities’ rather than problem-solvers that leads to a nation of technocrats and operational specialists of the kind that got the United States in the morass of Vietnam, for instance. A purely problem-solving outlook might enable a young person to get a job sooner and solve narrowly defined problems, but it will not lead them to look at the big picture and truly contribute to a productive and progressive society.

I find Einstein's words relevant today because the world of 2018 in some sense resembles the world of 1931. Just as it loomed then because of the Great Depression, mass unemployment, this time because of artificial intelligence and automation, is a problem looming on the near horizon. Just as it had in 1931, authoritarian thinking seems to have taken root in many of the world's governments. The specialization of disciplines has led colleges and universities to increasingly specialize their own curricula, so that it is now possible for many students to get through college without acquiring even the rudiments of a liberal arts education. C. P. Snow's ‘Two Cultures' have paradoxically become more entrenched, even as the Internet once promised to break down the barriers between them. Meanwhile, political dialogue and people's very world-views across the political spectrum have become so polarized on college campuses that certain ideas are now rejected as biased, not on their own merits but because of some of their human associations.

These problems are all challenging and require serious thinking and intervention. There are no easy solutions to them, but based on Einstein’s words, our best bet would be to inculcate a generation of men and women and institutional structures that promote flexible thinking, dialogue and cooperation, and an open mind. We owe at least that much to ourselves as a supposedly enlightened species.

Posted by Ashutosh Jogalekar at 12:45 AM | Permalink | Comments (0)


Monday, December 04, 2017


Neuroprediction: Using Neuroscience to Predict Violent Criminal Behavior

by Jalees Rehman

Can neuroscience help identify individuals who are most prone to engage in violent criminal behavior? Will it help the legal system make decisions about sentencing, probation, parole or even court-mandated treatments? A panel of researchers led by Dr. Russell Poldrack from Stanford University recently reviewed the current state of research and outlined the challenges that need to be addressed for "neuroprediction" to gain traction. The use of scientific knowledge to predict violent behavior is not new. Social factors such as poverty and unemployment increase the risk for engaging in violent behavior. Twin and family studies suggest that genetic factors also significantly contribute to antisocial and violent behavior but the precise genetic mechanisms remain unclear. A substantial amount of research has focused on genetic variants of the MAOA gene (monoamine oxidase A, an enzyme involved in the metabolism of neurotransmitters). Variants of MAOA have been linked to increased violent behavior but these variants are quite common – up to 40% of the US population may express this variant! As pointed out by John Horgan in Scientific American, it is impossible to derive meaningful predictions of individual behavior based on the presence of such common gene variants.

One fundamental problem of using social and genetic predictors of criminal violent behavior in the legal setting is the group-to-individual problem. Carrying a gene or having been exposed to poverty as a child may increase the group risk for future criminal behavior but it tells us little about an individual who is part of the group. Most people who grow up in poverty or carry the above-mentioned MAOA gene variant do not engage in criminal violent behavior. Since the legal system is concerned with an individual's guilt and his/her likelihood to commit future violent crimes, group characteristics are of little help. This is where brain imaging may represent an advancement because it can assess individual brains. Imaging individual brains might provide much better insights into a person's brain function and potential for violent crimes than more generic assessments of behavior or genetic risk factors.

Poldrack and colleagues cite a landmark study published in 2013 by Eyal Aharoni and colleagues in which 96 adult offenders underwent brain imaging with a mobile MRI scanner before being released from one of two New Mexico state correctional facilities. The prisoners were followed for up to four years after their release and the rate of being arrested again was monitored.

This study found that lower activity in the anterior cingulate cortex (ACC, an area of the brain involved in impulse control) was associated with a higher rate of being arrested again (60% in participants with lower ACC activity versus 46% in those with higher ACC activity). The sample size and the number of re-arrests were too small to determine the predictive accuracy for violent crime re-arrests (as opposed to all re-arrests). Poldrack and colleagues lauded the study for dealing with the logistics of performing such complex brain imaging studies by using a mobile MRI scanner at the correctional facilities, as well as for prospectively monitoring the re-arrest rate. However, they also pointed out some limitations of the study in terms of the analysis and the need to validate the results in other groups of subjects.

Brain imaging is also fraught with the group-to-individual problem. Crude measures such as ACC activity may provide statistically significant correlations for differences between groups but do not tell us much about how any one individual is likely to behave in the future. The differences in the re-arrest rates between the high and low ACC activity groups are not that profound and it is unlikely that they would be of much use in the legal system. So is there a future for "neuroprediction" when it comes to deciding about the sentencing or parole of individuals?
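
The arithmetic helps explain the skepticism. Below is a back-of-the-envelope sketch in Python (my own illustration, not an analysis from the Aharoni study or the Poldrack review) which assumes the two ACC-activity groups were of equal size and simply plugs in the re-arrest rates reported above:

```python
# Illustrative only: assumes equal-sized groups and uses the re-arrest rates
# reported above (60% for lower ACC activity, 46% for higher ACC activity).

p_rearrest_low_acc = 0.60   # re-arrest rate in the lower ACC activity group
p_rearrest_high_acc = 0.46  # re-arrest rate in the higher ACC activity group

# Naive rule: predict "will be re-arrested" for low-ACC individuals and
# "will not be re-arrested" for high-ACC individuals.
accuracy = 0.5 * p_rearrest_low_acc + 0.5 * (1 - p_rearrest_high_acc)

# Trivial baseline: predict "will be re-arrested" for everyone.
base_rate_accuracy = 0.5 * (p_rearrest_low_acc + p_rearrest_high_acc)

print(f"ACC-based rule:       {accuracy:.2f}")            # ~0.57
print(f"predict-all baseline: {base_rate_accuracy:.2f}")  # ~0.53
```

Under these assumptions, a rule based on ACC activity alone would be right only about 57% of the time, barely better than the roughly 53% achieved by simply predicting re-arrest for everyone, which is hardly the kind of accuracy on which sentencing or parole decisions could rest.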

Poldrack and colleagues outline some of the challenges of brain imaging for neuroprediction. One major challenge is the issue of selecting subjects. Many people may refuse to undergo brain imaging, and those who struggle with impulse control and discipline may be more likely to refuse brain scanning or to move during the scanning process and thus distort the images. This could skew the results because those most prone to poor impulse control may never be part of the brain imaging studies. Other major challenges include using large enough and representative sample sizes, replicating studies, eliminating biases in the analyses and developing a consensus on the best analytical methods. Addressing these challenges would advance the field.

It does not appear that neuroprediction will become relevant for court cases in the near future. The points outlined by the experts remind us that we need to be cautious when interpreting brain imaging data and that solid science is required before rushing to premature speculation and hype about using brain scanners in courtrooms.

Reference

Poldrack RA et al. (2017). "Predicting Violent Behavior: What Can Neuroscience Add?" Trends in Cognitive Sciences, (in press).

Posted by Jalees Rehman at 12:20 AM | Permalink | Comments (0)


Monday, November 13, 2017


Remembrance: a Catharsis

by Humera Afridi

On a frigid winter afternoon in February, in a western suburb of Paris, I stood outside the 17th century home of the last female survivor of the Special Operations Executive, a clandestine British organization, also known as Churchill's Secret Army, or the Ministry of Ungentlemanly Warfare. During World War II, the SOE plotted dazzling acts of sabotage against Hitler's war effort through espionage and propaganda. Their guerrilla campaign was critical to the outcome of the war.

Having rung the bell, I waited in a bemused trance for the 91-year-old veteran, incredulous that I would meet her. In the quiet of the countryside, I discerned the faint sound of yapping dogs from beyond the high stone wall. A month earlier, an envelope with her name had slipped out of a folder amid the papers of a Dutch relative of Noor Inayat Khan, an undercover radio operator recruited by the SOE to serve in the Resistance. A tremor went through me as I examined the handwritten chit from twenty years earlier describing the terrible torture that Noor had endured at the hands of the Gestapo at the Dachau concentration camp before she was executed along with three other SOE women on September 13, 1944.

The note was addressed to a mureed, or spiritual disciple, of the Sufi Order International— the Sufi mystical organization founded by Noor's father Hazrat Inayat Khan in Europe—who had, in turn, shared it with Noor's cousin at The Hague, whom I was visiting. I assumed that the author of this note was dead like everyone else I wished to meet who had known Noor. Days after my return to New York, as I was sitting at my dining table my eyes grazed the spine of a book, The Secret Ministry of Ag and Fish, authored by none other than Noreen Riols.

The discovery of the book, a witty memoir of her time working as a decoy in the SOE under Colonel Maurice Buckmaster, head of F (for French) section, to which Noor had belonged, felt like nothing less than divine intervention. Riols had also worked at Beaulieu, the famous training school for secret agents that Noor attended. I was flummoxed. For the life of me, I couldn't trace how the book had arrived on my bookshelf. I scrambled to contact her publisher and was relieved to discover that Noreen Riols was very much alive, this woman whose first name is phonetically similar to Noor's in an uncanny assonance that seemed to further intertwine their SOE destinies.

I hold a glowing admiration for Noor: her sensitivity as a poet, author, artist and talented musician lived alongside the fierce spirit of the war heroine that she would become. Her covert and meticulous radio transmissions; her stubborn refusal to give any information to the Nazis— not even her own true name— over the course of ten months of torture and being shackled in solitary confinement; her adherence to the highest chivalrous ideals through the most frightening and ugliest of circumstances imaginable, elevated her, in my eyes, to saintly status. But, as I sat on a plush pink sofa in Mrs. Riols's handsome drawing room, I sucked in my breath.

"Noor should never have been sent," piped my elegant, silver-haired host in a high-pitched tone that fully expressed her disapproval. "She disobeyed orders. Buck asked her to return. When you're in the army, you don't disobey, you listen. Look what she got herself into! She was extremely brave in the end. But she wouldn't have had to be if she'd obeyed."   

On June 16, 1943, the sky bathed in the light of a full moon, Noor was flown by Lysander into a field by the banks of the Loir, southwest of Paris, the first female radio operator of the Resistance to be infiltrated into occupied France. Unbeknownst to her, she was received in the field by an infamous double agent, a charismatic man whose loyalty lay solely with himself. Noor had just barely finished her training when she was dispatched. Her evaluations by War Office officials had been wildly conflicting— "too emotional and too impulsive… too vulnerable"; "in spite of a great gentleness of manner seemed to have an intuitive sense of what might be in mind for her to do…"; "too highly strung and too nervous;" "not overburdened with brains"; "a fine spirit glowing in her." But Noor was an adroit radio operator and spoke French fluently, skills that were invaluable to the SOE's F Section.

She had been sent to France to work in the Cinema circuit, a new off-shoot of the large and influential Prosper network of secret agents headed by Major Francis Suttill who had decided, for reasons of security, to break the unwieldy reseau into smaller circuits. But days after Noor arrived, the Prosper circuit was blown. Suttill, along with several agents, was captured by the Gestapo and the network instantly collapsed. Suddenly, Noor, as radio operator, was the sole link between Britain and the Resistance. Hers became the "principal and most dangerous post" according to Colonel Buckmaster. Refusing orders to return— feeling it her moral duty to stay and serve the cause of liberty— Noor operated for twelve weeks, tapping out messages on her portable B MK II transmitter, cycling around the city, moving locations, and dodging close calls with the Nazis.

And then: mere days before Noor was scheduled to return to London, she was betrayed by the sister of the head of her circuit. Under the watchful eyes of the Gestapo, Noor was made to transmit encrypted messages to headquarters. She did so, adding a security check meant to alert Baker Street of her capture. Colonel Buckmaster, to tragic consequences, ignored the security check.

Noor had worked without the protection of a uniform or the safeguard of the Geneva Convention, receiving a salary of £350 sterling a year, deposited quarterly to an account at Lloyds Bank, Southampton Row, Victoria House, a few minutes' walk from where her American mother lived at 4 Tavington Street, absolutely unaware that her daughter was working as an undercover agent deployed to France. The true extent of Noor's bravery was revealed well after the war as testimonies arrived from people who had met her in prison. Noor's bust in Gordon Square, London, set on a plinth of natural stone, cites her honors—M.B.E (Member of the Most Excellent Order of the British Empire), G.C. (George Cross), and Croix de Guerre (awarded by France). Noor is one of just four women to be awarded the George Cross medal for gallantry by Britain. Plaques memorialize her at Knightsbridge, London; the Air Forces Memorial, Runnymede; Suresnes, France, and at the Dachau Concentration Camp Memorial site in Germany. The monument at Gordon Square, Bloomsbury, is particularly moving as the family lived in the neighborhood for a time between 1914 and 1920.

This Remembrance Day, I am reminded of a poignant remark by Mrs. Riols. "It's often the ones who die who get the recognition. But what about the ones who come back; the ones who live?" she'd mused, her voice an alloy of rue and stoicism. "We were all ignored after the war," she stated.

Remembrance is a mission for Mrs. Riols, who has played an active role in organizing an annual memorial service on May 6 at Valencay for the SOE agents of French section. "When I read out the names, I can see their faces," she said to me. "If I'd gone, perhaps my name would have been on that plaque, too." Her voice was wistful. The monument—two adjacent columns, black and white, linked by a sphere representing the moon—was unveiled in 1991 and symbolizes the partnership between SOE and the Resistance. The names of 104 agents are inscribed on the memorial—39 were women, 13 of whom never returned, tortured to death or killed in concentration camps.

Major Suttill's younger son, Francis Suttill, named after his father, has made remembrance the project of his life's work. After capture, Major Suttill's fate remained unknown for decades until a filmmaker approached Suttill and told him his father was alive and at large. "It was complete nonsense I discovered," Suttill, who saw his father for the last time when he was three years old, told me. But the filmmaker's comment instigated a desire to discover the truth and after intensive travel and research, Suttill learned that his father had been executed at Sachsenhausen concentration camp in March 1945. "It was a cathartic moment; moving," he states. His book, Shadows in the Fog: The True Story of Major Suttill and the Prosper French Resistance Network, describes his findings and is "a monument to not just my father but to all in the network." Major Suttill was posthumously awarded the Distinguished Service Order in Britain, but has received no recognition from France. "If I can get my father recognized in France, that would be my life's work for him completed," Suttill said to me by telephone.

Mrs. Riols has not yet been decorated in her native England but has, in the last decade, received a series of awards from the French government—including the Medaille de Reconnaissance de la Nation and the Legion d'Honneur. Her husband, Jacques Riols, a Captain in the First French Army, was awarded the Croix de Guerre during the war and, this year, was made a chevalier of the Legion d'Honneur.

In May, Mrs. Riols invited me to attend the memorial at Valencay. On a gusty, overcast day, I witnessed a ceremony with bagpipes, poignant speeches and old-world pomp. We sat on wooden chairs, facing the august monument situated in the middle of a roundabout. Mrs. Riols pronounced the names of the deceased; I thought about the journey to heroism that each person had set out on. I wondered, too, about survivor's guilt, disdain, envy—utterly human emotions—surely aroused on such occasions in those who'd lived, whose task had become remembrance, keeping  the flame of collective effort and their colleagues' sacrifices alive, gathering year after year, in the spirit of unity.

I walked up to the roundabout and studied the names of the agents, pinned to poppies and remembrance crosses. I spotted Noor's, different from all the others—her poppy pinned to a crescent moon. I was struck by the atmosphere of vital, active remembrance among the convivial group. Time collapsed, the agents sprang back to life in the stories of derring-do shared by guests, as if they were still happening, or were on the verge of doing so. And there Noor stood: young, ambitious, dreams still unfolding, sparkling on the frontier of possibility. Noor had been considered highly dangerous by the Gestapo and was the first female SOE agent to be sent to a German prison. Even as we celebrate our heroes, one person's freedom fighter is another person's terrorist, I thought.

While some maintain Noor was unsuited to the theatre of guerrilla war, the author of the tender and prescient Jataka Tales, this tiger-spirited descendant of Tipu Sultan, ruler of Mysore, will always be remembered for her unequivocal commitment to the cause of human liberty, and her chivalry in serving the nation whose bread she ate, loyal to the end to the soil of her residence.

*Humera Afridi is writing a book about the life of World War II heroine Noor Inayat Khan.

Posted by Humera Afridi at 12:15 AM | Permalink | Comments (0)


Monday, November 06, 2017


Do We Value Physical Books More Than Digital Books?

by Jalees Rehman

Just a few years ago, the onslaught of digital books seemed unstoppable. Sales of electronic books (E-books) were surging, and people were extolling the convenience of carrying around a whole library of thousands of books on a portable digital tablet, phone or E-book reader such as the Amazon Kindle. In addition to portability, E-books allow for highlighting and annotating of key sections, searching for keywords and names of characters, even looking up unknown vocabulary with a single touch. It seemed like only a matter of time until E-books would more or less wholly replace old-fashioned physical books. But recent data seems to challenge this notion. A Pew survey released in 2016 on the reading habits of Americans shows that E-book reading may have reached a plateau in recent years and there is no evidence pointing towards the anticipated extinction of physical books.

The researchers Ozgun Atasoy and Carey Morewedge from Boston University recently conducted a study which suggests that one reason for the stifled E-book market share growth may be that consumers simply value physical goods more than digital goods. In a series of experiments, they tested how much consumers value equivalent physical and digital items such as physical photographs and digital photographs or physical books and digital books. They also asked participants in their studies questions which allowed them to infer some of the psychological motivations that would explain the differences in values.

In one experiment, a research assistant dressed up in a Paul Revere costume asked tourists visiting Old North Church in Boston whether they would like to have their photo taken with the Paul Revere impersonator and keep the photo as a souvenir of the visit. Eighty-six tourists (average age 40 years) volunteered and were informed that they would be asked to donate money to a foundation maintaining the building. The donation could be as low as $0, and the volunteers were randomly assigned to either receiving a physical photo or a digital photo. Participants in both groups received their photo within minutes of the photo being taken, either as an instant-printed photograph or an emailed digital photograph. It turned out that the participants randomly assigned to the digital photo group donated significantly less money than those in the physical photo group (median of $1 in the digital group, $3 in the physical group).

In fact, approximately half the participants in the digital group decided to donate no money. Interestingly, the researchers also asked the participants to estimate the cost of making the photo (such as the costs of the Paul Revere costume and other materials as well as paying the photographer). Both groups estimated the cost at around $3 per photo, but despite this estimate, the group receiving digital photos was much less likely to donate money, suggesting that they valued their digital souvenir less.

In a different experiment, the researchers recruited volunteer subjects (100 subjects, mean age 33) online using a web-based survey in which they asked participants how much they would be willing to pay for a physical or digital copy of either a book such as Harry Potter and the Sorcerer's Stone (the print version or the Kindle E-book version) or a movie such as The Dark Knight (the DVD or the iTunes digital version). Participants were also asked how much "personal ownership" they would feel for the digital versus the corresponding physical items by completing a questionnaire in which responses to statements such as "feel like it is mine" ranged from "strongly agree" to "strongly disagree". In addition to these ownership questions, they also indicated how much they thought they would enjoy the digital and physical versions.

The participants were willing to pay significantly more for the physical book and physical DVD than for the digital counterparts even though they estimated that the enjoyment of either version would be similar. It turned out that participants also felt a significantly stronger sense of personal ownership when it came to the physical items and that the extent of personal ownership correlated nicely with the amount they were willing to pay. 


To assess whether a greater sense of personal ownership and control over the physical goods was a central factor in explaining the higher value, the researchers then conducted another experiment in which participants (275 undergraduate students, mean age of 20) were given a hypothetical scenario in which they were asked how much they would be willing to pay for either purchasing or renting textbooks in their digital and print formats. The researchers surmised that if ownership of a physical item was a key factor in explaining the higher value, then there should not be much of a difference between the estimated values of physical and digital textbook rentals. You do not "own" or "control" a book if you are merely renting it because you will have to give it up at the end of the rental period anyway. The data confirmed the hypothesis. For digital textbooks, participants were willing to pay the same price for a rental or a purchase (roughly $45), whereas they would pay nearly twice that for purchasing a physical textbook ($88). Renting a physical textbook was valued at around $59, much closer to the amount the participants would have paid for the digital versions.

This research study raises important new considerations for the digital economy by establishing that consumers likely value physical items more highly and by providing some insights into the underlying psychology. Sure, some of us may like physical books because of the tactile sensation of thumbing through pages or being able to elegantly display our books on a bookshelf. But the question of ownership and control is also an important point. If you purchase an E-book using the Amazon Kindle system, you cannot give it away as a present or sell it once you are done, and the rules for how to lend it to others are dictated by the Kindle platform. Even potential concerns about truly "owning" an E-book are not unfounded, as became apparent during the infamous "1984" E-book scandal, when Amazon deleted purchased copies of the book – ironically George Orwell's classic which decries Big Brother controlling information – from the E-book readers of its customers because of some copyright infringement issues. Even though the digital copies of 1984 had been purchased, Amazon still controlled access to the books.

Digital goods have made life more convenient and also bring with them collateral benefits such as an environment-friendly reduction in paper consumption. However, some of the issues of control and ownership associated with digital goods need to be addressed to build more trust among consumers and achieve more widespread adoption.

Reference

Atasoy O and Morewedge CK. (2017). "Digital Goods Are Valued Less Than Physical Goods." Journal of Consumer Research, (in press).

Posted by Jalees Rehman at 12:20 AM | Permalink | Comments (0)

Working On The Blockchain Gang, Part 2

 by Misha Lepetic

"Technology is a way of organizing the universe 
so that people don't have to experience it."
 ~ Max Frisch

Last time I set up a discussion around the premise of BitCoin, or more specifically, one of its underlying technologies, known as the blockchain. In the intervening time, I have been half-heartedly attending numerous events here in New York focusing on blockchain, especially in relation to non-financial implementations. I say half-heartedly, because the purported promise of blockchain has been constantly undermined by the quality of discussion at these events. I'll grant that crypto-currencies in general and the concept of blockchain specifically are initially challenging to grasp, but at the same time I left most of these events wondering if the panelists actually knew what they were talking about, or if what they knew was earth-shaking and recondite to the point that it couldn't responsibly be shared with the public.

Of course, this doesn't prevent people from making all sorts of radical claims befitting a technology that is in its infancy and is speculative at best. Consider the case of Democracy Earth, a startup that I heard present at one of these events. Democracy Earth is seeking to ‘disintermediate politics' through the deployment of blockchain. The closing line of the abstract of Democracy Earth's cri de coeur states:

We seek nothing less than true democratic governance for the Internet age, one of the foundational building blocks of an achievable global peace and prosperity arising from an arc of technological innovations that will change what it means to be human on Earth.

Go big or go home, as they say. But what do we really talk about when we talk about blockchain? In this piece I won't address the specifics of the technology (there are any number of half-decent explainers out there for that), but rather the complex series of maneuvers in which we implicitly engage when discussing nascent technologies.

*

A key aspect that makes a phenomenon such as blockchain so slippery is the sheer difficulty of describing how it works, or rather how it's supposed to work. Actually, there are two discursive moves here that occur simultaneously. The first occurs at the point of the elucidation of the concept itself: the cost of explaining the mechanism or technology in question. But the second move is a concealment of the larger, socio-political context in which that mechanism or technology resides. This concealment simply follows as a result of the attention required to understand the concept in question.

As an example, consider another innovation that has been much in the news lately: universal basic income. A guaranteed, non-means-tested income is not that tricky to convey; you can just about summarize it by saying, ‘everyone gets a check'. The real cost to our attention springs from the issues that immediately surround it: Is UBI effective? If so, how will we pay for it? To be sure, these are important considerations, and will profoundly characterize any attempt at designing and implementing any UBI scheme. It's not surprising, then, that the discussion tends to end at these questions: it's difficult enough to get consensus on UBI, even when the discussion is bounded within its own terms.

More consequential is the concealment of the larger picture. It's not uncommon for a discussion around UBI to follow these lines: "Let's assume that the government spends X dollars on providing entitlements to its citizens. If we just cut checks adding up to X to everyone, we will give people the money to spend as they wish. Those that need the services will spend their money in such a fashion." However, part of what makes UBI possible is the dismantling of the programs that provide those services. So the true consequence of UBI may be that needy and vulnerable populations no longer have access to services once explicitly designed to serve them, or that the cost of those services is no longer kept in check by government subsidy or regulation. The unconditional, distributed nature of UBI also strongly implies that those who require more care will see their purchasing power diluted: since everyone is getting the same check, if I have the additional burden of needing insulin for my diabetes, I am immediately at a disadvantage compared to someone who doesn't.

Consider another example of what narrowly constructed discussions around UBI do not address, via Brishen Rogers' recent piece in the Boston Review: 

How would a basic income impact workers and firms? It would surely protect workers against the economic harms of unemployment and underemployment by giving them unconditional resources, and it would enable them to bargain for higher wages and to refuse terrible jobs. But a basic income would do little to reduce corporate power, which is a function not just of wealth but of the ability of firms to structure work relationships however they wish when countervailing institutions—such as a powerful regulatory state—are absent or ineffective.

In other words, the gains to an individual's freedom are balanced against the sacrifice of much larger social, economic and political institutions that serve as an interface, interlocutor, arbiter and even bulwark. It reminds me of a comment I heard years ago from the legal scholar Gerald Frug: "There are two kinds of freedom: the freedom to decide what you want, as an individual; and the freedom we have, to decide what kind of society we want to create together." 

Put another way, the distillation of the discussion of UBI to the mechanics of financial distribution misses the fact that money really only facilitates vast networks of social relationships. Failure to see that creates instead an implicit calculus suggesting that a little bit of ‘free' money is an adequate substitute for a whole swath of social relationships and processes; but shouldn't you be grateful for free money? Unless we can have a discussion that begins by asking in what way we want UBI to change society, what we have here is little more than a cheap bribe, although perhaps an exceedingly effective one.

*

The same could be said for blockchain. But before treating blockchain, another idea needs to be tabled. I'd like to propose a new addition to the venerable taxonomy of "there are two kinds of people in the world": those who are told what to do by software, and those who tell software what to do. This sounds crude, but it draws from arguments first made, so far as I can tell, by Peter Reinhardt in 2015, when he noted that:

The software layer between the company and their armies of contractors eliminates a huge amount of middle management, and creates a worrisome disconnect between jobs that will be automated, and jobs of increasing leverage and value. This software layer generally has three parts: the user interface (UI) for the end customer, a programming interface (API) that actually dispatches a human worker, and a second interface for the worker to execute the task efficiently. The API component is the interesting and slightly disturbing part… What's bizarre here is that these lines of code directly control real humans.

Uber, of course, is the example par excellence of this sort of disintermediation, and another illustration of Rogers' point of ‘the ability of firms to structure work relationships however they wish'. But Reinhardt wants us to understand that once workers fall ‘below the API' there is little opportunity for them to advance, as the API replaces middle management. Furthermore, any task that exists below the API is itself the target of eventual automation. Uber drivers may or may not value their arrangement with the company, but it's no secret that Uber is furiously developing self-driving cars that aim to eventually replace all their human drivers. 
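
To make the idea of sitting 'below the API' concrete, here is a deliberately simplified sketch in Python. The names and structure are hypothetical rather than any real platform's code, but they show how a few lines in the dispatch layer end up directing a human being's next hour of work:

```python
# Hypothetical sketch of Reinhardt's three-layer structure: a customer-facing UI
# calls a dispatch API, and the API's only job is to assign a task to a person.

from dataclasses import dataclass

@dataclass
class Task:
    pickup: str
    dropoff: str

@dataclass
class Worker:
    name: str
    available: bool = True

WORKERS = [Worker("driver_417"), Worker("driver_982")]

def dispatch(task: Task) -> Worker:
    """The 'API layer': a few lines of code that directly assign work to a human."""
    for worker in WORKERS:
        if worker.available:
            worker.available = False
            return worker
    raise RuntimeError("no workers available")

# 'Above the API': the customer taps a button in the UI...
assigned = dispatch(Task(pickup="Market St", dropoff="Mission St"))
# ...'below the API': a human being now drives across town.
print(f"{assigned.name} dispatched")
```

Everything above the dispatch call is software talking to software; everything below it is a person taking instructions from code, with the middle-management layer reduced to a loop over available workers.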

*

As with any technology, the real question to be asked is who benefits. Time and again, the history of technology is more correctly a history of advantages that accrue to those populations already in a position to leverage it. But an equally important trend is one that is diametrically opposed to, and yet co-exists with, this well-understood ‘early adopter' rhetoric. The final, pervasive characterization of a technology is its deployment to better control marginal populations; these are the true advantages of commodification and scale. Those with paltry bank accounts may find themselves with poor credit and little recourse to understanding why, let alone how to fix it. Others, of the wrong race, have been ‘redlined' from owning homes in neighborhoods deemed off-limits by the concerted efforts of both corporations and the government, and administered via the maps that gave the practice its name. And as I pointed out in a February 2016 column here at 3QD, the rapid development of robotic and artificial intelligence technologies is seeing increasing application aimed at marginal populations that require ‘management': the elderly, criminals whether convicted or merely accused, and even children.

In fact, once you take the time to think about it, technology is really a social system, and what we view as the ‘technology' itself is just a proxy. Too often we are distracted by the physical artefact (e.g., "How [insert something you can touch] changed society forever") and wind up giving it too much emphasis. As futurist Bruce Sterling puts it, objects are ‘frozen social relationships'. He was speaking from the point of view of design, but much the same could be said for any technology, except that technologies tend to be dynamic, and therefore to embody and shape processes. And now that technology is abstract (in the form of "code"), or merely pretending to be abstract (in the form of "the cloud," which is just a lot of computers sitting in a spot where you can't see them), it's even more difficult to think of the phenomenon correctly, since we are grasping at how to ‘see' the technology, when what we should be seeing are the relationships it variously creates, cultivates, conceals, devalues and destroys.

Most importantly, technology is a heuristic that makes people legible to the authors and owners of that technology, and further down the line, to the subscribers, who are paying for the privilege of access. It is about being seen, of being subjected to a gaze (think of watching your Uber driver's car icon gradually make its way to you on the map presented on your smart phone). Of course, this legibility is only as good as the attributes that the system chooses to quantify, and therein lies the rub.

*

How does blockchain propagate the same set of risks? At first glance, it seems like distributed ledger technology struggles against these questions, and is predicated on disintermediation from the very institutions that seek this sort of control over any given group. After all, the founders of Democracy Earth want to rid us of the meddlesome, corrupt and conflicted ‘intermediaries' that infest our politics. (Wait, aren't these other people? Shush, never mind). 

Keep in mind that blockchain, like most of the technologies generated by Silicon Valley, has libertarian roots. Taking the Enlightenment's sovereignty of the individual to its final reductio, we define freedom only as being beholden to no one. This is only one half of Frug's formulation above; what's missing is the second half: we are just as equally beholden to everyone. What does this half-world look like? In The Atlantic, Ian Bogost has written one of the only mainstream media pieces of which I am aware that casts a skeptical eye on distributed ledger technologies, and he lays out the landscape succinctly:

The [anarcho-capitalist] worldview only supports sovereign individuals engaging in free-market exchange. Neither states nor corporations are acceptable intermediaries. That leaves a sparsely set table. At it: individuals, the property they own, the contracts into which they enter to exchange that property, and a market to facilitate that exchange. All that's missing is a means to process exchanges in that market.

Even disregarding the way in which this worldview waves off society (while also noting how far this moves beyond even Margaret Thatcher's dictum that "there's no such thing as society. There are individual men and women and there are families"), the anarcho-capitalists also presuppose a minimum viable agency that each individual has. People voluntarily enter into transactions - since that's all there seems to be in this vision - and that's pretty much it. The ‘means to process exchanges' is a nice way of re-stating the stance that no one is to be trusted, and that's precisely what distributed ledger technologies purport to solve. 

Except what if you're not feeling particularly flush with agency? What if you are a member of any number of vulnerable or marginalized populations? You may not have much choice about whether you'd like to enter into a transaction or not. 

*

As of August 2017, there are at least 15 blockchain projects being piloted by various arms of the United Nations. While I laud the UN for being unusually proactive in adopting potentially innovative approaches, I'll consider one project in particular: the World Food Programme's foray into blockchain for the distribution of food aid to Syrian refugees in Jordan. The pilot does everything you'd want it to do: it tracks aid as it makes its way through the supply chain, reducing waste and fraud and dramatically cutting down on payments to intermediaries. This is all very well and good.

However, things get a bit anxiety-inducing when one considers that, in order to be eligible to receive aid, refugees must submit to an iris scan, the results of which are used to verify identity. More importantly, each time a refugee's scan is performed, that data is added to the pilot's blockchain. Additionally, this is a blockchain that has been ‘forked' off of the main platform, known as Ethereum, and is entirely under the UN's control. And the future is designed to scale. Houman Haddad, the executive leading the project for WFP:

…envisions a future where refugees control their own cryptographic keys to access their funds (or "entitlements," in aid worker jargon). This element may be crucial to making aid more easily and widely available because the keys would unlock data that's currently stuck in different aid agencies, including medical records from the World Health Organisation, educational certificates at UNICEF, and nutritional data from WFP.

It's perfectly understandable that the UN would want to go down this path. But at the same time, the refugees in question are now recorded permanently as having fled their country. While there are ample assurances that cryptography will preserve their anonymity, I can assure you that nothing is unbreakable. In fact, it doesn't even need to be unbreakable, merely deducible: blockchain forensics is already a reality. Much as mobile phone towers are used to triangulate the locations of specific phones, and browsing habits can identify users whose identities have been pre-emptively anonymized, these rolls of refugees' names will eventually be at risk of being outed. Once this happens, they may face discrimination at home, or perhaps may never be allowed to return. Make no mistake: being a refugee is a mark. If the outing is done by criminal gangs, they may be victimized or subjected to trafficking, or at least exploited for their fragile identities. And it will all be perfectly recorded.
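
To see why "perfectly recorded" is not hyperbole, here is a toy sketch in Python (my own illustration, not the WFP pilot or Ethereum itself) of how an append-only ledger works: each entry commits to the hash of the entry before it, so editing or deleting an old record breaks every later link.

```python
# Toy hash-chained ledger: illustrative only, not any production blockchain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis placeholder
for record in ["aid disbursed to beneficiary A", "aid disbursed to beneficiary B"]:
    block = {"record": record, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    """Re-walk the chain and check that every block still points at its predecessor."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

print(verify(chain))             # True
chain[0]["record"] = "redacted"  # tamper with history...
print(verify(chain))             # False: the chain no longer validates
```

And because many parties hold copies of the same chain in a distributed ledger, even a locally "successful" edit is immediately detectable against everyone else's copy; an identity, once entered, is effectively there for good.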

As Bogost writes, "instead of defanging governments and big corporations, the distributed ledger offers those domains enormous incentive to consolidate their power and influence." Why should the UN be any different? Judging from Haddad's ambitions, it isn't. This is why, instead of attempting to understand how blockchain works, we'd be better off asking ourselves what kind of world we want to live in, before we find ourselves irrevocably quantized in digital amber, below the API.

Posted by Misha Lepetic at 12:05 AM | Permalink | Comments (0)


Monday, October 30, 2017


A book burning in Palo Alto

by Ashutosh Jogalekar

The flames crackled high and mighty, scalping the leaves from the oak trees, embracing bark and beetles in their maw of carbonized glimmer. The remains of what had been lingered at the bottom, burnt to the sticky nothingness of coagulated black blood. The walls of the stores and restaurants shone brightly, reflecting back the etherized memory of letters and words flung at them. Seen from the branches of the trees, filtered through incandescent fire, the people below were mere dots, ants borne of earthly damnation. A paroxysm of a new beginning silently echoed through the cold air. Palo Alto stood tall and brightly lit tonight.

Bell’s Books, a mainstay of the town for a hundred years, projected its ghostly, flickering shell across the square, its walls stripped of everything that ever dwelt on them, now pale shadows of a dimming past. A few months back they had come to the store, crew cuts and stiff ties, smiles of feigned concern cutting across the room like benevolent razors. As a seller of used and antiquarian books Bell’s posed a particular problem, riddled through and through as it was with undesirables. The owner, an old woman who looked like she had been there since the beginning of time, was told quietly and with no small degree of sympathy how they did not want to do this but how they needed to cart out most of her inventory, especially because of its historical nature.

“We’re sorry, ma’am, but ever since they passed the addendum our directives have grown more urgent. And please don’t take this personally since yours is not the only collection to be cataloged: over the last few weeks we have repeated this exercise at most of the area’s stores and libraries. To be fair, they are offering healthy compensation for your efforts, and you should be hearing back from the grievances office very soon.”

With that, three Ryder trucks filled with most of the books from Bell’s had disappeared into the waning evening, the old woman standing in the door, the wisps of sadness on her face looking like they wanted to waft into the air and latch on to the gleaming skin of the vehicles. What happened to her since then, where she went and what she did was anybody’s guess. But the space where Bell’s stood had already been sold to an exciting new health food store.

Addendum XIV to the First Amendment had passed three months ago with almost unanimous approval from both parties. In an age of fractured and tribal political loyalties, it had been a refreshingly successful bipartisan effort to reach across the aisle. In some sense it was almost a boring development, since large parts of the First dealing with the right to peaceably assemble had been left unaltered. The few new changes added some exceptions to the hallowed Constitutional touchstone; these included an exception for public decency, another one for offending group sensibilities, and a third one for protection of citizens from provocative or offensive material. That last modification had been solidly backed by data from a battery of distinguished psychologists and sociologists from multiple academic centers, hospitals and government agencies who had demonstrated in double blind studies how any number of literary devices, allusions and historical references produced symptoms especially among the young that were indistinguishable from those of generalized anxiety disorder. Once the Surgeon General had certified the problem as a public health emergency, the road to determined political action had been smoothed over.

Most importantly, Addendum XIV had been a triumph of the people’s will. Painless and quick, it was being held up as an exemplar of representative democracy. The change had been catalyzed by massive public demonstrations of a magnitude that had not been seen since the last war. These demonstrations had begun in the universities as a response against blatant attacks on the dignity of their students, marshaled through the weaponization of words. The fire had then spread far and wide, raging across cities and plains and finally setting the hearts and minds of senators and congressmen ablaze; whether through fear or through common sense was at this point irrelevant. In what was a model example of the social contract between elected public officials and the people, much of the final language in Addendum XIV had been left almost unchanged from drafts that emerged from spirited and productive town hall meetings. It was grassroots government at its best. After years of being seen as almost a pariah, the country could again expect the world to look at it with renewed admiration as a nation of laws and decent people.

The police had put a perimeter around the fire, cordoning it off and trying their best to prevent spectators from approaching too close. But they were having a hard time of it since the whole point of the event was to be a community-building exercise where the locals contributed and taught each other. An old cherry picker had been recruited to drop its cargo into the fire from the top, but the real action belonged to the people. Young and old alike were cautiously approaching the bright burning flames and tossing in their quota, and the younger crowd was flinging everything in quite enthusiastically. Parents who were trying to carefully keep their gleeful children from getting too close were simultaneously balancing the delicate act of teaching their kids how to do their part as civic-minded citizens. A mother was gently helping her four-year-old pick a slim volume and toss it into the gradually growing conflagration while the father stood nearby, smiling and returning the child’s eager glances. It was hard to contain the crowd’s enthusiasm as they obeyed the overt guidelines of the government and the silent dictates of their conscience. The police knew that the people were doing the right thing, so they finally became resigned to occasionally helping out the crowd rather than trying to prevent them from being singed by the heat. An officer took out his pocketknife and knelt to help a man cut the recalcitrant piece of twine that was tying his sheaf of tomes together.

Based on the official state and federal guidelines, everyone had filled up their boxes and crates and SUVs and driven here. Driven here from Fremont and Berkeley – hallowed ground of the movement’s sacred origins – and some from as far as Livermore and Fresno, even braving the snaking line of cars on the Dumbarton Bridge to the East. They cursed under their breath for not being allowed to organize similar local events in their own cities, but the government wanted to build community spirit and did not want to dilute the wave of enthusiasm that had swept the nation. Rather than have several small events, they wanted to have a few big ones with memorably big attendance. Palo Alto afforded a somewhat central meeting point as well as a particularly convenient one because of its large repository of used bookstores and university libraries. The Ryder Company had helpfully offered generous discounts for use of their trucks. Stanford and Berkeley had been particularly cooperative and had contributed a large chunk of the evening’s raw material; as torchbearers of the movement, they had had no trouble gathering up enough recruits. Berkeley especially had the White House’s blessing and federal funding had once again started to flow generously to the once cash-strapped institution. Now University Avenue was backed up with Ryder trucks stretching back all the way to Campus Drive, mute messengers of information overflow relieved to be offloading their tainted cargo.

As with most events like this, the restaurants were working overtime, offering happy hour deals and competing with each other for the attention of the diverse crowd. The $12.99 double slider special at Sliderbar had been sold out, and Blue Bottle Café in HanaHaus was going crazy trying to cater to their hyper-caffeinated consumers who especially relished the buzz from the establishment’s famous Death Valley Cold Brew. Groups of students could be seen working in relay teams; as one group helped unload the trucks and consign the contents to the flames, the other went back and brought back coffee and donuts for renewed energy. A family stood outside Palo Alto Creamery, the children squealing with delight as their ice cream melted quicker between their fingers in the glare of the heat. The parents watched with familiar exasperation, especially since there were three more bags to take care of. The extra generators at the creamery were having a hard time keeping up, but the huge size of the crowd seemed to please the crews even though they had been working since 3 AM.

To facilitate the transition, the government had mandated paid vacation for one day so that they could deploy agents who would visit homes and take stock of the inventory. Just like they did for jury duty, they sent out letters to everyone confirming the date and time. I had to postpone once since I had still not finished counting up my collection. I wanted to postpone again, but the second letter made the urgency of the matter a bit clearer. Two boyish-looking agents had stopped by and efficiently noted down everything as they gently took volumes off my shelves and put them back. Once they were sure about the total, they handed me a piece of paper confirming the number, along with information on the date of the event in Palo Alto. “We appreciate your help in this, Sir; you have no idea how some people have offered resistance to even such a simple call to community service. It’s especially absurd since it was their own friends and family members who had gone out of their way to come to all those town hall meetings and demand this! In any case, we’ll see you on the 27th. You have a nice day now.” I nodded wearily.

I had been reluctant to commit myself to that first milestone. There was another day late in November when those who couldn’t make it for some reason the first time around could go. I decided to go to the October event all the same; I had had nothing much to do in the evenings ever since all the bookstores had been either closed or reduced to selling meager, uninteresting fare.

They were offering discount parking in the lot on Waverley Street, so I parked there and took a right on University Avenue. As I turned, a blast of hot air hit me, as if trying to wash away memories of an unwanted past. At the end of the street, flanked by shadows of the moving crowd, was the conflagration. The crowds around me were moving to and fro between the end of the street and the businesses along the sides, although the overall movement seemed to be toward the amorphous, flaring yellow shape-shifter in the distance. I suddenly saw a familiar face at the side. It was Sam from HanaHaus; the establishment had opened an extra counter on the sidewalk to quell the crowds inside. “Hey, how’s it going? Some crowd huh?” waved Sam. I waved back but Sam’s hand quickly dissolved in the flurry of hands grabbing coffee cups and placing orders. I kept on walking and quickly reached the police perimeter. “Hi, do you have anything to donate?” asked an officer. I told him that I was going to take advantage of the extended deadline. That’s ok, he said; based on the conversations he had had, people had such large collections that many of them were going to be forced to come back anyway. As a family with three young kids approached with their bags, he asked me to stand back so he could help them.

As I stepped back, I took in the scene. The fire was gleefully lofting paper and pages up in a whirlwind of nihilistic ecstasy, the frayed, burning edges of pages proclaiming their glowing jaggedness, their silent letter-by-letter obliteration. Nearby, one group of children was dancing in a circle with others, enjoying the momentary dizziness induced by the motion. Their parents were keeping a close watch on them as they went on with their routine. Occasionally a child would quickly run to his or her parents’ side, pick up a volume and toss it laughing and screaming, even as the other children yelled about the interruption. They would then join the circle again and continue the dance, their own movements alternating with the movements of the soot as it went round and round the pyramid of burning paper.

It was then that I saw some of the names; it was odd that I shouldn’t have recognized them before, but it might simply have been because they were so ubiquitous that they had been rendered invisible. There were Lee and Kafka, Baldwin and Joyce, Ovid and Atwood, Plato and Melville, Rushdie and Russell, Twain and Conrad, Rhodes and Faulkner, Pynchon and Sagan, Woolf and Dostoyevsky, McCarthy and Stein. They were there because they were too colonial, too non-colonial, too postcolonial, too offensive, too profane, too sensitive, too traumatic, too objective, too white, too black, too egalitarian, too totalitarian, too maverick, too orthodox, too self-reflective, too existential, too modern, too postmodern, too violent, too bucolic, too crude, too literary, too cis, too trans, too religious, too secular, too nihilistic, too meaningful, too anarchist, too conformist, too feminist, too masculine, too languid, too unsettling, too horrific, too boring, too much ahead of their times, too much relics of the past, too much, simply. They were there because sensibilities had been offended, because words had been weaponized.

Most of them were lined up in bag after bag next to the fire, gagged and bound, silently screaming against the passions of men. The ones that had already made it into the void were gone, ideas becoming null, breath turned into air, but some had stumbled back from the high pile with various parts charred and curled up in half-dead configurations, painfully trying to remain part of this world. Some of the names were partially gone, formless echoes being slowly stuffed back into the grave. The ones which had photos of their authors had these photos metamorphosed into things begging to be obliterated: a woman with only her smile burnt off, looking like a gargoyle without a mouth; a man with his eyes masterfully taken out by well-placed glowing embers; another one where the heat had half-heartedly engraved dimpled plastic bubbles on the face of a female novelist known to have a pleasing countenance, now looking like a smallpox victim with a jaw left hanging.

It was then that I noticed another breed of spectator rapidly moving through the crowd. Photographers hired by both government and private agencies were canvassing the scene like bounty hunters looking for trinkets of a fractured reality which they could take back to their studios and immortalize in its isolated desolation. One of them was the noted photographer Brandon Trammel, from the California Inquirer. I could see him now on the other side of the fire, his body and the shimmering flames appearing to coalesce into one seamless disintegration. At a certain temperature human beings and paper become indistinguishable, guilt-ridden souls shredded apart into their constituent atoms, sons and daughters of the whims of men consumed by fire and fury. Trammel was taking photos of the men and women and children around the fire, etching their cries of glee and solemn duty into permanent oblivion.

Moving around the fire like a possessed man furiously scribbling down the habits of an alien civilization, he came over to my side. I caught sight of a half-burnt title on the ground. It was a familiar volume from another era, an era now looking like the world in a snow globe, eroding now through the obscuring glow of time. “Hey”, I yelled at him, “Here, let me pose for you”. I picked up the book and threw it at the red wall with all my might. I heard a click, but at the last moment the charred remains of its edges had disintegrated in my hands and it fell short by a few feet. I desperately looked around. Another one was within sight. I hastily scampered over, picked it up and looked at Trammel, eager and wild-eyed. “Again!”, I screamed at the top of my voice, and cast it into the fire.



Monday, October 23, 2017


Trying to understand random violence

by Emrys Westacott

A man goes to the doctor because he is worried about a possibly malignant tumor on his neck. Two weeks later he goes back, concerned about another growth on his spine. Two more weeks and he again goes to the doctor to ask about a lesion in his mouth. Each time the doctor examines him carefully, conducts tests, and consults with colleagues. But each time, the physicians concern themselves mainly with the question of why the lesions appear where they do. Why has the tumor appeared on the neck rather than on the liver? Why on the spine and not in the brain? The patient can't help feeling that they are neglecting the more important question: why are tumors appearing in the first place?

Listening to some of the news coverage following the mass shooting in Las Vegas on October 1st, when Stephen Paddock opened fire from a hotel window on the audience at an outdoor concert, killing 58 and injuring over 500, I felt rather like this patient. Reports on NPR would typically begin: "Police still don't know why Stephen Paddock opened fire on…" Of course, it is legitimate and important to ask why this particular individual suddenly committed mass murder, just as it's worth asking why tumors appear where they do. Establishing correlations between acts of random violence and elements in the perpetrator's life story, situation, or psychological profile could possibly help us anticipate and thereby forestall future tragedies. But we also need to ask the more fundamental question. Why are lesions appearing on the body? Why are spree killings much more common in the US than in other countries?

First, it is worth establishing a few facts. According to a CNN report, there were 90 mass shootings in the US between 1966 and 2012. These are shootings that kill more than four people but don't include gang violence or incidents involving several family members. They include such spree killings as those at the Orlando night club (June 2016, 49 killed), Sandy Hook (Dec. 2012, 27 killed), and Virginia Tech (April 2007, 32 killed). In the rest of the world during this period, there were 292 incidents of this sort. And compared to other economically developed countries, the US is a total outlier. So although violent crime has declined significantly in the US over the last 20 years, the question still remains: why are there so many more spree shootings in the US than in other countries?

There is no single explanation. And there is no simple explanation. In discussions of this question people are often tempted to identify one factor as the cause. But that is a mistake. The question has to do with probabilities. Why is a spree shooting so many times more likely to occur in the US than in Canada, or Germany, or Japan? The answer, in my view, lies in a confluence of many factors. Taken individually, few are unique to the US, and none of them really explain the phenomenon in question; but taken together, they perhaps make it more comprehensible.

One could write a book on each of these factors. In fact, books have been written on each. Obviously, they aren't all of equal weight. But together they add up to a state of affairs where spree killings become more likely. Here are some of the most important.

·      A history of violence that is not state-sanctioned. Few other modernized countries have anything like the wild west in their recent history on their own soil.

·      The celebration of violence. This is especially evident in video games, TV shows, and films. Blockbuster movies from early John Wayne to today's superheroes typically involve some individual or group solving a problem by violence. In many cases the film is little more than a delivery system for a sustained shoot-out.

·      Militarism. The US armed forces are more in the news, are closer to politicians, and are more venerated than in other democratic countries.

·      Individualism. American culture loves to celebrate the myth of the tough, determined individual who singlehandedly achieves his goals (and yes–it's usually a man), whether it be Clint Eastwood out for revenge, or Mark Zuckerberg creating Facebook.

·      The cult of the thrill. People achieving tranquil contentment doesn't make good copy or good entertainment. So we regularly hear people describe some experience, often one accompanied by great risk, such as an extreme sport, or participating in a military action, as a moment when they felt really, truly alive. For the sake of such moments, we learn, they are willing to risk or even sacrifice everything.

·      A highly competitive culture. Capitalism is based on competition, and as Marx pointed out, the character of the economic system percolates throughout a society's culture. Donald Trump gives voice to the basic assumption underlying the accompanying ideology: there are winners, and there are losers–and if you're not one, then you're the other.

·      The cult of celebrity. The two main kinds of winning are fame and fortune. For someone who can't see his way to either but who has absorbed the values of the celebrity culture, a desperate substitute for fame is infamy. Mass media and social media have perhaps inflamed the desire for fame–which is itself a hypertrophied form of the more or less universal desire for respect.

·      The ideology of meritocracy. This is the view that each individual pretty much deserves to be where they are in the socio-economic hierarchy. If they're at the top it's because they are smart and worked hard. If they're lower down, that must be because they are either lazy, dumb, or both. This myth pervades American culture. Consequently, those occupying the lower rungs inevitably feel looked down upon, which fosters resentment. And if they have internalized the ideology, their self-respect will be threatened.

·      Inequality. Many studies have documented the growing economic inequality in the US over the past 30 years, with the top 1% pocketing an increasing share of the country's wealth. One can reasonably expect this to breed resentment, and not just against the "haves," but also against the system that so clearly favours the rich.

·      Feelings of powerlessness. The basic value underlying a democratic political system is that of autonomy. A country is more democratic the more the people genuinely exercise self-determination. By this standard, American democracy is decidedly unhealthy. Money dominates politics; lobbyists exercise huge influence; districts are shamelessly gerrymandered; candidates and parties can win the most votes and still not gain power; politicians in Washington are far more concerned with being re-elected than with doing what is right or representing their constituents. As a result, millions feel helpless.

·      Loneliness. Robert Putnam's Bowling Alone is perhaps the best-known study of the breakdown of community bonds in many parts of America. Spree killers are often isolated figures, with few friends or social commitments.

·      A poor health care system. Many–some would say all–spree killers are mentally ill. It is reasonable to suppose that if everyone suffering from mental illness could rely on being treated promptly and free of charge, more of those who need treatment would get it.

·      Changing sex roles. Virtually all mass shooters are men. One factor feeding the confusion, bitterness and resentment that sometimes expresses itself in acts of violence may well be the declining authority and status of men in relation to women that has taken place over the past century, both in the family and in the public sphere.

·      The death of God. In general, the US is more religious than other modernized countries. For all that, churchgoing has declined in America too, and for an increasing number of people religion is less central to their lives and to their view of life. The point here isn't that non-believers are more likely to be killers because they aren't afraid of hell–although that may possibly be true. The deeper point is that in the largely secular, materialistic, hedonistic culture that has arisen, many people feel the lack of any profound purpose or meaning to life. Some are comfortable with this. Some feel it as a vague sort of discomfort preventing them from achieving contentment. And in some, that vague discontentment can become something more desperate.

·      The fetishism of guns. It's a fine thing to be a genuine collector, whether of stamps, fossils, or historically interesting weaponry. But the people who assemble frightening arsenals, who believe that the second amendment protects the right to own virtually any kind of weapon (so much for original intent!), who expect, absurdly, that government agents will soon be knocking on the door to take away their hunting rifles, and who fantasize about the delicious moment when they will actually put their guns to use in protecting their property and their family, are not really collectors. They are gun fetishists. And through the NRA, and the politicians who kowtow to the NRA, they obstruct the passage of sensible gun control laws.

·      The lack of sensible gun control. Last but not least. The gun fetishists are perfectly right when they point out that some particular proposed measure (e.g. tighter background checks) would not have prevented some particular killing. But that is no reason not to do something about the number and type of guns in circulation, the capacity of magazines, the ease with which even disturbed individuals can acquire weapons, and so on. To say it again, we are dealing with probabilities. Eliminating gun violence in any foreseeable future is not feasible. But reducing its likelihood is.

These are some of the factors that in my view help explain why random shootings are more common in the US than in other countries. Readers can no doubt think of others. Many of these factors overlap and are interlinked.

Critics may say about any of them: You're telling me that there are more mass shootings in America because of factor X? Then how do you account for the fact that factor X is just as true of country Y where they don't have this problem? But this criticism misrepresents my argument. No one factor by itself constitutes an explanation, and taken in isolation they may even seem to have little to do with spree killings. But they are like the ingredients in gunpowder. By themselves they are not volatile, but when combined they become potentially explosive.



Monday, October 09, 2017


The Far Right Movement in Germany and the Burden of History

by Jalees Rehman

A friend who was invited to serve as a visiting professor at a German university recently contacted me and asked whether staying in Germany would be safe for him and his family. His concern was prompted by the September 2017 election for the federal German parliament in which the far-right AfD (Alternative für Deutschland, translated as "Alternative for Germany") party received approximately 13% of the popular vote. AfD had campaigned on an anti-immigrant and anti-Muslim platform, and has been referred to by various media outlets as a nationalist, racist, far-right populist, right-wing extremist or even Neo-Nazi party. For the first time since World War 2, a far-right or nationalist party would be sitting in the federal German parliament, having crossed the 5% minimum threshold designed to keep out fringe political movements. Even though all other political parties had categorically ruled out forming a government coalition with the AfD, thus relegating it to an opposition role in parliament with only a limited role in policy-making, my friend was concerned that its success could be indicative of rising neo-Nazism and hatred towards immigrants or Muslims. As a Muslim and visibly South Asian, he and his family could be prime targets for right-wing hatred.

I was flabbergasted by his concern. What surprised me most was that someone living in the US would be worried about safety and racial prejudice in Germany. Violent crime rates in major German cities are much lower than those of their US counterparts. While it is true that AfD garnered 13% of the popular vote in Germany, the US president who also ran on a similar populist, nationalist and anti-immigrant platform (with promises of building walls and enacting Muslim bans) received 46% of the popular vote! Many of the views of the AfD - for example the claims that traditional Islam is not compatible with Western European culture and the constitution, that immigrants and refugees represent a major threat to the economy and safety or that multiculturalism and progressive-liberal views have betrayed the ideals of the country's heritage – are increasingly becoming mainstream views of the ruling Republican party in the US. White supremacists, supporters of confederate ideology and neo-Nazis now feel emboldened to hold rallies in the US, knowing that they might only receive lukewarm or relativistic criticism from the US government whereas such acts would be unequivocally condemned by the German government. Racial or religious prejudices held by members of the government and the ruling party can lead to severe institutional reprisals against individuals. When these views are held by a minority party, there is much less danger of immediate institutionalized discrimination and persecution by the government or law enforcement.

So why is it that the 13% vote for AfD is causing such concern, both in Germany and outside of Germany?

One of the obvious reasons is Germany's history. If the AfD's emergence were to foreshadow a re-awakening of Nazi ideology, then it could indeed have devastating consequences for Germany and the world in general. But there is no real evidence to suggest that Nazi ideology is espoused by the AfD leadership or by its base. Terms such as neo-Nazism and fascism are readily used by opponents of the AfD to describe the party but the AfD tries to clearly distance itself from Nazism. The AfD does not accept membership applications from former members of the NPD – a right-wing extremist fringe party in Germany with an ideology that was far closer to that of the Nazis. The AfD not only disavows anti-Semitism, it has successfully recruited many Jewish members and offered them leadership roles in the party by portraying itself as a bulwark that will protect German Jews from Muslim anti-Semitism. These approaches effectively counter accusations of Nazism but they have not convinced all. The president of the Central Council of Jews in Germany, Josef Schuster, recognizes that there is a growing problem with anti-Semitism perpetrated by Muslims in Germany but is not ready to accept the AfD as an ally. It may be advantageous to scapegoat Muslims in the current political climate but who is to say that the AfD won't switch its scapegoat to Jews in the future if that were politically more expedient?

Part of the confusion about what the AfD really stands for is that it has rapidly evolved over the course of just a few years. It started out in 2013 as a party founded by economics professors, who were opposed to Angela Merkel's handling of the euro crisis and the loss of Germany's fiscal sovereignty in the European Union. But once it became apparent that the feared massive economic crash and recession had been averted (at least transiently), it morphed into an anti-Islam and anti-immigrant party. This modified AfD ousted its co-founder, the economics professor Bernd Lucke, from his leadership role in 2015. The party gained far more traction with its anti-Islam and anti-immigrant views after Merkel's government allowed more than 1 million refugees (predominantly from Syria but also from other countries in the Middle East) to enter Germany.

During this evolution, the AfD also became increasingly populist. Jan-Werner Müller, a German political scientist and professor at Princeton University, defined the key characteristics of populism in his recent book Was ist Populismus? ("What is Populism?"). Populist movements portray themselves as anti-establishment or anti-elite, but a second key element of a populist movement is its attitude towards pluralism. Müller uses the phrase "Wir sind das Volk" ("We are the people") that was chanted by the East German demonstrators during the final months of the DDR in 1989 to illustrate anti-pluralism. The "We" can be an inclusive "We" in the sense of "We are the people, too. Let us have a say!" This may be an apt description of the DDR demonstrators, where several political factions demonstrated side-by-side in opposition against the socialist dictatorship. However, in populist movements, the "We" is exclusive: "Only we represent the people!" Those who do not agree are seen as traitors. In the past 2 years, the AfD leaders and base increasingly began to claim this exclusivity. Merkel was accused of betraying Germany and colluding with leftists, environmentalists and Muslims against the true values of the German people. Such anti-pluralism is antithetical to democracy and is thus a major cause of concern for democratic parties and institutions in Germany. The sense of exclusivity allows populists to develop a unique zeal and promote conspiracy theories about the political establishment and media, branding rational criticism as pro-establishment collusion.

AfD is not just an anti-immigrant populist party; it also embodies a broader "Neue Rechte" ("New Right") movement. This is supported by the fact that some of the AfD positions have garnered the "philosophical blessing" of German intellectuals, an expression used by Müller in his excellent 2016 essay about the AfD. Müller cites the intellectuals Marc Jongen, Peter Sloterdijk and Botho Strauß, but this list now needs to be extended to include the prominent history professor Rolf Peter Sieferle, who committed suicide in September of 2016 (one year before the election). His posthumously published and scandal-provoking book Finis Germania (alluding to the Latin phrase Finis Germaniae, which means "The End of Germany") became a best-seller in the months leading up to the 2017 election.

Sieferle was a respected professor of history and sociology, and thought of as a pioneer in studying environmental history. Finis Germania appears to have been written in the mid-1990s because it refers to the atrocities of the Nazis as having occurred 50 years prior. It is a short collection of mini-essays and aphorisms, grouped together in a handful of chapters. The tone is pessimistic and cynical, pointing towards a decline and likely collapse of German heritage and Germany. The most controversial passages revolve around Vergangenheitsbewältigung, a German word for processing and overcoming history. In Germany, Vergangenheitsbewältigung primarily refers to how Germany deals with its Nazi past. The Holocaust, the guilt of the Germans who participated in committing the atrocities and the historical responsibility (historische Verantwortung) that resulted from it for modern Germany are among the most extensively discussed topics in German school curricula and public intellectual discourse.

There is no denial of the Holocaust in the book. Sieferle uses the expressions "Verbrechen" (crime) and "Greueltaten" (atrocities) to describe the genocide committed by the Nazis, as was recently emphasized by Christopher Caldwell. However, Sieferle openly criticizes the style of contemporary Vergangenheitsbewältigung in which Germans are cast as perennial villains who need to demonstrate never-ending penance to atone for their collective guilt. Sieferle uses religious metaphors in which the Holocaust is compared to a new form of Erbsünde (literally translated as "inherited sin", but it is a German expression for the biblical original sin of Adam and Eve). Vergangenheitsbewältigung is likened to a new state religion which is meant to keep Germans docile.

There is no doubt that Sieferle's book touched a raw nerve with many Germans living today who feel that they are still held responsible for crimes committed by the Nazis. Any expression of German pride or patriotism is often self-scrutinized carefully to ensure that it in no way challenges Vergangenheitsbewältigung. Especially when interacting with non-Germans, Germans may consciously or subconsciously perceive themselves as being pigeon-holed as descendants of Nazi perpetrators. They go out of their way to prove that they are different from their parents or grandparents who may have lived during the Nazi era. Sieferle specifically contrasts Germans with Anglo-Americans who do not engage in self-flagellating Vergangenheitsbewältigung. A recent poll showed that 43% of British citizens are proud of their colonial past and do not feel shame for the atrocities of the British Empire. One example of such atrocities is the diversion of food from India in 1943 to feed British soldiers, authorized by Winston Churchill, which resulted in a famine that killed 4 million Indians.

A member of a book jury initially recommended the book because it would initiate a discussion about German history, and it quickly became a non-fiction best-seller. While there is no explicit Holocaust denial in the book, its subtext was seen as dallying with anti-Semitism. Modern-day anti-Semites cannot deny the Holocaust because the evidence for the atrocities is so overwhelming, but they instead try to cast Jews as post-war perpetrators who use the memory of the Holocaust as a means of suppressing dissent. Some passages of Finis Germania are ambiguous enough to provide fodder for anti-Semites. The massive popularity of a book that could potentially promote anti-Semitic ideas came as a shock to the German literary and intellectual establishments. But the rash reaction of the leading German magazine Der Spiegel to delete the book from its best-seller list turned a marginally intelligible book with fragmented ideas into a heroic anti-establishment tract. Bookshops refused to sell the book but it remained an Amazon best-seller, suggesting that the ban had not diminished its popularity. While some German writers and intellectuals supported the decision of Der Spiegel, others saw it as a form of censorship to suppress undesirable ideas.

How does this book about German history connect to the success of the AfD and the New Right movement? A second posthumously published Sieferle book also became a best-seller: Das Migrationsproblem ("The migration problem"). This book discusses the basic challenge that taking in large numbers of refugees or immigrants, all of whom would be eligible for welfare services, poses for a welfare state such as Germany, which aims to provide excellent housing, healthcare and food for all. The stability of the welfare state depends on a balance between workers who pay into the system and its beneficiaries. The book performs a semi-quantitative analysis and suggests that Germany cannot handle the influx of political and economic refugees without compromising its welfare state character. It also touches on the cultural differences between indigenous Germans and "tribal" refugees who hail from aggressive cultures. It is the combination of the two books' themes that may form the intellectual foundation for the success of the AfD: Finis Germania decries the culture of collective guilt which has led Germans to be so docile that they accept millions of refugees as their inherited burden even if it undermines their economy and culture.

The AfD has tried to avoid public discussions of Vergangenheitsbewältigung in order to escape accusations of anti-Semitism and has instead focused on Islam, immigrants or refugees. However, in a widely criticized speech in January of 2017, Björn Höcke – the leader of the AfD in Thuringia – referred to the Berlin Holocaust memorial as a "monument of shame". He suggested that German history was crippling contemporary Germans and that there was a need to re-think how Germans should handle their past. The federal AfD leadership was taken aback by these overt and public comments about a taboo topic and initiated a process to remove him from the party. However, Höcke remains an AfD member and has received support from many other AfD leaders. The success of the AfD suggests that his speech may have been an intentional ploy to link German frustration with collective guilt to voting for AfD as a means to escape from the burden of the past.

How should Germany move forward after the success of the AfD? As a Muslim German of South Asian descent, I am of course worried about the racist, anti-immigrant, anti-Muslim and populist rhetoric promoted by the AfD. There is no easy solution for how to deal with the rise of the far right, but we can glean insights from this election and from the success of far-right movements in the United States and other countries. Censoring or banning books that simply express unpleasant viewpoints is the wrong approach. Denouncing 13% of German voters as Nazis, fascists or "deplorables" would be equally wrong. Burying our heads in the sand and hoping that right-wing populism will just disappear would be folly. There is a sense of panic about the results of the German election but we can also see it as a wake-up call. Many countries have seen a rise in right-wing populist movements but the social and historical context of each movement is different and needs to be analyzed contextually. What is needed now is rational analysis followed by the required actions.

Those of us who believe in the German democratic institutions and the power of rational dialogue need to engage the citizens who voted for the AfD. One may agree or disagree with the positions of the AfD and its voters but this should not prevent meaningful dialogue. Concerns about the future of a welfare state with an imbalance between payers and beneficiaries are not unreasonable. The concerns revolving around immigration, refugees, the right to experience national pride and Vergangenheitsbewältigung should be addressed without condescension or throwing around insults and clichés. Another major concern voiced by AfD supporters is that key decisions about the future of Germany are made unilaterally by the government elites without engaging in a meaningful discussion with the electorate. Voters felt disempowered and ignored. This may also explain why the AfD received more than 20% of the vote in some parts of East Germany (the former DDR). Former DDR citizens wrested their freedom to vote and participate in public policy-making from a dictatorship less than 30 years ago, only to find that post-DDR Germany was also ignoring their opinions. The government and members of parliament have to learn how to meet citizens routinely so that they can listen to their concerns.

Condescension and hatred against the supporters of far right populist movements only strengthens them and their resolve to fight democratic pluralism. By peacefully and rationally engaging fellow citizens, Germany will be able to avoid the fate of the United States where a far right movement now controls the government. The historical responsibility of Germany lies in providing balance and reason in a world that could succumb to populism and chaos.

References:

Jan-Werner Müller. Was ist Populismus? Suhrkamp Verlag, 2016.

Rolf Peter Sieferle. Finis Germania. Verlag Antaios, 2017.

Rolf Peter Sieferle. Das Migrationsproblem. Manuscriptum Verlagsbuchhandlung, 2017.



Monday, September 25, 2017


The US and North Korea: Posturing v pragmatism

by Emrys Westacott

On September 19, Donald Trump spoke before the UN General Assembly. Addressing the issue of North Korea's nuclear weapons program, he said that the US, "if it is forced to defend itself or its allies, . . . will have no choice but to totally destroy North Korea." And of the North Korean leader Kim Jong-un, he said, "Rocket Man is on a suicide mission for himself and his regime."

There is nothing new about a US president affirming a commitment to defend the country and its allies. What is noteworthy about Trump's remarks is his cavalier talk of totally destroying another country, which implicitly suggests the use of nuclear weapons, and his deliberately insulting–as opposed to just criticizing–Kim Jong-un. He seems to enjoy getting down in the gutter with the North Korean leader, who responded in kind by calling Trump a "frightened dog" and a "mentally deranged dotard." Critics have noted that Trump's language is closer to what one expects of a strutting schoolyard bully than of a national leader addressing an august assembly. And one could ask interesting questions about the psychological make-up of both men that leads them to speak the way they do. From a moral and political point of view, though, the only really important question regarding Trump's behavior is whether or not it is sensible. Is it a good idea to threaten and insult Kim Jong-un?

As a general rule, the best way to evaluate any action, including a speech act, is pragmatically: that is, by its likely effects. This is not always easy. Our predictions about the effects of an action are rarely certain, and they are often wrong. Moreover, even if we agree that one should think pragmatically, most of us find it hard to stick to this resolve. How many parents have nagged their teenage kids even though they know that such nagging will probably be counterproductive? How many of us have gone ahead and made an unnecessary critical comment to a partner that we know is likely to spark an unpleasant and unproductive row? And if one happens to be an ignorant, impulsive narcissist, the self-restraint required to act pragmatically is probably out of reach. Which is worrying when one considers how high the stakes are in the verbal cockfight between Trump and Kim Jong-un.

There can be various motives behind issuing a threat. You could be signaling something to a third party (e.g. that you are a dangerous enemy, or a loyal friend). Or you could be looking to bolster your own confidence. But insofar as you are thinking about the party you are threatening, a threat is usually intended to have one of two consequences.

            1. Cause a conflict through provocation.

            2. Forestall a conflict by instilling fear.

Every reasonable person agrees that a war between the US and North Korea would be catastrophic. It could very easily and quickly lead to millions of deaths in both North and South Korea. Even if one is callous enough to discount the consequences to North Koreans, the proximity of Seoul to the border, with a population in the greater metropolitan area of 24 million, means that North Korea could almost certainly wreak havoc even if it only used conventional weapons.

So a threat that risks provoking a conflict is horribly irresponsible. Threatening Kim Jong-un is thus only sensible if it makes war less likely by instilling fear. Is it likely to do this? We don't know. To know how the recipient of a threat will react, you have to understand that person's mindset. But Kim Jong-un's mind is fairly opaque, at least to most Western observers.

In the absence of good information about what Kim Jong-un is really thinking, one naturally falls back on general principles that supposedly describe human nature, and therefore apply to him just as they apply to us all. Most people, it is said, are rational and self-interested; therefore, they won't normally act in ways that are wildly opposed to their own self-interest. And the same can reasonably be assumed of Kim Jong-un. But can it? The problem here is that the inference just drawn is invalid. From the fact that people are rational and self-interested, all that follows is that they won't usually act in ways that they perceive to be against their own interests. If their perceptions are mistaken, they may very well accidentally screw themselves over. So to predict how Kim Jong-un will react to Trump's threats, we need to know not what will actually benefit him, but what he thinks will. Which brings us back to the opacity problem.

Dictators' minds are often hard to fathom because their view of the world is likely to be somewhat distorted. There are several reasons for this.

·        They tend to be arrogant and hubristic, with an exaggerated sense of their own ability and power. (Think of Hitler invading the Soviet Union in June 1941 and six months later declaring war on the US.)

·        They are often suspicious going on paranoid, so they don't trust good information. (Think of Stalin in 1941 ignoring all those who told him that Hitler was about to attack.)

·        They are typically surrounded by sycophants who tell them only what it is assumed they want to hear.

These factors also explain why dictators often fail so spectacularly to do what would best serve their own interests. It's common for them to be portrayed, even by people who loathe them, as fiendishly shrewd and cunning. But this shrewdness is often quite narrow in scope. Plato got this right over two thousand years ago in the Republic. Tyrants, he argued, are pitiable rather than enviable. They think they know what is in their interest, but they don't.

Take Saddam Hussein, for example. He could have forestalled the US invasion of Iraq in 2003. But believing, in spite of overwhelming evidence to the contrary, that there would be no invasion, he chose not to quash once and for all reports that he had weapons of mass destruction. By the end of the year, his sons were dead and he was in prison awaiting trial and eventual execution.

Or consider the Syrian dictator Bashar al-Assad. Between 2000, when he assumed power, and 2010, he presided over a relatively stable country. But his brutal treatment of dissidents and his intransigent resistance to reform eventually led to a civil war which since 2011 has killed over 400,000 people, rendered millions homeless, and caused over five million people to flee the country, much of which now lies in ruins. Assad may be indifferent to the fate of ordinary Syrians, but his life and reputation would surely be much better had he pursued policies that didn't lead to this national disaster.

So one can't be certain that Kim Jong-un will only ever do what is (as opposed to what he believes to be) in his rational self-interest. For that reason, Trump's childlike posturing and sabre-rattling are fantastically irresponsible. The uncontroversial bedrock principle that should underlie and inform US policy is that war would be catastrophic. Just about anything would be preferable. With respect to US foreign policy, therefore, war would represent an absolute failure. But it is not clear that Trump grasps this. A recent tweet read: "Just heard Foreign Minister of North Korea speak at U.N. If he echoes thoughts of Little Rocket Man, they won't be around much longer!" The casualness of this reference to people not being around–that is, to them and countless others being wiped out through military action–is numbing.

On another occasion, Trump declared that when it comes to North Korea, "talking is not the answer." That is as wrong as it gets. The only acceptable long term future is one in which the parties involved sit down and negotiate. Instead of offering deliberately provocative threats and insults, the US should do the following:

·      State clearly that while they will always defend themselves and their allies, the US will not initiate any military action against North Korea.

·      Express a desire for an improved relationship between the US and North Korea, with the long-term goal of full diplomatic relations.

·      Propose, and work hard to achieve, both one-on-one talks between the US and North Korea, and multilateral talks involving South Korea, China, Japan and Russia to reduce military tension on the Korean peninsula.

·      Propose similar talks to explore the development of economic activity linking North Korea and other countries.

Some people will object that these proposals amount to rewarding North Korea's "bad" behavior. They argue instead for economic sanctions rather than talks, preferring sticks to carrots. It is possible they are right. Yet in the history of international relations, economic sanctions have rarely achieved their purpose. More importantly, in this case it is quite possible that tighter economic sanctions could make things worse–that is, make war more likely if they make the North Korean government feel weaker and more desperate. What matters isn't what someone "deserves," but what works. To say it again, when the stakes are so high, pragmatism should be the order of the day.



Monday, September 18, 2017


Donald Trump is no Leroy Jethro Gibbs

by Bill Benzon

Donald Trump, of course, is the forty-fifth President of the United States. He is a real person, but Leroy Jethro Gibbs is not. He is the central character in NCIS, one of the most popular and longest-running shows on network television. Gibbs is a Senior Special Agent in the Naval Criminal Investigative Service.

The Trump campaign is known to have targeted NCIS viewers. Why? What appeal would a show like NCIS have for Trump voters?

The honor to serve

Let’s look at a scene from an episode in season one, which started airing in 2003. The episode is called “One Shot, One Kill”. It opens in a video game arcade where some teen-aged boys are blown away by the skill of a Marine Corps sergeant. We cut to a recruiting office where the sergeant is giving the boys the hard sell about hitching up. He’s talking about Iraq: “Been in the corps 16 years. Closest I’ve ever come to a bullet is...” Shatter! Wham! Splatt! He’s shot. Slumps over on the desk.

Gibbs and his team are called in to investigate. In the course of their investigation the recruiter who replaces the first one is also shot. In both cases, sitting at the desk, shot from long distance, through the window. Sniper.

Gibbs decides he’s got to go undercover. He’ll pretend to be a Marine recruiter, which will be easy for him as he had once been a Marine. To protect Gibbs, bullet-proof glass is placed in the window and he wears a bullet-proof vest. Three microphones are placed outside so that, when the shooter fires, their Forensic Specialist, Abby Sciuto, can pick up the sound and use it to triangulate the shooter’s location. We then scoot over there and make the arrest.

We’re in the recruiting office, Gibbs looking sharp in his old Marine uniform. One of his Senior Agents, Kate Todd, is in uniform as a captain. She’s there to profile potential recruits as they visit the office. The major who heads the recruiting unit wants to stay; after all, he’s lost two men to this sniper. Gibbs objects. The major insists.

The matter is resolved (c. 34:39):

Gibbs: Major, your mission is to protect our country. Our mission right now is to protect you, and your marines. Allow us the honor of doing our job.
Major: Good luck Gunnery Sergeant.

That phrase – “the honor of doing our job” – may be the ideological and ethical heart of NCIS. I can see why it resonates with Trump voters. It resonates with me, and I voted for Hillary.

Gibbs knows that, in undertaking this assignment, he’s putting himself in harm’s way. He’s done that before, when he was a Marine sniper. He’s got his values. He serves a cause larger than himself. He serves his country, without hesitation.

NCIS is a show where values are clear and duty calls. Oh, there’s muck and murk along the way, but the fundamental moral structure of the universe is clear. It’s a show where the workplace is “like family.” People are loyal to and trust one another. Sure, there’s conflict and tension, plenty of bickering, but we’re all in this together. For country and family.

Everything in its place

We can see a bit of that family tension in a scene from the first episode of the second season, “The Good Wives Club”. Gibbs is onsite with his team, consisting of Senior Special Agent Tony DiNozzo, Special Agent Kate Todd, and Tim McGee, a probationary Special Agent who has just joined the team. Gibbs has just introduced the team to Lt. Commander Willis, head of security at a base where a woman’s dead body has been found. As Gibbs continues to talk with Willis we hear this conversation (1:36):

DiNozzo (talking to Todd): When Gibbs introduced us he introduced you, then McGee, then me. Why’d he mention me last?
Todd: You are kidding.
DiNozzo: No. For Gibbs to mix up the seniority order like that, just seems weird that’s all.
McGee: I don’t think it really means...
Dinozzo: Probie...
Todd: I wouldn’t put too much stock in it.
DiNozzo: Why do you say that?
Todd: Well, because I don’t think it has anything to do with seniority.
DiNozzo: What do you think it has to do with?
Todd: My guess would be level of intelligence and general competence.
DiNozzo holds up his hand and turns to McGee.
McGee just barely begins to speak: I didn’t say anything.
DiNozzo: It’s what yer thinking, Probie.

DiNozzo’s insecurity is a real but also a minor issue. Todd’s witty reply is a bit insulting, as it’s meant to be. This is playful banter. Their willingness to play around like this is evidence that they trust and respect one another.

The world of NCIS is one where seniority is important. It’s our world. And it’s a military world; though NCIS agents are civilians, the crimes they investigate involve the military. The fact that the show would devote time to a scene like this indicates the importance of order to the show: a place for everything, and everything in its place.

After that bit of conversation we shift to the crime scene and the show goes about its business. The crime is solved (the criminal, in this case, commits suicide) and the episode ends with this bit of banter (41:53):

Gibbs: DiNozzo, Kate, McGee. M-TAC now!
DiNozzo: DiNozzo, Kate, McGee. DiNozzo, Kate, McGee! [In order by seniority.]
Todd: Beatnik gone? [He’d just complained of a ‘beatnik’ playing bongos in his head.]
DiNozzo: Yeah.
Todd: Cool.

Think about that for a bit. At the beginning of the episode a senior agent gets his nose bent out of shape because his boss doesn’t introduce his team in order of seniority. That’s a minor matter. And that minor matter, nonetheless, gets resolved at the end of the show. That is very elegant writing, elegant craft.

In the large, a crime has been committed. A woman has been murdered, several women in fact. There is a breach in the social fabric; the world is out of order. Gibbs’s team is called in. They solve the case. Order restored, fabric repaired.

And that’s what crime shows in general are about, restoring order in the world after a crime has been committed. Moreover, many of the crimes in NCIS are acts of terrorism. The show premiered in September 2003, two years after 9/11, an event that put terrorism on everyone’s mind – much like the nuclear arms race was on everyone’s mind at the height of the Cold War in the 1950s and 1960s (remember ‘duck and cover’ and civil defense and fallout shelters?). THAT is surely an aspect of NCIS’s general appeal and one that would be especially appealing to Trump supporters, who are particularly anxious about the nation’s borders.

What’s particularly interesting about NCIS is that this dance of order and disorder is written into the show’s texture. DiNozzo’s status anxiety is a running motif in the show. Beyond that, consider the show’s three more or less intellectual characters: Medical Examiner Dr. Donald “Ducky” Mallard, NCIS Forensic Specialist Abby Sciuto, and Special Agent Tim McGee (MIT graduate and computer whiz). In a typical scene Gibbs will ask one of them What’s going on? They’ll start rambling on about this, that, and the other, mostly technical details, until Gibbs cuts them off and demands, What’s the point? He’s clearly annoyed and so, I strongly suspect, is the audience. I know I am.

The work they’re doing, their reasoning, is of course important in solving the crime. But Gibbs trusts them. He doesn’t need to know the details. He needs to know what to do next. The details just get in his way as he makes those decisions and takes those actions. When he cuts off their rambling and they cough up the goods, order is restored – not in the large, the crime at hand, but in the small, the texture of interaction between individuals. Of course, in bringing them to the point, Gibbs is also asserting his authority over them, the authority of the man of action over the rambling intellectual – another feature that presumably would be attractive to Trump voters (and to many others as well).

Personal interest vs. public duty

Let’s now turn our attention to a deeper issue. For it turns out that Gibbs himself has committed an egregious breach in the social fabric. When Gibbs was in the Marines his first wife and daughter were murdered by a Mexican drug-dealer. Gibbs in turn shot the drug-dealer, leaving an empty shell casing at the scene, and was never caught. Yes, the drug-dealer was a “bad hombre”; he deserved what he got. But, yes, Gibbs is guilty of murder.

Consider a scene in the next to the last episode, number 23, of season seven, “Patriot Down” [2]. Abby Sciuto, the Forensic Specialist, had been invited to Mexico to give a lecture. While there she was asked to look into a 20-year-old case involving a murdered drug dealer, the drug dealer Gibbs had murdered. She figured out that the shell casing must have come from Gibbs’ gun (don’t ask how, it’s complicated). We’re now in Abby’s lab. Gibbs has just consulted Abby on the current case and turns to leave. She prevents him from doing so (c. 36:33):

Abby Sciuto: The evidence in my report says that you killed Pedro Hernandez. And you're not even willing to talk to me about it.
Gibbs: I didn't think I needed to.
Abby: I owe you everything.
 You're Gibbs.
 No one needs to know the truth about the Hernandez investigation.
 I am willing to do anything for you.
 I just need you to tell me what to do.

Gibbs: No, you don't Abbs.
 I've only ever needed you to do one thing.

Abby: My job.
 But it's different this time. I mean it has to be, right?
Gibbs: No, it doesn't.
 [...]
Abby: Gibbs...
What do I do?

Gibbs: You send in the report to the task force. All of it.
Abby: I know. You shouldn't have to tell me, right?
Gibbs smiles and kisses her on the forehead.

That last gesture, Gibbs kissing Abby on the forehead, is that a blessing? He knows what will happen when that report is read by the Mexican officials. Abby was willing to protect him, but Gibbs told her not to.

Why, when Abby made it clear that she was willing to protect him, did Gibbs refuse her offer? The only answer that makes sense is that Gibbs respects the law and the institutions it represents more than he values his personal liberty. Yes, he had broken the law once, years ago when he was grief-stricken at the loss of his family. Since then he’s spent twenty years defending the law. He’s changed.

That kind of decision has deep roots in Western culture. In one of his early dialogues, the Crito, Plato tells how Socrates had been condemned to death. His friend Crito visits him in prison and explains that he has made arrangements for Socrates to escape. Socrates refuses, arguing that he lived his life within the Athenian state and that it is the laws of Athens that gave his actions meaning, even though he may have criticized the state. For him to run from the state even though it had condemned him unjustly would be to undermine the foundation of his life.

The same with Gibbs. He shot and killed a man. He felt that he was justified in doing so and, I suspect, most of the people in the show’s audience would have felt so as well. But he broke the law. He has now spent two decades of his life in service to that law. For Gibbs to ask Abby to spike her report would be to make a mockery of the law and therefore of the last two decades of his life.

Recall the center of that conversation:

Abby: I am willing to do anything for you.
 I just need you to tell me what to do.

Gibbs: No, you don’t Abbs.
 I've only ever needed you to do one thing.

Abby: My job.
 But it’s different this time. I mean it has to be, right?
Gibbs: No, it doesn’t.

Gibbs is in effect telling her that her duty to the law, to the Naval Criminal Investigative Service, to the country, outweighs, must outweigh, her personal loyalty to him – and personal loyalty counts for a lot in Gibbs’s world. But he doesn’t actually say anything like that. It is up to the viewer to understand that that’s what’s going on.

What would Donald do?

Now, let us ask: If Donald Trump were in a similar situation, what would he have done? Just look at his conduct toward the Justice Department over the investigation of L’Affaire Russe, as Lawfare’s Ben Wittes likes to call it [1]. In particular, consider his treatment of James Comey, how he tried to recruit Comey’s personal loyalty and how he fired him when Comey did the right thing and refused. I have little doubt that if Trump had been in Gibbs’s place, he’d have ordered Abby to falsify her report and destroy the evidence. Donald Trump is no Leroy Jethro Gibbs.

Donald Trump walks all over that crucial distinction between his personal interests and his duty to the country as President. That is what is at issue in his refusal to release his tax returns and his refusal to divest himself of his business interests. That is what is at issue in those tweets from his personal account where he leaks, demeans, rages, contradicts, preens, parades, and bloviates all over the public record 140 characters at a time. That is what is at issue in making his son-in-law – the young hot-shot real estate mogul who made a dumb deal for a property at 666 Fifth Avenue – Roving Ambassador Plenipotentiary and General Fixer-Upper for Just About Everything. The list goes on and on. As far as I can tell, Trump is in it for the money and the adulation. He makes no distinction between service to Trump and service to the United States of America.

Why, then, if the real Donald Trump is so very different from the fictional Leroy Jethro Gibbs, would anyone attracted to NCIS vote for Trump?

In this particular instance, Gibbs was not in any danger when he told Abby to perform her legal duty. We learn in the next episode that someone else intercepted that report before it could reach Mexico. Gibbs was free. But he still knows what he did, and so do Abby and that someone else.

Of course, the audience couldn’t have known in episode 23 that the report would be quashed in episode 24. Still, this is a television show and TV shows have unspoken rules. One of them is that the central figure always comes out “clean” in the end. And Gibbs did. Remember – I certainly did – that, even if he stepped outside the law, he was avenging the death of his wife and daughter. That loss itself was a motif that appeared from time to time in the series. That’s what makes this bit of fictional flimflamming emotionally and aesthetically acceptable – to Trump voters and, I admit it, to me as well.

As for the distinction between one’s interests as an individual person and one’s duties and responsibilities as a law enforcement officer, I submit that, unless the distinction is already deep in your blood, as it were, it’s all but invisible. While NCIS makes the distinction, it doesn’t talk about it very much. And that, explicit talk, is important. Without such talk we can respond to Gibbs’s sense of duty – as, for example, we saw it in the first scene we looked at – without really thinking about his ability to distinguish between his personal interests and his public duty. I fear that Trump supporters are either oblivious to that distinction or the emotional satisfaction they get from Trump’s various actions and pronouncements outweighs their concern about the separation of public and private interests. He uses their fear and anxiety to feed his megalomania.

God save the Queen!

By way of contrast let’s look at The Crown, an original Netflix series about Queen Elizabeth II of the United Kingdom. The first season starts with Elizabeth as a Princess and shows her transition to being Queen. The distinction between her acts and feelings as a private individual and her duties and responsibilities as the Queen is a major, if not THE major, theme of the season.

In episode five, “Smoke and Mirrors”, we have the coronation [3]. Here she is discussing the ceremony with her husband, Philip. He urges her to televise the ceremony. She resists for a while, then (c. 39:05):

Elizabeth: I'll support you in the televising.
Philip: You won't regret it.
Elizabeth: On one condition. That you kneel.
Philip: Who told you?
Elizabeth: My Prime Minister. He said you intended to refuse.
Philip: I merely asked the question. Whether it was right in this day and age that the Queen's consort, her husband, should kneel to her rather than stand beside her.
Elizabeth: You won't be kneeling to me.
Philip: That's not how it will look. That's not how it will feel. It will feel like a eunuch, an amoeba, is kneeling before his wife.
Elizabeth: You'll be kneeling before God and the Crown as we all do.
Philip: I don't see you kneeling before anyone.
Elizabeth: I’m not kneeling because I'm already flattened under the weight of this thing.
Philip: Oh, spare me the false humility. Doesn't look like that to me.
Elizabeth: How does it look to you?
Philip: Looks to me like you're enjoying it. It's released an unattractive sense of authority and entitlement that I have never seen before.
Elizabeth: And in you, it's released a weakness and insecurity I've never seen before.
Philip: Are you my wife or my Queen?
Elizabeth: I'm both.
Philip: I want to be married to my wife.
Elizabeth: I am both and a strong man would be able to kneel to both.
Philip: I will not kneel before my wife.
Elizabeth: Your wife is not asking you to.
Philip: But my Queen commands me?
Elizabeth: Yes.
Philip: I beg you make an exception for me.
Elizabeth: No.

This conversation, unlike the one between Gibbs and Abby, is quite explicit about the distinction between the private person and the public official. You can’t miss it.

For what it’s worth, The Crown was released on November 4, 2016, four days before the presidential election in the United States. I can’t help thinking that the release was timed to coincide with a likely win for Hillary Clinton, which would have made her the first female head of state of the United States. That did not happen. But the issue at the center of that show has turned out to be agonizingly relevant to the election's outcome.

This distinction, between personal interest and public duty, is of course not specific to heads of state. In the modern world it applies to all government officials at all levels, from the local cop on the beat, to the county clerk, the mayor, the head of the transportation department (city, county, and state), the governor, and right back up through Congressmen and Senators, Cabinet officers, and the President and Vice-President. All of them, everyone, are enjoined from using their public positions for private purposes. Many of them fail in that duty, sometimes in minor, sometimes in major ways. That’s what corruption is, the use of one’s public position for private gain.

Nor is corruption merely a public offense. It is anathema, if all too common, in the world of private business as well. People working for corporations are not supposed to give favors to personal acquaintances.

The institutions of our society are built on that important and very fragile distinction. Your interests as a private individual must be kept separate from your duties as an employee of an organization, public or private. That is why Trump’s behavior is so egregious and so dangerous. He is a threat to The Constitution itself, not simply in this or that particular, but to its very existence. He undermines the rule of law.

He is no Leroy Jethro Gibbs. He is no Queen Elizabeth II. He’s an insecure bully from Queens who was born on third base and thinks he hit a home run. He is not worthy of the men and women who voted for him and he demeans and diminishes those who, in a desire to serve their country, serve directly under him.

He should be impeached. Only then can we restore honor and dignity to the Federal Government. Only then can we restore America’s place in the world.

* * * * *

[1] Lawfare is essential reading for anyone concerned about the vicissitudes of the Trump Presidency. Look at the items filed under Donald Trump. Concerning the point at issue in this essay, the distinction between the private individual and the public official, I recommend Benjamin Wittes and Quinta Jurecic, What Happens When We Don’t Believe the President’s Oath?, and Quinta Jurecic, Body Double: What Medieval Executive Theory Tells Us About Trump’s Twitter Accounts.

[2] For the dialog from this episode I used a transcript by bunniefuu from Forever Dreaming.

[3] For this dialog I used a transcript from Springfield! Springfield!

* * * * *

I have a number of posts at New Savanna about Trump and about NCIS.

Posted by Bill Benzon at 12:05 AM | Permalink | Comments (0)


Monday, August 07, 2017


How to thrive as a fox in a world full of hedgehogs

by Ashutosh Jogalekar

The Nobel Prize-winning animal behaviorist Konrad Lorenz once said about philosophers and scientists, “Philosophers are people who know less and less about more and more until they know nothing about everything. Scientists are people who know more and more about less and less until they know everything about nothing.” Lorenz had good reason to say this since he worked in both science and philosophy. He shared his Nobel Prize for Physiology or Medicine with two fellow zoologists, and the three of them remain the only zoologists to have won it. His major work was in investigating aggression in animals, work that was found to be strikingly applicable to human behavior. But Lorenz’s quote can also be said to be an indictment of both philosophy and science. Philosophers are the ultimate generalists, scientists are the ultimate specialists.

Specialization in science has been a logical outgrowth of its great progress over the last five centuries. At the beginning, most people who called themselves natural philosophers – the word scientist was only coined in the 19th century – were generalists and amateurs. The Royal Society, which was established in 1660, was a bastion of generalist amateurs. It gathered together a motley crew of brilliant tinkerers like Robert Boyle, Christopher Wren, Henry Cavendish and Isaac Newton. These men would not recognize the hyperspecialized scientists of today; between them they were lawyers, architects, writers and philosophers. Today we would call them polymaths.

These polymaths helped lay the foundations of modern science. Their discoveries in mathematics, physics, chemistry, botany and physiology were unmatched. They cracked open the structure of cells, figured out the constitution of air and discovered the universal laws governing motion. Many of them were supported by substantial hereditary wealth, and most of them did all this on the side, while they were still working their day jobs and spending time with their families. The reasons these gentlemen (sadly, there were no ladies then) of the Royal Society could achieve significant scientific feats were manifold. Firstly, the fundamental laws of science still lay undiscovered, so the so-called “low-hanging fruit” of science was ripe and plentiful. Secondly, doing science was cheap then; all Newton needed to figure out the composition of light was a prism.

But thirdly and most importantly, these men saw science as a seamless whole. They did not distinguish much between physics, chemistry and biology, and even when they did they did so for the sake of convenience. In fact their generalist view of the world was so widespread that they didn’t even have a problem reconciling science and religion. For Newton, the universe was a great puzzle built by God, to be deciphered by the hand of man, and the rest of them held similar views.

Fast forward to the twentieth century, and scientific specialization was rife. You could not imagine Werner Heisenberg discovering genetic transmission in fruit flies, or Thomas Hunt Morgan discovering the uncertainty principle. Today science has become even more closeted into its own little boxes. There are particle astrophysicists and neutrino particle astrophysicists, cancer cell biologists, organometallic chemists and geomicrobiologists. The good gentlemen of the Royal Society would have been both fascinated and flummoxed by this hyperspecialization.

There is a reason why specialization became the order of the day from the seventeenth century onwards. Science simply became too vast, its tendrils reaching deep into specific topics and sub-topics. You simply could not flit from topic to topic if you were to understand something truly well and make important discoveries in the field. If you were a protein crystallographer, for instance, you simply had to spend all your time learning about instrumentation, protein production and software. If you were a string theorist, you simply had to learn pretty much all of modern physics and a good deal of modern mathematics. Studying any topic in such detail takes time and effort and leaves no time to investigate other fields. The rewards from such single-minded pursuit are usually substantial: satisfaction from the deep immersion that comes from expertise, the enthusiastic adulation of your peers, and potential honors like the Nobel Prize. There is little doubt that specialization has provided great dividends for its practitioners, both personal and scientific.

And yet there were always holdouts, men and women who carried on the tradition of their illustrious predecessors and left the door ajar to being generalists. Enrico Fermi and Hans Bethe were true generalists in physics, and Fermi went a step further by becoming the only scientist of the century who truly excelled in both theory and experiment; he would have made his fellow countryman Galileo proud. Then there was Linus Pauling who mastered and made seminal contributions to quantum chemistry, organic chemistry, biochemistry and medicine. John von Neumann was probably the ultimate polymath in the tradition of old natural philosophers, contributing massively to every field from pure mathematics and economics to computing and biology.

These polymaths not only kept the flame of the generalist alive, but they also anticipated, ironically, science coming full circle. The march of science from the seventeenth to the twentieth century might have been one toward increasing specialization, but in the last few years we have seen generalist science blossoming again. Why is this? Simply because the most important and fascinating scientific questions we face today require the melding of ideas from different fields. For instance: What is consciousness? What is life? How do you combat climate change? What is dark energy? These questions don’t just benefit from an interdisciplinary approach; they require it. Now, the way modern science approaches these questions is to bring together experts from various fields rather than relying on a single person who is an expert in all the fields. The Internet and global communication have made this kind of intellectual cross-pollination easier. And yet I would contend that there is a loss of insight when people keep excelling in their chosen fields and simply funnel the output of their efforts to other scientists without really understanding how it is used. In my own field of drug discovery, for instance, I have found that people who at least have a conceptual understanding of other areas are far more likely to contribute useful insights compared to those who simply do their job well and shove the product on to the next step of the pipeline.

I thus believe there is again a need for the kind of generalist who dotted the landscape of scientific research two hundred years ago. Both the poet Archilochus and the philosopher Isaiah Berlin have fortunately given us the right vocabulary to describe generalists and specialists. The fox, wrote Archilochus, knows many things while the hedgehog knows one big thing. Generalists are foxes; specialists are hedgehogs.

The history of science demonstrates that both foxes and hedgehogs are necessary for its progress. But history also shows that foxes and hedgehogs can alternate. In addition there are fields like chemistry which have always benefited more from foxes than hedgehogs. Generally speaking, foxes are more important when science is theory-rich and data-poor, while hedgehogs are more important when science is theory-poor and data-rich. The twentieth century was largely the century of hedgehogs while the twenty-first is likely to be the century of foxes.

Being a fox is not very easy though. Both personal and institutional forces in science have been built to support hedgehogs. You can mainly blame human resources personnel for contriving to make the playing field more suitable for these creatures. Consider the job descriptions in organizations. We want an “In vivo pharmacologist” or “Soft condensed matter physicist”, the job listing will say; attached would be a very precise list of requirements – tiny boxes within the big box. This makes it easier for human resources to check all the boxes and reject or accept candidates efficiently. But it makes it much harder for foxes who may not fit precise labels but who may have valuable insights to contribute to make it past those rigid labels. Organizations thus end up losing fine, practical minds who pay the price for their eclectic tastes. Academic training is also geared toward producing hedgehogs rather than foxes, and funding pressures on professors to do very specific kinds of research do not make the matter any easier. In general, these institutions create an environment in which being a fox is actively discouraged and in which hedgehogs and their intellectual children and grandchildren flourish.

As noted above, however, this is a real problem at a time when many of the most important problems in science are essentially interdisciplinary and would greatly benefit from the presence of foxes. But since institutional strictures don’t encourage foxes to ply their trade, they also by definition do not teach the skills necessary to be a fox. Thus the cycle perpetuates; institutions discourage foxlike behavior so much that the hedgehogs don’t even know how to be productive foxes even if they want to, and they in turn further perpetuate hedgehogian principles.

Fortunately, foxes in the past and present have provided us with a blueprint of their behavior. The essence of foxes is generalist behavior, and there are some commonsense steps one can take to inculcate these habits. Based on both historical facts about generalists as well as, well, general principles, one can come up with a kind of checklist on being a productive fox in an urban forest full of hedgehogs. This checklist draws on the habits of successful foxes as well as recent findings from both the sciences and the humanities that allow for flexible and universal thinking that can be applied not just in different fields but especially across their boundaries. Here are a few lessons that I have learnt or read about over the years. Because the lessons are general, they are not confined to scientific fields.

1. Acknowledge psychological biases.

One of the most striking findings over the last three decades or so, exemplified by the work of Amos Tversky, Daniel Kahneman, Paul Slovic and others, is the tendency of human beings to make the same kinds of mistakes when thinking about the world. Through their pioneering research, psychologists have found a whole list of biases like confirmation bias, anchoring effects and representativeness that dog our thinking. Recognizing these biases doesn’t just help connect ideas across various disciplines but also helps us step back and look at the big picture. And looking at the big picture is what foxes need to do all the time.

2. Learn about statistics.

A related field of inquiry is statistical thinking. In fact, many of the cognitive biases which I just mentioned arise from the fundamental inability of human beings to think statistically. Basic statistical fallacies include: extrapolating from small sample sizes, underestimating or ignoring error bars, putting undue emphasis on rare but dramatic effects (think terrorist attacks), inability to think across long time periods and ignoring baselines. In an age when the news cycle has shrunk from 24 hours to barely 24 seconds of our attention span, it’s very easy to extrapolate from random, momentary exposure to all kinds of facts, especially when the media’s very existence seems to depend on dramatizing or exaggerating them. In such cases, stepping back and asking oneself some basic statistical questions about every new fact can be extremely helpful. You don't have to actually be able to calculate p values and confidence intervals, but you should know what these are.
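To make the small-sample fallacy concrete, here is a minimal sketch in Python (nothing beyond the standard library is assumed): it repeatedly "samples" a perfectly fair coin and shows how often tiny samples produce the kind of dramatic-looking result that tempts us to extrapolate.

```python
import random
import statistics

# Toy illustration of the small-sample fallacy: a perfectly fair coin
# (true rate 0.5) often looks strongly "biased" when sampled only a few times.
random.seed(42)

def observed_rate(n_flips):
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

for n in (5, 50, 5000):
    estimates = [observed_rate(n) for _ in range(1000)]
    spread = statistics.stdev(estimates)
    dramatic = sum(r >= 0.7 or r <= 0.3 for r in estimates) / len(estimates)
    print(f"n={n:5d}  spread of estimates={spread:.3f}  "
          f"share of 'dramatic' results={dramatic:.2%}")
```

With five flips, "striking" deviations from 50% turn up constantly; with thousands, they all but vanish – which is exactly the intuition the statistical habit of mind is meant to build.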

3. Make back-of-the-envelope calculations.

When the first atomic bomb went off in New Mexico in July 1945, Enrico Fermi famously threw a few pieces of paper into the air and, based on where the shockwave scattered them, came up with an accurate estimate of the bomb’s yield. Fermi was a master of the approximate calculation, the rough, order-of-magnitude estimate that would give the right ballpark answer. It’s illuminating how that kind of estimation can help focus our thinking, no matter what field we may be dealing with. Whenever we encounter a fact that would benefit from estimating a number, it’s worth applying Fermi’s method to find a rough answer. In most cases it’s good enough.
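Here is a hedged sketch of what such a back-of-the-envelope calculation looks like in practice, using the classic "piano tuners in a city" example; every number in it is an assumption chosen only for plausibility, and the point is the order of magnitude rather than the answer.

```python
# A back-of-the-envelope (Fermi) estimate: rough, order-of-magnitude inputs
# multiplied together. Every number below is an illustrative assumption.
def fermi_piano_tuners():
    city_population = 3_000_000           # assume a city of a few million people
    people_per_household = 2.5
    households_with_piano = 1 / 20        # assume 1 in 20 households owns a piano
    tunings_per_piano_per_year = 1
    tunings_per_tuner_per_year = 4 * 250  # ~4 tunings a day, ~250 working days

    pianos = city_population / people_per_household * households_with_piano
    yearly_demand = pianos * tunings_per_piano_per_year
    return yearly_demand / tunings_per_tuner_per_year

print(f"Rough estimate: ~{fermi_piano_tuners():.0f} piano tuners")
```

Whether the true number is 30 or 120 hardly matters; what the exercise buys you is confidence that it is not 5 and not 5,000.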

4. Know your strengths and weaknesses.

As the great physicist Hans Bethe once sagely advised, “Always work on problems for which you possess an undue advantage.” We are always told that we should work on our weaknesses, and this is true to some extent. But it’s far more important to match the problems we work on with our particular strengths, whether they lie in calculation, interdisciplinary thinking or management. Leveraging your strengths to solve a problem is the best way to avoid getting bogged down in one place and to nimbly jump across several problems like a fox. Hedgehogs often spend their time not just honing their strengths but working on their weaknesses; this is an admirable trait, but it’s not always optimal for working across disciplinary boundaries.

5. Learn to think at the emergent level that’s most useful for every field.

If you have worked in various disciplines long enough, you start realizing that every discipline has its own zeitgeist, its own way of doing things. It’s not just about learning the technical tools and the facts, it’s about knowing how to pitch your knowledge at a level that’s unique and optimal for that field. For instance, a chemist thinks in terms of molecules, a physicist thinks in terms of atoms and equations, an economist thinks in terms of rational individuals and a biologist thinks in terms of genes or cells. That does not mean a chemist cannot think in terms of equations or atoms, but that is not the most useful level of thinking to apply to chemistry. This matching of a particular brand of thinking to a particular field is an example of emergent thinking. The opposite of emergent thinking is reductionist thinking which breaks down everything into its constituent parts. One of the discoveries of science in the last century is the breakdown of strict reductionism, and if one wants to be a productive fox, he or she needs to learn the right level of emergent thinking that applies to a field.

6. Read widely outside your field, but read just enough.

If you want to become a generalist fox, this is an obvious suggestion, but because it’s obvious it needs to be reiterated. Gaining knowledge of multiple fields entails knowing something about those fields, which entails reading about them. But it’s easy to get bogged down in detail and to try to become an expert in every field. This goal is neither practical nor the correct one. The goal instead is to gain enough knowledge to be useful, to be able to distill general principles, to connect ideas from your field to others. Better still, talk to people. Ask experts what they think are the most important facts and ideas, keeping in mind that experts have their own biases and can reach different conclusions.

A great example of someone who learnt enough about a complementary field to not just be useful but very good at his job was Robert Oppenheimer. Oppenheimer was a dyed-in-the-wool theorist, and at first had little knowledge of experiment. But as one of his colleagues said,

“He began to observe, not manipulate. He learned to see the apparatus and to get a feeling of its experimental limitations. He grasped the underlying physics and had the best memory I know of. He could always see how far any particular experiment would go. When you couldn’t carry it any further, you could count on him to understand and to be thinking about the next thing you might want to try.”

Oppenheimer thus clearly learnt enough about experimental physics to know the strengths and limitations of the field, imparting another valuable piece of advice: know the strengths and limitations of every field at the very least, so you know whether the connections you are forming are within its purview. In other words, know the domain of applicability of every field so that you can form reasonable connections.

7. Learn from your mistakes, and from others.

If you are a fox trying to jump across various disciplinary boundaries, it goes without saying that you might occasionally stumble. Because you lack expertise in many fields you are likely to make mistakes. This is entirely understandable, but what’s most important is to acknowledge those mistakes and learn from them. In fact, making mistakes is often the best shortcut to quick learning (“Fail fast”, as they say in the tech industry). Learning from our mistakes is of course important for all of us, but especially so for foxes who are often intrinsically dealing with incomplete information. Make mistakes, revise your worldview, make new mistakes. Rinse and repeat. That should be your philosophy.

Parallel to learning from your mistakes is to learn from others. During her journey a fox will meet many interesting people from different fields who know different facts and possess different mental models of thinking about the world. Foxlike behavior often entails being able to flexibly use these different mental models to deal with various problems in different fields, so it’s key to keep on being a lifelong learner of these patterns of thought. Fortunately the Internet has opened up a vast new opportunity for networking, but we don’t always take advantage of this opportunity in serious, meaningful ways. Everyone will benefit from such deliberate, meaningful connections, but foxes in particular will reap rewards.

8. “The opposite of a big truth is also a big truth” – Niels Bohr

The world is almost always gray. Foxes must imbibe this fact as deeply as Niels Bohr imbibed quantum mechanics. Especially when you are encountering and trying to integrate disparate ideas from different fields, it’s very likely that some of them may seem contradictory. But often the contradiction is in our minds, and there’s actually a way to reconcile those ideas (as a general rule, only in the Platonic world of mathematics can contradictory ideas not be tolerated at all). The fact is that most ideas from the real world are fuzzy and ill defined, so it’s no surprise that they will occasionally run into each other. Not just ideas but patterns of thinking may seem contradictory; for example, what a biologist sees as the most important feature of a particular system may not be the most important feature for a physicist (emergence again). In most cases the truth lies somewhere in between, but in others it may lie wholly on one side. As they say, being able to hold opposite ideas in your mind at the same time is a mark of intelligence. If you are a fox, prove this.

These are but a few of the potential avenues that you can explore for being a generalist fox. But the most important principle that foxes can benefit from is, as the name indicates, general. When confronted by an idea, a system or a problem, learn to ask the most general questions about it, questions that flow across disciplines. A few of these questions in science are: What’s the throughput? How robust is the system? What are the assumptions behind it? What is the problem that we are trying to solve? What are its strengths and limitations? What kinds of biases are baked into the system and our thinking about it?

Keep on asking these questions, make a note of the answers and you will realize that they can be applied across domains. At the same time, remember that as a fox you will always work in tandem with specialized hedgehogs. Foxes will be needed to explore the uncharted territory of new areas of science and technology, hedgehogs will be needed to probe its corners and reveal hidden jewels. The jewels will further reflect light that will illuminate additional playgrounds for the foxes to frolic in. Together the two creatures will make a difference.

Posted by Ashutosh Jogalekar at 12:40 AM | Permalink | Comments (0)


Monday, July 24, 2017


In Favor of Small States – Are Meganations too Big to Succeed?

by Bill Benzon

One of the most interesting effects of the Trump presidency has been the response various cities and states have had to the Trump administration’s blindness to global warming: They have decided to bypass the federal government and go their own way on climate policy, even to the point of dealing with other nations. Thus Bill McKibben states, in “The New Nation-States”:

The real test will come in September next year, when “subnational” governments from around the world gather in California to sign the “Under2 MOU,” an agreement committing them to uphold the Paris targets. Launched in 2015 by California and the German state of Baden-Württemberg, the movement now includes everyone from Alsace to Abruzzo to the Australian Capital Territory; from Sichuan to Scotland to South Sumatra; from Manchester City to Madeira to Michoacán. Altogether: a billion people, responsible for more than a third of the world’s economic output. And every promise they make, sincere or not, provides climate activists with ammunition to hold each government accountable.

Moreover, the number of articles reporting on the weakening of the nation-state as a form of government seems on the rise – I link to a number of them at my home blog, New Savanna.


Thomas H. Naylor, September 14, 2012

This would not be surprising to the late Thomas Naylor, a scholar and activist who taught economics at Duke University, Middlebury College, and the University of Vermont and who, as a consultant, advised major corporations and governments in over 30 countries. Naylor believed that nations such as the United States were too large to govern effectively and so should devolve into several smaller states. I am presently working with his estate to edit a selection of his papers and am reprinting one of them below. He completed it on December 3, 2012, a few days before he died from a stroke.

Secession Fever Spreads Globally

We should devote our efforts to the creation of numerous small principalities throughout the world, where people can live in happiness and freedom. The large states… must be convinced of the need to decentralize politically in order to bring democracy and self-determination into the smallest political units, namely local communities, be they villages or cities.
–Hans-Adam II, Prince of Liechtenstein, The State in the Third Millennium

Since the re-election of Barack Obama on November 6, 2012, over one million Americans have signed petitions on a White House website known as “We the People” calling for the secession of their respective states from the Union. Contrary to the view expressed by many politically correct liberals, this is not merely a knee-jerk, racist reaction of some Tea Party types to the re-election of Obama, but rather it is part of a well-defined trend. Today there are, in fact, 250 self-determination and political-independence movements in play worldwide, including nearly 100 in Europe alone, over 70 in Asia, 40 in Africa, 30 or so in North America, and 15 to 20 on various islands scattered around the world. We could be on the brink of a global secession pandemic!

We live in a meganation world under the cloud of Empire, the American Empire. Fifty-nine percent of the people on the planet now live in one of the eleven nations with a population of over one hundred million people. These meganations in descending order of population size include China, India, USA, Indonesia, Brazil, Pakistan, Nigeria, Bangladesh, Russia, Japan, and Mexico. Extending the argument one step farther, we note that twenty-five nations have populations in excess of 50 million and that seventy-three percent of us live in one of those countries.

Most of these meganations have highly centralized relatively undemocratic governments such as is the case with the United States, China, and Russia. The United States is an autocracy disguised as a democracy but controlled by Wall Street, Corporate America, and various foreign interests. While pretending to be a democracy, the U.S. engages in the rendition of terrorist suspects, prisoner abuse and torture, the suppression of civil liberties, citizen surveillance, full spectrum dominance, and imperial overstretch. Its president has even granted himself the authority to order the assassination of anyone, anywhere, anytime, with no questions asked, no trial, no due process – just pure law of the jungle.

In addition, since the end of World War II a plethora of highly centralized, undemocratic international megainstitutions have evolved to deal with such issues as national security, peacekeeping, international finance, economic development, and international trade. They include the United Nations, the World Trade Organization, the World Bank, the International Monetary Fund, the European Union, and NATO. What these institutions have in common is not that they are too big to fail, rather they are too big to fix.

No doubt the implosion of the Soviet Union in 1991 and the breakup of Yugoslavia have contributed to the self-determination dynamic in Europe. Active separatist movements can now be found in Bavaria, Belgium, Bulgaria, England, Italy, Lapland, Poland, Romania, Scotland, and Spain. The situation has been exacerbated by the stagnant European economy, the fall of the euro, and increasing doubts about the European Union itself.

Scotland (U.K.), Flanders (Belgium), and Catalonia (Spain) are the most high-profile self-determination movements in Europe. The Scottish National Party has called for a 2014 referendum on Scottish independence. Recent elections in Catalonia provided additional momentum for a near-term referendum on Catalan self-determination. Last year Belgium went 535 days without a properly elected leader because of the toxicity in the relationship between the wealthier Dutch-speaking Flemish majority and the poorer French-speaking Walloon minority.

In Asia, Bangladesh, China, Myanmar (with twelve movements), India, Indonesia, Japan, and Pakistan all have political independence movements. Hong Kong, Tibet, and Xinjiang are the best-known self-determination movements in China. Kurdish separatists can be found in Iraq, Turkey, and Iran. Indonesia granted East Timor its independence several years ago and also reached an agreement with Aceh that led the province to drop its claim for self-determination; the separatist movement there eventually dissolved. India is also awash with separatist movements even though secession is illegal there.

Hundreds of African tribes are trying to shake off artificial boundaries imposed on them by nineteenth-century European colonialism. Igbo, Ijaw, Ogoni, and Yoruba are all separatist movements located in Nigeria. Sudan recently split into two parts.

For reasons which are not entirely clear, there seems to be less interest in Latin America in self-determination and political independence than in any other part of the world. Although there are a half dozen or so separatist movements in Brazil such as the City of São Paulo, the United States of Northeast, and Rio Grande do Sul, one does not have the impression that any of these groups are going anywhere. The one exception to the rule in Latin America is the Zapatista movement in the State of Chiapas in Mexico, the poorest state in the country. Since the 1990s, under the leadership of Subcomandante Marcos and the Zapatista Army of National Liberation (EZLN), the Zapatistas have sought to transform Chiapas into an autonomous self-governing region which supports the political rights of Mexico’s native Indian population.

After a near-miss in its 1995 referendum to achieve independence from Canada, the Quebec separatist movement fell into the doldrums for over 15 years. However, in September 2012 the Parti Québécois won a victory of sorts in the Quebec provincial election and was able to put together a weak coalition government. The stability of the new government remains somewhat in doubt. There are also self-determination movements in Alberta and British Columbia.

As for the United States, for over twenty years I have argued that it was too big to manage and should be broken up. On October 9, 1990, three years before I moved to Vermont, the Bennington Banner published my piece entitled “Should the U.S. Be Downsized?” In 1997 William H. Willimon and I published Downsizing the U.S.A., which called for Vermont independence, and the peaceful dissolution of the American Empire. We argued that not only was the U.S. government too big, but that it had become too centralized, too powerful, too undemocratic, too militaristic, too imperialistic, too materialistic, and too unresponsive to the needs of individual citizens and small communities. However, since we were in the midst of the greatest economic boom in history, few Americans were interested in downsizing anything. The name of the game was “up, up, and away.” Only bigger and faster were thought to be better.

Prior to September 11, 2001, my call for Vermont self-determination and dissolution of the Empire fell mostly on deaf ears. It was as though I were speaking to an audience of one, namely myself. But George W. Bush’s ill-conceived, myopic, militaristic response to 9/11 created a window of opportunity to broach the subject of Vermont independence with left-leaning libertarians who might be receptive to the idea. Against the backdrop of the 2003 war with Iraq, we launched the Second Vermont Republic on October 11, 2003.

The Second Vermont Republic is a nonviolent citizens’ network and think tank committed to (1) the peaceful breakup of meganations such as the United States, Russia, and China; (2) the political independence of breakaway states such as Quebec, Scotland, and Vermont; and (3) a strategic alliance with other small, democratic, nonviolent, affluent, socially responsible, cooperative, egalitarian, sustainable, ecofriendly nations such as Austria, Finland, and Switzerland which share a high degree of environmental integrity and a strong sense of community.

There are four reasons why supporters of SVR want to secede: First, the U.S. Government has lost its moral authority. It is owned, operated, and controlled by Wall Street, Corporate America, and the Likud Government of Israel. Second, the U.S. is unsustainable economically, environmentally, socially, morally, and politically. Third, it is too big to govern as is illustrated by Congressional gridlock. Fourth, it is, therefore, unfixable. Few Vermonters are enthralled by a White House that is obsessed with drones, death squads, F-35s, and kill lists.

By the time George W. Bush left office in 2009, there were at least 30 separatist movements in the United States. No doubt the secession petition drive has injected new life into all of these self-determination movements. The secession petition for Texas alone contains over 120,000 signatures. A dozen or so of the state petitions have over 25,000 signatures, the number required to trigger a White House response.

Could it be that Americans have not only rediscovered the right of self-determination but also the American Declaration of Independence as well? “Whenever any form of government becomes destructive…it is the right of the people to alter or to abolish it, and to institute a new government.” Alteration and abolishment include the right to disband, or subdivide, or withdraw, or create a new government.

So how is it possible that on the one hand there are nearly a dozen highly centralized meganations whose populations are spiraling upwards, while simultaneously over 250 self-determination movements worldwide aspire to split off from megastates such as China, India, Russia, and the United States?

Strange as it may seem, the field of thermodynamics may shed some light on the issue, notwithstanding the fact that I considered it to be the most obscure subject I ever studied when I was a student in the Columbia University School of Engineering back in the late 1950s.

According to the second law of thermodynamics, heat will always flow only from a hotter object to a colder object. More generally, the direction of spontaneous change in isolated systems of all sorts is always toward maximum disorder, and the measure of that disorder is known as entropy. Therefore, it is hardly surprising that large, highly centralized, undemocratic nations such as the United States, China, Russia, and India are starting to come unglued at the seams and will eventually descend into chaos.
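For readers who want the textbook statement behind the analogy, the second law for an isolated system and Boltzmann's definition of entropy are usually written as follows (the application to nation-states is, of course, Naylor's metaphor rather than physics):

```latex
\Delta S_{\text{isolated}} \ge 0, \qquad S = k_B \ln \Omega
```

Here S is the entropy, k_B is Boltzmann's constant, and Ω counts the microscopic configurations consistent with the system's macroscopic state; more configurations means more disorder.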

The economic, financial, social, and political implications of all of this disorder could prove to be staggering. It could also unleash an unprecedented burst of freedom, energy, creativity, and productivity.

We are truly entering uncharted waters. Past trends are meaningless. There are no books or articles available to tell one how to navigate one’s ship through the turbulence created by a sea of secession movements.

Posted by Bill Benzon at 12:10 AM | Permalink | Comments (0)


Monday, July 17, 2017


Optimizing Ourselves into Oblivion

by Jalees Rehman

The short story "Anekdote zur Senkung der Arbeitsmoral" ("An anecdote about the lowering of work ethic") is one of the most famous stories written by the German author Heinrich Böll. In the story, an affluent tourist encounters a poorly clad fisherman who is comfortably napping in his boat. The assiduous tourist accidentally wakes up the fisherman while taking photos of the peaceful scenery – blue sky, green sea, fisherman with an old-fashioned hat – but then goes on to engage the lounging fisherman in a conversation. The friendly chat gradually turns into a sermon in which the tourist lectures the fisherman about how much more work he could be doing, how he could haul in more fish instead of lazing about, use the profits to make strategic investments, perhaps even hire employees and buy bigger boats in a few years. To what end, the fisherman asks. So that you could peacefully doze away at the beach, enjoying the beautiful sun without any worries, responds the enthusiastic tourist.

I remembered Böll's story, which was written in the 1960s – during the post-war economic miracle years (Wirtschaftswunder) when prosperity, efficiency and growth had become the hallmarks of modern Germany – while recently reading the book "Du sollst nicht funktionieren" ("You were not meant to function") by the German author and philosopher Ariadne von Schirach. In this book, von Schirach criticizes the contemporary obsession with Selbstoptimierung (self-optimization), a term that has been borrowed from network theory and computer science, where it describes systems which continuously adapt and "learn" in order to optimize their function. Selbstoptimierung is now used in a much broader sense in German culture and refers to the desire of individuals to continuously "optimize" their bodies and lives with the help of work-out regimens, diets, self-help courses and other processes. Self-optimization is a routine learning process that we all engage in. Successful learning of a new language, for example, requires continuous feedback and improvement. However, it is continuous self-optimization as the ultimate purpose of life, instead of merely as a means to an end, that worries von Schirach.
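For what it's worth, here is a minimal sketch (in Python, purely illustrative, with made-up numbers) of what "self-optimization" means in the technical sense the term was borrowed from: a system that measures its error against a target and repeatedly adjusts itself to shrink that error.

```python
# A feedback loop in the computer-science sense of "self-optimization":
# the system measures its error against a target and repeatedly adjusts
# itself to reduce that error. All values are illustrative.
target = 10.0
value = 0.0
learning_rate = 0.3

for step in range(20):
    error = target - value
    value += learning_rate * error   # adapt in response to feedback

print(round(value, 3))  # has converged close to the target
```

The irony von Schirach points to is that people are now asked to run exactly this loop on themselves, indefinitely, with the target often set by someone else.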

She draws on many examples from Körperkult (body-cult), a slavish worship of the body that gradually replaces sensual pleasure with the purpose of disciplining the body. Regular exercise and maintaining a normal weight are key factors for maintaining health, but some individuals become so focused on tracking steps and sleep duration on their actigraphs, exercising or agonizing about their diets that the initial health-related goals lose their relevance. They strive for a certain body image and resting heart rate, and to reach these goals they indulge in self-discipline to maximize physical activity and curb appetite. Such individuals rarely solicit scientific information as to the actual health benefits of their exercise and food regimens and might be surprised to learn that more exercise and more diets do not necessarily lead to more health. The American Heart Association recommends roughly 30-45 minutes of physical activity daily to reduce high blood pressure and the risk of heart attacks and stroke. Even simple and straightforward walking is sufficient to meet these goals; there is no need for two-hour gym work-outs.

Why are we becoming so obsessed with self-optimization? Unfortunately, von Schirach's analysis degenerates into a diffuse diatribe against so many different elements of contemporary culture. Capitalist ideology, a rise in narcissism and egotism, industrialization and the growing technocracy, consumerism, fear of death, greed, monetization of our lives and social media are among some of the putative culprits that she invokes. It is quite likely that many of these factors play some role in the emerging pervasiveness of the self-optimization culture – not only in Germany. However, it may be useful to analyze some of the root causes and distinguish them from facilitators. Capitalist ideology is very conducive to a self-optimization culture. Creating beauty and fitness targets as well as laying out timelines to achieve these targets is analogous to developing corporate goals, strategies and milestones. Furthermore, many corporations profit from our obsession with self-optimization. Companies routinely market weight regimens, diets, exercise programs, beauty products and many other goods or services that generate huge profits if millions of potential consumers buy into the importance of life-long self-optimization. They can set the parameters for self-optimization – ideal body images – and we just obey. According to the German philosopher Byung-Chul Han, such a diffusion of market logic and obedience to pre-ordained parameters and milestones into our day-to-day lives results in an achievement society which ultimately leads to mental fatigue and burnout. In the case of "working out", it is telling that a supposedly leisurely physical activity uses the expression "work", perhaps reminding us that the mindset of work persists during the exercise period.

But why would we voluntarily accept these milestones and parameters set by others? One explanation that is not really addressed by von Schirach is that obsessive self-optimization with a focus on our body may represent a retreat from the world in which we feel disempowered. Those of us who belong to the 99% know that our voices are rarely heard or respected when it comes to most fundamental issues in society such as socioeconomic inequality, rising intolerance and other forms of discrimination or prejudice. When it comes to our bodies, we may have a sense of control and empowerment that we do not experience in our work or societal roles. Self-discipline of our body gives our life a purpose with tangible goals such as lose x pounds, exercise y hours, reduce your resting heart rate by z.

Self-optimization may be a form of Ersatzempowerment, but it comes at a great cost. As we begin to retreat from more fundamental societal issues and instead focus on controlling our bodies, we also gradually begin to lose the ability to dissent and question the meaning of actions. Working out and dieting are all about How, When and What – how do I lose weight, what are my goals, when am I going to achieve them. The most fundamental questions of our lives usually focus on the Why – but self-optimization obsesses so much about How, When and What that one rarely asks "Why am I doing this?" Yet it is the Why that gives our life meaning, and self-optimization perhaps illustrates how a purpose-driven life may lose its meaning. The fisherman prompted the tourist to think about the Why in Böll's story, and perhaps we should do the same to avoid the trap of an obsessive self-optimization culture.

Reference:

von Schirach, Ariadne. Du sollst nicht funktionieren: für eine neue Lebenskunst. Klett Cotta, 2014.

Posted by Jalees Rehman at 12:30 AM | Permalink | Comments (0)


Monday, June 19, 2017


Working On The Blockchain Gang, Part 1

by Misha Lepetic

"These are the guys that were 
too tough for the chain gang
."
 ~ Bomber

Back in the mists of time, at the dawn of the World Wide Web, the promise of an open, decentralized, disaggregated network seemed to stretch limitlessly past the horizons of doubt and cynicism. Most iconically, John Perry Barlow's A Declaration of the Independence of Cyberspace began with the stirring, uh, rejection: "Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather."

Suffice to say, this stern invocation has not aged well. For one thing, Barlow's declaration is merely concerned with governments, and doesn't mention corporations. Perhaps this was because Barlow delivered these remarks at the World Economic Forum in Davos, Switzerland, in 1996, and the matter required a certain deference. Perhaps he was under the sway of the idea - fashionable at the time - that history had indeed ended, with democracy and neoliberalism the unquestioned victors. Perhaps corporations, and capital generally, were not such a matter of concern twenty years ago as they are today. Nevertheless, in only a few years, the Wild West promise of the Web led to the giant pile-on of capital that would fuel the first dotcom bubble and its subsequent collapse, around 2001. 

The resurrection of internet entrepreneurship following that first, intemperate bender resulted in a different model, with a somewhat subtler promise. ‘Web 2.0', as it was popularized circa 2004, was premised on the idea that information was no longer static, that participants could interact with content, and that content could be assembled, on the fly, for a specific viewer. A bit further behind the scenes, Web 2.0 benevolently assumed a rich ecosystem of application programming interfaces (APIs) that would allow for seamless communication of data requests between platforms that were burgeoning with information. You can think of an API as a recipe book for how to interact with a given site's data, or a membrane that allows certain requests and not others.
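As a rough illustration of that "recipe book" idea, here is a minimal Python sketch; the platform and endpoint named in it are hypothetical, invented only for the example.

```python
import json
import urllib.request

# The "recipe book": the platform documents which requests it will answer,
# and a client simply follows that recipe. The platform and endpoint below
# are hypothetical, used only for illustration.
BASE_URL = "https://api.example-platform.com"

def get_public_profile(user_id: str) -> dict:
    # The "membrane": only documented requests such as /v1/users/{id} get
    # through; anything else the platform rejects.
    with urllib.request.urlopen(f"{BASE_URL}/v1/users/{user_id}") as resp:
        return json.load(resp)

# profile = get_public_profile("12345")  # would return structured JSON data
```

The membrane is simply that the platform answers the requests it has chosen to document and nothing else, which is also what lets it keep control of the data flowing through it.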

So our notion of the Web was abstracted upwards, from scrappy libertarians doing whatever they wanted in some curiously disembodied space, to one that was more about giving platforms the freedom they needed to interact with one another. This had several consequences (and I realize that I am being very simplistic here, but bear with me for the sake of the subsequent argument). On the one hand, the stage was set for the evolution of social media networks, which are more or less the ne plus ultra of Web 2.0. On the other, and inseparable from the first, was the growth of the vast and unregulated infrastructure that tracked users' online behavior.

Eventually, most every click, transaction and purchase would come to be harvested, mostly for the benefit of targeted advertising, but, following Edward Snowden's 2013 revelations of NSA surveillance, who is to say for what other purposes? In any event, it's not unreasonable to posit that your data has been bought and sold many, many times over the course of, say, the last decade, if not longer, and this is not changing any time soon.

It should also be mentioned that, in the course of the growth of social media, the previous notion of the ‘open Web' was utterly and decisively sacrificed. Alexis Madrigal's recent Atlantic piece goes into greater detail:

In June of 2007, the iPhone came out. Thirteen months later, Apple's App Store debuted. Suddenly, the most expedient and enjoyable way to do something was often tapping an individual icon on a screen. As smartphones took off, the amount of time that people spent on the truly open web began to dwindle… By 2013, Americans spent about as much of their time on their phones looking at Facebook as they did the whole rest of the open web… Most of the action [now] occurs within platforms like Facebook, Twitter, Instagram, Snapchat, and messaging apps, which all have carved space out of the open web.

Of course, being beholden to these giants means not just conducting the majority of one's online activities within the confines of a handful of sites. It also means that these sites are positioned to continue capturing the lion's share of revenue from these activities. As Madrigal notes, at the launch of the iPhone, five companies (Apple, Microsoft, Google, Facebook and Amazon) were worth $577 billion. Today, that value, in the form of market capitalization, is nearly $3 trillion.

*

Obviously, this is good news if you're a shareholder, or an employee (or both). It's not such good news, however, if you're an entrepreneur looking to build out the next innovative Web-based business. A startup's success depends on its eventual conversion to profitability, or its acquisition of sufficient market share, leading to a buyout. In both cases, the current context, where five companies set the terms for what is desirable scale, creates a paradox for startups: how to create sufficient momentum, when so much of consumers' attention seems to have been already acquired and walled off by those charmingly known as the ‘Five Horsemen'.

Fortunately, John Perry Barlow's libertarian ghost has never ceased haunting the circuitry of our global cyber-substrate. Not long after the launch of the iPhone, an anonymous coder going by the moniker Satoshi Nakamoto proposed BitCoin, or more accurately, the BitCoin protocol. Nakamoto, whose true identity may or may not have been revealed, intimated that he had achieved one of libertarianism's holy grails: the divorce of currency from government - or, for that matter, any institution. To do this, he welded together two separate concepts: the idea that computers would generate currency by performing calculations, and that the record of ensuing transactions of that currency would be held in common (animations always help here).

The first bit, known as ‘mining', is fairly easy to explain if we use the example of frequent flier miles. A traveler puts in a bid at a fixed price for a ticket and flies with the airline that awarded the bid. In return, the traveler earns a number of points, which can then be used to redeem discounts or free tickets for further travel. However, in this case the points are only applicable within the ecosystem of that airline - you can't transfer points to another airline, nor can you sell your points on an open, secondary market. BitCoin extends that model significantly. 

The second bit is much more interesting, and may in fact point to the next big paradigm shift in the battle of who ‘owns' the World Wide Web. This is the record of ensuing transactions, or, in BitCoin terms, the ‘distributed ledger'. In order to ensure that BitCoin remains autonomous, Nakamoto proposed that all transactions of the BitCoin system be held in common, throughout the same network of machines that are performing the mining function mentioned above. All transactions are visible, even while each transaction's participants are anonymous. As long as the ledger sitting on every node matches up with every other node's copy, the system holds and BitCoin remains in business. No central authority is required - the network itself is the clearinghouse. And anyone can join the network and support the blockchain.

This technology is known as the ‘blockchain', simply because, after a certain period of time, each newly generated block of completed transactions is appended, or chained, to the preceding series of transaction blocks. Thus the entire history of the system resides on the system, available for inspection by any of its members. This entire arrangement is made possible by the confluence of a number of factors: the deployment of some clever cryptography that ensures anonymity; and the ever-decreasing costs for bandwidth, storage and processing power that enable computing to occur on a planetary scale. 
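
A toy sketch may help make the chaining idea concrete. This is not BitCoin's actual implementation, just a minimal illustration of why a hash-linked history is tamper-evident: each block records the hash of its predecessor, so altering any past block breaks every link that follows it.

```python
# A toy hash-chain - far simpler than the real BitCoin protocol, but it shows
# why the shared history is tamper-evident.
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,  # the link back to the preceding block
    }
    # The block's identity is a hash of its own contents, including that link.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(["alice pays bob 5"], previous_hash="0" * 64)
second = make_block(["bob pays carol 2"], previous_hash=genesis["hash"])

# Any node can check the chain: rewriting the genesis block's transactions
# would change its hash, and second["previous_hash"] would no longer match.
print(second["previous_hash"] == genesis["hash"])  # True
```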

Where it gets interesting is when blockchain technology is deployed beyond currency applications. To be clear, the blockchain is insufficient by itself - even if we are looking at applications that are not financial in nature, there still has to be a system of incentives in place, incentives that will entice individuals to join, participate and grow the network. So while one of BitCoin's primary concerns is navigating the interface between the generation and transaction of virtual currency and its translation into other forms of currency, such as dollars or yen, other applications of blockchain can be incentivized by purely internal and abstract ‘tokens'. These tokens, earned or bought in much the same way that BitCoin is earned or bought, buy the right to do things within the network. In a way, this is not dissimilar to our frequent flier example above - you earn the right to additional travel, but only within the context of the awarding system.
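
As a purely illustrative sketch of the idea that tokens, rather than a central authority, grant the right to act within a network, consider the invented example below; the names, costs and rules are hypothetical, not those of any real token network.

```python
# An invented toy token network: spending tokens is the only 'permission' needed
# to perform an action. Names and costs are hypothetical.
balances = {"alice": 10, "bob": 2}                   # tokens earned or bought
ACTION_COSTS = {"store_file": 3, "post_message": 1}

def perform(user, action):
    cost = ACTION_COSTS[action]
    if balances.get(user, 0) < cost:
        return f"{user} lacks the tokens to {action}"
    balances[user] -= cost  # the right is consumed, like frequent flier miles
    return f"{user} performed {action}"

print(perform("alice", "store_file"))  # alice performed store_file
print(perform("bob", "store_file"))    # bob lacks the tokens to store_file
```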

How does this fit into the earlier discussion, where I was bemoaning the loss of the ‘open Web' to the platform tyrannies of Amazon, et al? If we're to go by their proponents, these new ‘token networks' are exactly the cure for the trend of enclosure to which the Internet has been subject for the past decade. Thanks to the blockchain, recordkeeping can be anonymous, secure and decentralized. And thanks to tokens, incentives exist for individuals and groups to join the network and participate in its growth. The more valuable the network, the more actors will want to join - and the more valuable that network's token will become. 

Moreover, the rules are clear to anyone who wants to join, and no participant need worry about any central authority suddenly deciding to change the terms of service, or siphoning off fees or data to enrich shareholders who are really silent partners in an outdated rent-seeking scenario. Policy, like everything else, is determined by the network. So it's now easier to see that token networks may be the most substantial challenge to the corporatist model of how the internet has evolved lately. At its most gloriously imagined, such a network virtually runs itself (in both senses of the term). All the trappings of corporate continuity, such as boards of directors, shareholder meetings and such, are eliminated. At least, that's how it's supposed to work.

All of this sounds terribly abstract, and I realize that I still haven't suggested what such a token network is supposed to do. It's difficult enough to wrap one's head around how BitCoin works in the first place, and when you remove the conceptual comfort that the notion of ‘currency' provides, it's not easy to divine what purpose this might have, except for perhaps separating Silicon Valley venture capitalists from their own, in fact very real, money (always a possibility). It's also valid to ask: how much of a fringe phenomenon is this? Will it really make a tangible difference in people's lives, or the economy in general? Or will it be yet another flash in the pan, breathlessly promoted by an increasingly out-of-touch tech culture that can't seem to propose solutions to anything remotely approaching a real-world problem?

Next month I'll look into these concerns, as well as examine a few examples of non-monetary token networks. I'll also examine the larger issues at stake, and speculate on how token networks might collide with the more established social, economic and especially political worlds. Until then, please consider these words from John Perry Barlow's manifesto, which actually have aged rather well: "In our world, all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits. We cannot separate the air that chokes from the air upon which wings beat."

Posted by Misha Lepetic at 12:05 AM | Permalink | Comments (0)


Monday, June 12, 2017


If you believe Western Civilization is oppressive, you will ensure it is oppressive

by Ashutosh Jogalekar

Philosopher John Locke's spirited defense of the natural rights of man should apply to all men and women, not just one's favorite factions.

When the British left India in 1947, they left a complicated legacy behind. On one hand, Indians had suffered tremendously under oppressive British rule for more than 250 years. On the other hand, India was fortunate to have been ruled by the British rather than the Germans, Spanish or Japanese. The British, with all their flaws, did not resort to putting large numbers of people in concentration camps or regularly subjecting them to the Inquisition. Their behavior in India had scant similarities with the behavior of the Germans in Namibia or the Japanese in Manchuria.

More importantly, while they were crisscrossing the world with their imperial ambitions, the British were also steeping the world in their long history of the English language, of science and the Industrial Revolution and of parliamentary democracy. When they left India, they left this legacy behind. The wise leaders of India who led the Indian freedom struggle - men like Jawaharlal Nehru, Mahatma Gandhi and B. R. Ambedkar - understood well the important role that all things British had played in the world, even as they agitated and went to jail to free themselves of British rule. Many of them were educated at Western universities like London, Cambridge and Columbia. They hated British colonialism, but they did not hate the British; once the former rulers left they preserved many aspects of their legacy, including the civil service, the great network of railways spread across the subcontinent and the English language. They incorporated British thought and values in their constitution, in their educational institutions, in their research laboratories and in their government services. Imagine what India would have been like today had Nehru and Ambedkar dismantled the civil service, banned the English language, gone back to using bullock carts and refused to adopt a system of participatory democracy, simply because all these things were British in origin.

The leaders of newly independent India thus had the immense good sense to separate the oppressor and his instruments of oppression from his enlightened side, to not throw out the baby with the bathwater. Nor was an appreciation of Western values limited to India by any means. In the early days, when the United States had not yet embarked on its foolish, paranoid misadventures in Southeast Asia, Ho Chi Minh looked toward the American Declaration of Independence as a blueprint for a free Vietnam. At the end of World War I he held the United States in great regard and tried to get an audience with Woodrow Wilson at the Versailles Conference. It was only when he realized that the Americans would join forces with the occupying French in keeping Vietnam an occupied colonial nation that Ho Chi Minh's views about the U.S. rightly soured. In other places in Southeast Asia and Africa too the formerly oppressed preserved many remnants of the oppressor's culture.

Yet today I see many, ironically in the West, not understanding the wisdom which these leaders in the East understood very well. The values bequeathed by Britain which India upheld were part of the values which the Enlightenment bequeathed to the world. These values in turn went back to key elements of Western Civilization, including Greek, Roman, Byzantine, French, German and Dutch. And simply put, Enlightenment values and Western Civilization are today under attack, in many ways from those who claim to stand by them. Both left and right are trampling on them in ways that are misleading and dangerous. They threaten to undermine centuries worth of progress.

The central tenets of the Enlightenment should be common knowledge, and yet the fact that it seems worth reiterating them is a sign of our times.

To wit, consider the following almost axiomatic statements:

Freedom of speech, religion and the press is all-important and absolute.

The individual and his property have certain natural and inalienable rights.

Truth, whatever it is, is not to be found in religious texts.

Kings and religious rulers cannot rule by fiat and are constrained by the wishes of the governed.

The world can be deciphered by rationally thinking about it.

All individuals deserve fair trials by jury and are not to be subjected to cruel punishment.

The importance of these ideas cannot be overestimated. When they were first introduced they were novel and revolutionary; we now take them for granted, perhaps too much for granted. They are in large part what allow us to distinguish ourselves as human beings, as members of the unique creative species called Homo sapiens.

The Enlightenment reached its zenith in mid-eighteenth century France, Holland and England, but its roots go back deep into the history of Western Civilization. As far back as ancient Babylon, the code of Hammurabi laid out principles of justice describing proportionate retaliation for crimes. The peak of enlightened thought before the French Enlightenment was in Periclean Athens. With Socrates, Plato and Aristotle, Athens led the way in philosophy and science, in history and drama; in some sense, almost every contemporary political and social problem and its resolution goes back to the Greeks. Even when others superseded Greek and Roman civilization, traces of the Enlightenment kept on appearing throughout Europe, even in its dark ages. For instance, the Code of the Emperor Justinian laid out many key judicial principles that we take for granted, including a right to a fair trial, a right against self-incrimination and a proscription against trying someone twice for the same crime.

In 1215, the Magna Carta became the first modern document to codify the arguments against the divine authority of kings. Even as wars and revolutions engulfed Europe during the next five hundred years, principles like government through the consent of the governed, trial by jury and the prohibition of cruel and unusual punishment got solidified through trial and error, through resistance and triumph. They saw their culmination in the English and American wars of independence and the constitutions of these countries in the seventeenth and eighteenth centuries. By the time we get to France in the mid-1750s, we have philosophers like John Locke explicitly talking about the natural rights of men and Charles-Louis Montesquieu explicitly talking about the tripartite separation of powers in government. These principles are today the bedrock of most democratic republics around the world, Western and Eastern. At the same time, let us acknowledge that Eastern ideas and thinkers – Buddha and Confucius in particular – have also contributed immensely to humanity's progress and will continue to do so. In fact, personally I believe that the concepts of self-control, detachment and moderation that the East has given us will, in the final analysis, supersede everything else. However, most of these ideas are personal and inward looking. They are also very hard to live up to for most mortals, and for one reason or another have not integrated themselves thoroughly yet into our modern ways of life. Thus, there is little doubt that modern liberal democracies as they stand today, both in the West and the East, are mostly products of Western Civilizational notions.

In many ways, the study of Western Civilization is therefore either a study of Enlightenment values or of forces – mainly religious ones – aligned against them. It shows a steady march of the humanist zeitgeist through dark periods which challenged the supremacy of these values, and of bright ones which reaffirmed them. One would think that a celebration of this progress would be beyond dispute. And yet what we see today is an attack on the essential triumphs of Western Civilization from both left and right.

Each side brings its own brand of hostility and hypocrisy to bear on the issue. As the left rightly keeps pointing out, the right often seems to forget about the great mass of humanity that was not only cast on to the sidelines but actively oppressed and enslaved, even as freedom and individual rights seemed to be taking root elsewhere for a select few. In the 17th and 18th centuries, as England and America and France were freeing themselves from monarchy and the divine rights of kings, they were actively plunging millions of men and women in Africa, India and other parts of the world into bondage and slavery and pillaging their nations. The plight of slaves being transported to the English colonies under inhuman conditions was appalling, and so was the hypocrisy of thinkers like Thomas Jefferson and George Washington who wrote about how all men are born equal while simultaneously keeping them unequal. Anyone who denies the essential hypocrisy of such liberal leaders in selectively promulgating their values would be intentionally misleading themselves and others.

Even later, as individual rights became more and more codified into constitutions and official documents, they remained confined to a minority, largely excluding people of color, indigenous people, women and poor white men from their purview. This hadn't been too different even in the crucible of democracy, Periclean Athens, where voting and democratic membership were restricted to landowning men. It was only in the late twentieth century - more than two hundred years after the Enlightenment - that these rights were fully extended to all. That's an awfully long time for what we consider as basic freedoms to seep into every stratum of society. But we aren't there yet. Even today, the right often denies the systemic oppression of people of color and likes to pretend that all is well when it comes to equality of the law; in reality, when it comes to debilitating life events like police stops and searches, prison incarceration and health emergencies, minorities, women and the poor can be disproportionately affected. The right will seldom agree with these facts, but mention crime or dependence on welfare and the right is more than happy to generalize their accusations to all minorities or illegal immigrants.

The election of Donald Trump has given voice to ugly elements of racism and xenophobia in the U.S., and there is little doubt that these elements are mostly concentrated on the right. Even if many right-wingers are decent people who don't subscribe to these views, they also don't seem to be doing much to actively oppose them. Nor are they actively opposing the right's many direct assaults on the environment and natural resources, assaults that may constitute the one political action whose crippling effects are irreversible. Meanwhile, the faux patriotism on the far right that worships men like Jefferson and Woodrow Wilson while ignoring their flaws and regurgitates catchy slogans drafted by Benjamin Franklin and others during the American Revolution conveniently pushes the oppressive and hypocritical behavior of these men under the rug. Add to this a perverse miscasting of individual and states' rights, and you end up with people celebrating the Confederate Flag and Jefferson Davis.

If criticizing this hypocrisy and rejection of the great inequities in this country's past and present were all that the left was doing, then it would be well and good. Unfortunately the left has itself started behaving in ways that aren't just equally bad but possibly worse in light of the essential function that it needs to serve in a liberal society. Let's first remember that the left is the political faction that claims to uphold individual rights and freedom of speech. But especially in the United States during the last few years, the left has instead become obsessed with playing identity politics, and both individual rights and free speech have become badly mangled victims of this obsession. For the left, individual rights and freedom of speech are important as long as they apply to their favorite political groups, most notably minorities and women. For the extreme left in particular, there is no merit to individual opinion anymore unless it is seen through the lens of the group that the individual belongs to. Nobody denies that membership in your group shapes your individual views, but the left believes that the latter basically has no independent existence; this is an active rejection of John Locke's primacy of the individual as the most important unit of society. The left has also decided that some opinions – even if they may be stating facts or provoking interesting discussion – are so offensive that they must be censored, if not by official government fiat, then by mass protest and outrage that verges on bullying. Needless to say, social media with its echo chambers and false sense of reassurance engendered by surrounding yourself with people who think just like you has greatly amplified this regressive behavior.

As is painfully familiar by now, this authoritarian behavior is playing out especially on college campuses, with a new case of "liberal" students bullying or banning conservative speakers on campus emerging almost every week. Universities are supposed to be the one place in the world where speech of all kinds is not just explicitly allowed but encouraged, but you would not see this critical function fulfilled on many college campuses today. Add to this the Orwellian construct of "microaggressions" that essentially lets anyone decide whether an action, piece of speech or event is an affront to their favorite oppressed political group, and you have a case of full-blown unofficial censorship purely based on personal whims that basically stifles any kind of disagreement. It is censorship which squarely attacks freedom of speech as espoused by Voltaire, Locke, Adams and others. As Voltaire's biographer Evelyn Hall – a woman living in Victorian times – famously said, "I disapprove of what you say, but I will defend to the death your right to say it." Seemingly a woman in Victorian times - a society that was decidedly oppressive to women - had more wisdom to defend freedom of speech than a young American liberal in the twenty-first century.

This behavior threatens to undermine and tear apart the very progressive forces which the left claims to believe in. Notably, their so-called embrace of individual rights and diversity often seems to exclude white people, and white men in particular. The same people who claim to be champions of individual rights claim that all white men are "privileged", have too many rights, are trampling on others' rights and do not deserve more. The writer Edward Luce, who has just written a book warning about the decline of progressive values in America, talks about how, at the Democratic National Convention leading up to the 2016 U.S. election, he saw pretty much every "diversity" box checked except that belonging to white working class people; it was almost as if the Democrats wanted to intentionally exclude this group. For many on the left, diversity equates only to ethnic and gender diversity; any other kind of diversity, and especially intellectual or viewpoint diversity, is to be either ignored or actively condemned. This attitude is entirely contrary to the free exchange of ideas and respect for diverse opinions that was the hallmark of Enlightenment thinking.

The claim that white men have enough rights and are being oppressive is factually contradicted by the plight of millions of poor whites who are having as miserable a time as any oppressed minority. They have lost their jobs and have lost their health insurance, they have been sold a pipe dream full of empty promises by all political parties, and in addition they find themselves mired in racist and ignorant stereotypes. The privilege about which the left keeps up its drumbeat is real, but it is also context-dependent; it can rise and ebb with time and circumstance. To illustrate with just one example, a black man in San Francisco will enjoy certain financial and social privileges that a white man in rural Ohio quite certainly won't: how can one generalize notions of privilege to all white men then, and especially those who have been dealt a bad hand? The white working class has thus found itself with almost no friend; rich white people have both Democrats and Republicans, rich and poor minorities largely have Democrats, but poor whites have no one and are being constantly demonized. No wonder they voted for Donald Trump out of desperation; he at least pretended to be their friend, while the others did not even put on a pretense. The animosity among white working class people is thus understandable and documented in many enlightening books, especially Arlie Hochschild's "Strangers in their Own Land". Even Noam Chomsky, who cannot remotely be accused of being a conservative, has sympathized with their situation and justifiable resentment. And as Chomsky says, the problem is compounded by the fact that not everyone on the left actually cares about poor minorities, since the Democratic party which they support has largely turned into a party of moneyed neoliberal white elites in the last two decades.

This singling out of favorite political groups at the expense of other oppressed ones is identity politics at its most pernicious, and it's not just hypocritical but destructive; the counter-response to selective oppression cannot also be selective oppression. As Gandhi said, an eye for an eye makes the whole world go blind. And this kind of favoritism steeped in identity politics is again at odds with John Locke's idea of putting the individual front and center. Locke was a creature of his times, so just like Jefferson he did not actively espouse individual freedom for indigenous people, but his idealization of the individual as the bearer of natural rights was clear and critical. For hundreds of years that individual was mostly white, but the response to that asymmetry cannot simply be to pick an individual of another skin color.

The general response on the left against the sins of Western Civilization and white men has been to consider the whole edifice of Western Civilization as fundamentally oppressive. In some sense this is not surprising since for many years, the history of Western Civilization was written by the victors; by white men. A strong counter-narrative emerged with books like Howard Zinn's "A People's History of the United States"; since then many others have followed suit and they have contributed very valuable, essential perspectives from the other side. Important contributions to civilizational ideas from the East have also received their dues. But the solution is not to swing to the other extreme and dismiss everything that white men in the West did or focus only on their sins, especially as viewed through the lens of our own times. That would be a classic case of throwing out the baby with the bathwater, and exactly the kind of regressive thinking that the leaders of India avoided when they overthrew the British.

Yes, there are many elements of Western Civilization that were undoubtedly oppressive, but the paradox of it was that Western Civilization and white men also simultaneously crafted many ideas and values that were gloriously progressive; ideas that could serve to guide humanity toward a better future and are applicable to all people in all times. And these ideas came from the same white men who also brought us colonialism, oppression of women and slavery. If that seems self-contradictory or inconvenient, it only confirms Walt Whitman's strident admission: "Do I contradict myself? Very well, then I contradict myself. I am large, I contain multitudes." We can celebrate Winston Churchill's wartime leadership and oratory while condemning his horrific orchestration of one of India's worst famines. We can celebrate Jefferson's plea for separation of church and state and his embrace of science while condemning his treatment of slaves; but if you want to dismantle statues of him or James Madison from public venues, then you are effectively denouncing both the slave owning practices as well as the Enlightenment values of these founding fathers.

Consider one of the best-known Enlightenment passages, the beginnings of the Declaration of Independence as enshrined in Jefferson's soaring words: "We hold these truths to be self-evident; that all men are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty and the pursuit of happiness." It is easy to dismiss the slave-owning Jefferson as a hypocrite when he wrote these words, but their immortal essence was captured well by Abraham Lincoln when he realized the young Virginian's genius in crafting them:

"All honor to Jefferson--to the man who, in the concrete pressure of a struggle for national independence by a single people, had the coolness, forecast, and capacity to introduce into a merely revolutionary document, an abstract truth, applicable to all men and all times, and so to embalm it there, that to-day, and in all coming days, it shall be a rebuke and a stumbling-block to the very harbingers of re-appearing tyranny and oppression."

Thus, Lincoln clearly recognized that whatever his flaws, Jefferson intended his words to apply not just to white people or black people or women or men, but to everyone besieged by oppression or tyranny in all times. Like a potent mathematical theorem, the abstract, universal applicability of Jefferson's words made them immortal. In light of this great contribution, Jefferson's hypocrisy in owning slaves, while unfortunate and deserving condemnation, cannot be held up as a mirror against his entire character and legacy.

In its blanket condemnation of dead white men like Jefferson, the left also fails in appreciating what is perhaps one of the most marvelous paradoxes of history. It was precisely words like these, written and codified by Jefferson, Madison and others in the American Constitution, that gradually allowed slaves, women and minorities to become full, voting citizens of the American Republic. Yes, the road was long and bloody, and yes, we aren't even there yet, but as Martin Luther King memorably put it, the arc of the moral universe definitely bent toward justice in the long term. The left ironically forgets that the same people who it rails against also created the instruments of democracy and freedom that put the levers of power into the hands of Americans of all colors and genders. There is no doubt that this triumph was made possible by the ceaseless struggles of traditionally oppressed groups, but it was also made possible by a constitution written exclusively by white men who oppressed others: Whitman's multitudinous contradictions in play again.

Along with individual rights, a major triumph of Western Civilization and the Enlightenment has been to place science, reason, facts and observations front and center. In fact in one sense, the entire history of Western Civilization can be seen as a struggle between reason and faith. This belief in science as a beacon of progress was enshrined in the Royal Society's motto extolling skepticism: "Nullius in verba", or "Nobody's word is final". Being skeptical about kings' divine rights or about truth as revealed in religious texts was a profound, revolutionary and counterintuitive idea at the time. Enlightenment values ask us to bring only the most ruthless skepticism to bear on truth-seeking, and to let the facts lead us where they do. Science is the best tool for ridding us of our prejudices, but it never promises us that its truths would be psychologically comforting or conform to our preconceived social and political beliefs. In fact, if science does not periodically make us uncomfortable about our beliefs and our place in the universe, we are almost certainly doing it wrong.

Sadly, the left and right have both played fast and loose with this critical Enlightenment value. Each side looks to science and cherry-picks facts for confirming their social and political beliefs; each side then surrounds itself with people who believe what they do, and denounces the other side as immoral or moronic. For instance, the right rejects factual data on climate change because it's contrary to their political beliefs, while the left rejects data on gender or racial differences because it's contrary to theirs. The religious right rejects evidence, while the religious left rejects vaccination. Meanwhile, each side embraces the data that the other has rejected with missionary zeal because it supports their social agenda. Data on other social or religious issues is similarly met with rebuke and rejection. The right does not want to have a reasonable discussion on socialism, while the left does not want to have a reasonable discussion on immigration or Islam. The right often fails to see the immense contribution of immigration to this country's place in the world, while the left often regards any discussion even touching on reasonable limits to immigration as xenophobia and nativism.

The greatest tragedy of this willful blindness is that where angels fear to tread, fools and demagogues willingly step in. For instance, the left's constant refusal to engage in an honest and reasonable critique of Islam and its branding of those who wish to do this as Islamophobes discourages level-headed people from entering that arena, thus paving the way for bonafide Islamophobes and white supremacists. Meanwhile, the right's refusal to accept even reasonable evidence for climate change opens the field to those who think of global warming as a secular religion with special punishments for heretics. Both sides lose, but what really loses here is the cause of truth. Since truth has already become a casualty in this era of fake news and exaggerated polemics on social media, this refusal on both sides to accept facts that are incompatible with their psychological biases will surely sound the death knell for science and rationality. Then, as Carl Sagan memorably put it, unable to distinguish between what is true and what feels good, clutching our pearls, we will gradually slide, without even knowing it, into darkness and ignorance.

We need to resurrect the cause of Enlightenment values and Western Civilization, the values espoused by Jefferson, Locke and Hume, by Philadelphia, London and Athens. The fact that flawed white men largely created them should have nothing to do with their enthusiastic acceptance and propagation, since their essential, abstract, timeless qualities have nothing to do with the color of the skin of those who thought of them; rejecting them because of the biases of their creators would be, at the very least, replacing one set of biases with another.

One way of appreciating these values is to actually resurrect them with all their glories and faults in college courses, because college is where the mind truly forms. In the last 40 years or so, the number of colleges that include Western Civilization as a required course in their curriculum has declined significantly. Emphasis is put instead on world history. It is highly rewarding to expose students to world history, but surely there is space to include a capsule history of the fundamental principles of Western Civilization as a required component of these curricula. Another strategy to leverage these ideals is to use the power of social media in a constructive manner, to use the great reaches of the Internet to bring together people who are passionate about them and who care about their preservation and transmission.

This piece may seem like it dwells more on the failures of the left than the right. For me the reason is simple: Donald Trump's election in the United States, along with the rise of similar authoritarian right-wing leaders in other countries, convinces me that at least for the foreseeable future, we won't be able to depend on the right to safeguard these values. Over the last few decades, conservative parties around the world and especially the Republican party in the United States have made their intention to retreat from the shores of science, reason and moderation clear. That does not mean that nobody on the right cares about these ideals, but it does mean that for now, the left will largely have to fill the void. In fact, by stepping up the left will in one sense simply be fulfilling the ideals enshrined by many of its heroes, including Franklin Roosevelt, Rosa Parks, Susan B. Anthony and John F. Kennedy. Conservatives in turn will have to again be the party of Abraham Lincoln and Dwight Eisenhower if they want to sustain democratic ideals, but they seem light years from being this way right now. If both sides fail to do this then libertarians will have to step in, but unfortunately libertarians comprise a minority of politically effective citizens. At one point in time, libertarians and liberals were united in sharing the values of individual rights, free speech, rational enlightenment and a fearless search for the truth, but the left seems to have sadly ceded that ground in the last few years. Their view of Western Civilization has become not only one-sided but also fundamentally pessimistic and dangerous.

Here are the fatal implications of that view: If you think Western Civilization is essentially oppressive, then you will always see it as oppressive. You will always see only the wretchedness in it. You will end up focusing only on its evils and not its great triumphs. You will constantly see darkness where you should see light. And once you relinquish stewardship of Western Civilization, there may be nobody left to stand up for liberal democracy, for science and reason, for all that is good and great that we take for granted.

You will then not just see darkness but ensure it. Surely none of us want that.

Posted by Ashutosh Jogalekar at 12:50 AM | Permalink | Comments (0)


Monday, May 22, 2017


Dismantle the Poverty Trap by Nurturing Community Trust

by Jalees Rehman

Would you rather receive $100 today or wait for a year and then receive $150? The ability to delay immediate gratification for a potentially greater payout in the future is associated with greater wealth. Several studies have shown that the poor tend to opt for immediate rewards even if they are lower, whereas the wealthy are willing to wait for greater rewards. One obvious reason for this difference is the immediate need for money. If food has to be purchased and electricity or water bills have to be paid, then the instant "reward" is a matter of necessity. Wealthier people can easily delay the reward because their basic needs for food, shelter and clothing are already met.
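
For concreteness, the arithmetic behind that choice is simple: waiting a year to receive $150 instead of $100 now is equivalent to a 50% annual return, so preferring the immediate $100 implies discounting the future at more than 50% per year. The sketch below just spells that out; it is a back-of-the-envelope illustration, not the analysis used in the studies discussed here.

```python
# Back-of-the-envelope arithmetic for the $100-now vs. $150-in-a-year choice.
immediate = 100.0
delayed = 150.0

implied_annual_return = (delayed - immediate) / immediate
print(f"Waiting is equivalent to a {implied_annual_return:.0%} annual return")  # 50%

# Someone who still takes the $100 is, in effect, discounting future money
# at more than 50% per year - for instance because rent or food must be paid today.
```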

Unfortunately, escaping from poverty often requires the ability to delay gratification for a greater payout in the future. Classic examples are the pursuit of higher education and the acquisition of specialized professional skills which can lead to better-paying jobs in the future. Attending vocational school, trade school or college paves the way for higher future wages, but one has to forego income during the educational period and even incur additional debt by taking out educational loans. Another example of delayed gratification is investing capital - whether it is purchasing a farming tool that increases productivity or investing in the stock market – which in turn can yield a greater pay-out. However, if the poor are unable to pursue more education or make other investments that will increase their income, they remain stuck in a vicious cycle of increasing poverty.

Understanding the precise reasons for why people living in poverty often make decisions that seem short-sighted, such as foregoing more education or taking on high-interest short-term loans, is the first step to help them escape poverty. The obvious common-sense fix is to ensure that the basic needs of all citizens – food, shelter, clothing, health and personal safety – are met, so that they no longer have to use all new funds for survival. This is obviously easier in the developed world, but it is not a trivial matter considering that the USA – supposedly the richest country in the world – has an alarmingly high poverty rate. It is estimated that more than 40 million people in the US live in poverty, fearing hunger and eviction from their homes. But just taking care of these basic needs may not be enough to help citizens escape poverty. A recent research study by Jon Jachimowicz at Columbia University and his colleagues investigated "myopic" (short-sighted) decision-making of people with lower income and identified an important new factor: community trust.

The researchers first used an online questionnaire (647 participants) to assess trust and asked participants to choose between a smaller payoff in the near future and a larger payoff in the distant future. They also measured community trust by asking participants to agree or disagree with statements such as "There are advantages to living in my neighborhood" or "I would like my child(ren) to be raised in the neighborhood I currently live in". They found that lower income participants were more likely to act in a short-sighted manner if they had low levels of trust in their communities. In a second online experiment, the researchers recruited roughly 100 participants from each state in the US and assessed their community trust levels. They then obtained real-world data on payday loans – a sign of very short-sighted financial decision-making because people take out cash advances at extraordinarily high interest rates that have to be paid back when they get their paycheck – for each state. They found that the average community trust for each state was related to the use of payday loans. In states with high average community trust ratings, people were less likely to take out these payday loans, and this trend remained even when the researchers took into account unemployment rates and savings rates for each state.
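
To make the state-level analysis easier to picture, here is a rough sketch of that kind of check with made-up numbers; the actual study used the real survey responses and loan data, and its statistical model differs in detail.

```python
# A rough sketch of a state-level check: does community trust predict payday-loan
# use once unemployment and savings rates are controlled for? All data are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "community_trust":  [3.1, 4.2, 2.8, 3.9, 4.5, 2.5],      # average survey rating per state
    "payday_loan_rate": [0.12, 0.06, 0.15, 0.08, 0.05, 0.17],
    "unemployment":     [0.07, 0.05, 0.08, 0.06, 0.04, 0.09],
    "savings_rate":     [0.03, 0.06, 0.02, 0.05, 0.07, 0.02],
})

model = smf.ols(
    "payday_loan_rate ~ community_trust + unemployment + savings_rate", data=df
).fit()
# A negative coefficient means higher trust goes with fewer payday loans.
print(model.params["community_trust"])
```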

Even though these findings all pointed to a clear relationship between community trust and sound financial decision-making, the results did not prove that increased community trust is an underlying cause that helps improve the soundness of financial decisions. To test this relationship in a real-world setting, the researchers conducted a study in rural Bangladesh by collaborating with an international development organization based in Bangladesh. The vast majority of participants in this study were poor even by Bangladeshi standards, earning less than $1/day per household member. The researchers adapted the community trust questionnaire and the assessment of financial decision-making for the rural population, with live interviewers asking the questions and filling out the responses for the participants. After assessing community trust and the willingness to delay financial rewards for greater payouts in the future, half of the participants received a two year intervention to increase community trust. This intervention involved volunteers from the community that acted as intermediaries between the local government and the rural population, providing input into local governance and community-level decisions (for example in the distribution of social benefits and the allocation of funds for development projects).

At the end of the two year period, participants who had received the community intervention showed significant increases in their community trust levels and they also improved their financial decision-making. They were more likely to forego immediate lower financial rewards for greater future rewards when compared to the villagers who did not receive any special intervention.

By combining correlational data from the United States with an actual real-world intervention to build community trust, the researchers show how important it is to build trust when we want to help fellow humans escape the "poverty trap". This is just an initial study with a limited group of participants and a narrow intervention that needs to be replicated in other societies and with long-term observation of the results to see how persistent the effects are. But the results should make all of us realize that just creating "jobs, jobs, jobs" is not enough. We need to invest in the infrastructures of communities and help citizens realize that they are respected members of society with a voice. Empowering individuals and ensuring their safety, dignity and human rights are necessary steps if we are serious about battling poverty.

Reference

Jachimowicz, J. M., Chafik, S., Munrat, S., Prabhu, J. C., & Weber, E. U. (2017). Community trust reduces myopic decisions of low-income individuals. Proceedings of the National Academy of Sciences, 201617395.

Posted by Jalees Rehman at 12:20 AM | Permalink | Comments (0)

Under The Radar, Part 2

by Misha Lepetic

"Machines…quell the revolt of specialized labor."
 ~ Marx

In my previous post, I wrote about alternative ways of viewing the encroaching effects of automation on employment. I suggested that, instead of viewing it as a zero-sum game, with industry hell-bent on automating everyone's jobs out of existence, it is rather a phenomenon driven by firms' needs to maintain profitability and market share. In this sense, automation - and technology more generally - is an optimization function, but only in a ‘local' sense. The character of employment required by a firm is only commensurate with the needs that it can foresee in the near future. So for all the talk of a ‘post-work' future, we won't get there any time soon.

Nevertheless, this leaves open an important succeeding question: What does the technological substitution of labor actually look like, and what, if anything, can be done about it? The first thing that ought to be made clear is that the process of substitution is neither neat nor obvious. Introducing a single robot into the workplace does not necessarily displace a single human being. Indeed, in the case of industrial manufacturing, it may be more: a factory making cell phone parts in Dongguan, China, recently automated much of its operations and saw its headcount plummet from 650 to 60 workers. In a further blow against humanity, the output of the factory increased nearly threefold, and product defect rates declined from 25% to less than 5%. 

It's worth noting that a factory making cellphone parts is an ideal subject for automation. A fully automated factory floor is the final reductio that, one might argue, began with Adam Smith's exposition of the power of the division of labor. But regardless of the factory's output - whether it's Smith's pins or components for mobiles - the fact is that we are making the same thing, thousands of times over. However, while significant, this kind of specialized manufacturing is but a fraction of global economic output. 

As one leaves the carefully controlled confines of a plant, technological substitution becomes less effective. Consider the phenomenon of driverless cars, another favorite bogeyman of automation's Cassandras. To continue with the example of the above factory, let's look at the problem of distribution. The firm may opt to replace its drivers with autonomous vehicles, but at the moment there is an enormous difference between designing a self-driving car that will handle the predictability of long stretches of open road, versus the intricacies of city driving, where potholes, unpredictable pedestrians and other phenomena create much riskier scenarios. And in the case of firms whose very business model is distribution (such as UPS), a human being is still needed to perform the final handoff of the package to its recipient. 

These fairly obvious remarks nevertheless intend to illustrate a larger point: automation is never not human-assisted. The question then becomes what proportion of a job is automated: as in the case of the truck driver, a journey may be 80% long haul, which is handled by the automated system, but the remaining 20% of navigating urban areas, or delivering the product to its recipient, is still the driver's ‘job'. For the firm, this is a decidedly awkward position. Whereas the factory is an ideal type - automate everything, fire 90% of your staff, and watch productivity and quality soar - interaction with the supply chain, or customers, or just the world itself, still requires people, and people expect to be paid. 

*

Interestingly, this process is not just manifest in the stubbornly physical world, but also in services that we might at first blush consider to be the exclusive domain of code. I am thinking about such things as chatbots, content monitoring of social media, and training of artificial intelligences on data sets of one sort or another. Writing in the Harvard Business Review, Mary Gray and Siddharth Suri note that

The truth is, AI is as "fully-automated" as the Great and Powerful Oz was in that famous scene from the classic film, where Dorothy and friends realize that the great wizard is simply a man manically pulling levers from behind a curtain. This blend of AI and humans, who follow through when the AI falls short, isn't going away anytime soon.

Shreeharsh Kelkar puts it another way: technology in the workplace is not apart from labor, and the interaction between labor and technology should be seen as

…an assemblage that embodies a reconfigured version of human-machine relations where humans are constructed, through digital interfaces, as flexible inputs and/or supervisors of software programs that in turn perform a wide-variety of small-bore high-intensity computational tasks (involving primarily the processing of large amounts of data and computing statistical similarities). It is this reconfigured assemblage that promises to change our workplaces, rather than any specific technological advance. The [research] agenda has been to concentrate on the human labor that makes this assemblage function, and to argue that it is precisely the invisibility of this labor that allows the technology to seem autonomous.

Crucially, this incomplete technological substitution is itself a dynamic process, one in which employees' contributions are ever-receding and work is never stable, but rather occupies a margin that is not unlike piecework. If your job is really about "doing the things that automation can't do…yet" then all sorts of other things break down. The idea of mastery of a profession is eroded, and the prospects of a stable career are diminished. Riffing off of Marx, the laborer is not only alienated from the product of their labor, but is further alienated by the processes of capital that allow ever less consequential input into the creation of that product. On a larger scale, we can speculate that the ever-elaborating infusion of technology into what were human-only tasks leads to, as Gray and Suri put it, "the rapid creation and destruction of temporary labor markets for new types of humans-in-the-loop tasks."

The speed of this creation and destruction is a defining feature of the current situation, and it calls into question the effectiveness of older, established programs that were designed to aid worker retraining, such as the Trade Adjustment Assistance Program, first funded in 1974. When one takes into account that, according to one estimate, more than 90% of jobs created between 2005 and 2015 were contract gigs (ie, not full-time), "Retraining for what?" becomes a legitimate question. Obviously, these part-time arrangements do not all belong to the ‘human-machine assemblage' postulated above, but at the same time this does not lend comfort to the received wisdom that technology will continue to create new opportunities for labor that are equal to or better than what came before. In fact, that position has come increasingly under attack.

*

Just as technology cannot be viewed as monolithic, neither can the workforce. It's worth asking who will bear the brunt of these changes, and what recourse there might be. In software there is the concept of LIFO, or ‘last in, first out', used to describe the order in which items can be added to and removed from a data structure. This idea may be applied just as easily to the workforce - those who are the most recent arrivals tend to have the most tenuous hold. A recent piece in Foreign Policy speculated on the implications of automation on women, who entered the workforce substantially only during the mid-20th century:

Women are projected to take the biggest hits to jobs in the near future, according to a World Economic Forum report predicting that 5.1 million positions worldwide will be lost by 2020… Men will see nearly 4 million job losses and 1.4 million gains (approximately one new job created for every three lost). In comparison, women will face 3 million job losses and only 0.55 million gains (more than five jobs lost for every one gained).
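
Spelling out the arithmetic implied by those figures (a back-of-the-envelope check of the quoted ratios, nothing more):

```python
# Ratios implied by the WEF figures quoted above (in millions of jobs).
men_lost, men_gained = 4.0, 1.4
women_lost, women_gained = 3.0, 0.55

print(men_lost / men_gained)      # ~2.9 jobs lost per job gained for men
print(women_lost / women_gained)  # ~5.5 jobs lost per job gained for women
```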

One could make similar arguments for other segments of the labor force that have faced structural challenges, for example, minorities, immigrants and those possessing only a high school education. Moreover, the rapidity with which temporary labor markets will continue to evolve privileges the agile, who can not only adapt to new work, but can also physically relocate to new markets. Unfortunately, there is strong evidence that labor mobility has been declining across the United States since the 1970s. (This is yet another strong argument for universal healthcare.)

As far as recourse goes, labor is vastly ill-prepared. As Brishen Rogers noted in a thoughtful review of automation, unemployment and the prospects for universal basic income written for the Boston Review, 

Our labor and employment laws still envision the economy of the 1930s, which was dominated by massive industrial firms with hundreds of thousands of direct employees. Those laws rarely touch modern "fissured" work relationships such as Uber's relationship with its drivers, Walmart's relationship with its suppliers' workers, or McDonald's relationship with its franchisees' workers. Those laws also limit workers' ability to unionize or bargain effectively since they encourage bargaining at the firm or even plant level whereas today's modal workplace is growing ever smaller. Workers have fewer and fewer means to exert power on their own behalf.

In fact, this idea of power, or rather powerlessness, is perhaps the single greatest indicator of the difficulties in store for the labor force. With unions in disarray, the technologically-driven deskilling of the workforce continues apace. For example, a New York Times profile of Travis Kalanick, the CEO of ridesharing service Uber, noted that "roughly a quarter of [Uber's] drivers turn over on average every three months. According to an internal slide deck on driver income levels…Uber considered Lyft and McDonald's its main competition for attracting new drivers." Uber's technology platform is so easy to use (and its recruitment process so tolerant) that, rudely put, if you can flip burgers you can also be an Uber driver, and vice versa. 

The point about technology as a form of power over labor cannot be overstated. This is, in fact, its primary consequence. As Brishen Rogers notes, "technology is not a substitute for menial labor in this story but rather one among many tools to keep labor costs down by exerting power over workers." In order to effectively interface with encroaching automation, it is necessary that human interactions with technology be measured, evaluated, stored and recalled whenever needed. Thus the Guardian observes that

In the logistics sector, companies are using technology not to replace warehouse staff and couriers, but to put them under increasing surveillance to control their working patterns, reducing employee autonomy, skill and dignity. Wrist-based technology allows bosses to monitor activity minute-by-minute, including bathroom breaks.

If labor is to formulate new and effective means of dissenting from the emergent status quo, it is here that the battle must be met. The fully automated factory, while most plainly visible, is nevertheless a red herring compared to the myriad ways in which labor has already submitted to a status that grows ever more fraught with contingency. When he wrote "Machines were the weapon employed by the capitalists to quell the revolt of specialized labor," Marx was not thinking of the wristbands that Amazon workers wear, but I can't imagine he would have been that surprised, either.

Posted by Misha Lepetic at 12:05 AM | Permalink | Comments (0)


Monday, May 15, 2017


Why We Should Repeal Obamacare and not Replace It with Another Insurance Plan: Thinking Out of the Box for a Health Care Solution

by Carol A Westbrook

Before you, progressive reader, quit in disgust after reading the title, or you, conservative reader, quit in disgust after reading a few more paragraphs, please hear me out. I'm proposing that we repeal Obamacare (the Affordable Care Act, or ACA) but not replace it with another medical insurance program. Instead, I propose that we re-think the entire concept of how we provide health care in this country.

The ACA's stated purpose is "to ensure that all Americans have access to high-quality, affordable health care." Regardless of whether or not you believe good health is a fundamental human right, it is inexcusable for an affluent, first-world country like ours not to provide health care for its citizens. The good health of our nation is vitally important to its success, guaranteeing as it does a capable workforce, a strong military, and a healthy upcoming generation. However, I have seen the results of Obamacare from many perspectives, including that of a physician provider in a rural community, as well as that of a personal user of both insurance and Medicare. I do not believe the ACA succeeded in meeting its objectives.

It is true that the ACA provided health care insurance for millions of Americans who didn't have it previously, expanded Medicaid for the uninsured, got rid of the pre-existing condition exclusions, allowed our adult children to remain on our policies longer, and started the ball rolling on electronic records. These are great results.

But the ACA also caused the cost of health insurance to skyrocket, caused many people to lose their coverage, and, for some, their jobs. It forced many small doctors' practices to close, especially in rural areas, resulting in an overall decline in the quality of care in many regions. It limited patients' choices of physicians and hospitals, separating patients from their longstanding doctors. There were no checks on health care costs, which even today continue to increase. But worst of all, it mandated that our health care would be taken out of the hands of doctors and put into the hands of businessmen--the insurance companies.

To elaborate on these points:

1. Obamacare's requirement that insurance companies determine our health care meant that decisions would be based primarily on the companies' profits rather than on their customers' medical needs. In other words, your insurer determines which provider, lab, or hospital you can use, which drugs it will reimburse, and how much it will pay out for your claims, always with an eye on its bottom line. And every dollar that a business keeps for its shareholders or its executives' salaries is one less dollar that is paid out for your health care. Your illnesses are subsidizing an industry.

2. Medical insurance is not a health care plan. It is insurance, which means it is a shared-risk program. By definition, insurance collects money from its participants and uses this pool to pay for participants' medical costs. However, if the only people who sign up are ill and need payouts, there won't be enough money in the pool: either everyone has to pay more, or the pool can cover only a fraction of the costs. The result is higher premiums and higher co-pays, or the company goes out of business. (A small numerical sketch of this risk-pool arithmetic appears after this list.) Both things happened with the ACA. Obamacare premiums skyrocketed after the first year, and private insurance followed suit. Many insurance companies chose to leave the Obamacare market, and in some areas only one insurer remained, leaving no market competition to hold down costs.

3. Health insurance in the US is traditionally tied to a job. Because of the ACA mandate that businesses with 50 or more full-time employees provide insurance to their full-time workers, many workers either lost their jobs or had their hours cut so that the business would fall below the mandate's threshold and avoid the high costs. Additionally, many workers have seasonal work and never had benefits. With lower incomes and some assets such as savings, cars, and homes, these uninsured workers were not destitute, but they couldn't afford the high Obamacare premiums, and their incomes were still too high to qualify for a subsidy. Previously these working poor would take their chances; now, they have to pay tax penalties or buy high-cost health insurance, neither of which they can afford.

4. Obamacare did nothing to contain health care costs, which continue to rise. It's simple economics: an increased number of insured people means increased demand for services from a fixed supply of providers, so the cost of those services goes up. Unlike Medicare, the ACA built in no checks on what hospitals or pharmacies could charge.

5. Obamacare required implementation of provisions such as fully electronic medical records, electronic prescribing, and participation in large "accountable care organizations." Such mandates were impractical, unaffordable, or impossible for independent physicians and small practices, especially those in underserved rural areas. Many were forced out of business.

6. The combination of increased demand for health care against a fixed supply of providers, along with the closing of many practices, led people to turn to for-profit care centers, pharmacist providers, and poorly trained PAs and nurse practitioners for their care, instead of licensed medical doctors. The result was lower quality health care.

7. For the ACA model to be viable, everyone had to be insured, even those who were in good health, or pay a tax penalty. This makes sense from an insurance perspective, but it is anathema to the American way of life, which maintains that personal health decisions should be your own.
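
To make the risk-pool arithmetic in point 2 concrete, here is a minimal illustrative sketch in Python. The enrollment counts and average claim costs are purely hypothetical assumptions chosen to show the mechanism, not actual ACA figures, and administrative overhead and profit are ignored.

# Toy model of an insurance risk pool -- all numbers below are hypothetical.

def break_even_premium(n_healthy, n_sick, claim_healthy, claim_sick):
    """Annual premium at which collected premiums exactly cover expected claims."""
    total_claims = n_healthy * claim_healthy + n_sick * claim_sick
    return total_claims / (n_healthy + n_sick)

# A balanced pool: many healthy enrollees subsidize the few who are sick.
print(break_even_premium(9000, 1000, 500, 30000))   # ~3,450 per enrollee

# Healthy enrollees drop out while the sick remain: the same claims are spread
# over far fewer premium payers, so the break-even premium roughly triples.
print(break_even_premium(2000, 1000, 500, 30000))   # ~10,333 per enrollee

The same arithmetic is why the mandate in point 7 exists: without healthy enrollees paying in, premiums spiral upward or the insurer exits the market.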

If Obamacare is repealed, then how can we still provide health care in the US? The problem is not the politics, but the health care system itself. Here is how I would do it.

1. The most important step is to lower the actual cost of health care. The US spends almost 20% of its GDP on health care because our insurance pays whatever is asked, even though we don't know what the real cost is! This is about twice as much per person as other first-world countries--whose medical services are at least as good as those in the US!

How do we contain these costs? First, I would limit the amount that can be charged for medications, for physician services, for tests and for hospitalizations. Every other country does this, and we do it here, too, for Medicare. This is extremely unpopular with lawmakers, who rely on big money from the pharmaceutical, insurance, and hospital industry lobbies. We need lawmakers who are not influenced by lobbyists.

Next, we should have open, transparent pricing for all drugs and services, so people can comparison shop, perhaps even offering rebates if they save money for their payor.

Third, I would make medical training tuition-free, and expand the number of physicians and residency positions, thereby increasing the supply of doctors who are comfortable working for lower salaries. Most doctors-to-be are not in it for the money, but find their school debts are so high that they have no choice but to take high-paying jobs, forgoing work in a rural or inner-city practice.

Fourth, I would get rid of the middlemen. Health care has become a profit center for intermediaries who take a cut whenever money changes hands, and for layers of hospital administration and health-related government bureaucracy, many of which are not necessary in order to provide medical care.

2.  Medical insurance should not be tethered to employment, which automatically excludes those who are unable to work or cannot get a job--the ones who need health care the most! No other country in the world does it this way.

3.  There should be other options for health care coverage in addition to risk pool-based insurance. If health care costs are low enough, insurance premiums will also become low enough that many will choose to buy coverage; others may join cooperative organizations in which people pool resources and pay out as needed; still others may choose to pay outright for what they need and use, adding only catastrophic coverage. These options would make health care a truly free market.

4.  My personal preference is to develop a universal, free health care system, as you find in most other first-world countries. One way to do this is to start with Medicare. Currently, all you need is a Social Security number and to be age 65 or older to qualify for basic hospital coverage (Medicare A), while you must pay for outpatient care insurance on a progressive, income-based schedule (Medicare B). Supplemental insurance is required to pay for medications and additional services. Medicare tightly regulates how much can be charged for clinical services (but not medications). Why not offer this to Americans of all ages? Many states are considering a single-payor solution, but it is unlikely to be viable for our entire country unless other cost reforms are in place--see point #1, above. And for many it may just be too "socialist" to be acceptable.

It is surprising that we have yet to come to terms with the fact that we all pay for everyone else's health care in one way or another. From the underinsured who use emergency rooms for primary care, to the uninsured brought to the ED after an auto accident, to the indigent or the mentally ill in dire straits, to the elderly person who cannot afford medications and ends up hospitalized--eventually this comes out of our tax money, or indirectly out of our own costs for medical care. Even healthy young adults who never need to see a doctor and eschew buying medical insurance will eventually grow old and die, and somehow we have to pay for their care as well. We might as well own up to this and make health care a reality for all.

Adapted from my April 1, 2017 post on Ask-An-Oncologist.com

Posted by Carol A Westbrook at 12:25 AM | Permalink | Comments (0)


Monday, April 24, 2017


Let Them Make My Cake: Exporting Burden, Importing Convenience in the Externalization Society

by Jalees Rehman

On 5 November 2015, an iron ore tailings dam burst in Bento Rodrigues near the Brazilian city of Mariana, releasing 60 million cubic meters of reddish-brown mud. This toxic flood buried neighboring villages and flowed into the Rio Doce, contaminating the river with several hazardous metals including mercury, arsenic and chromium as well as potentially harmful bacteria. The devastating and perhaps irreparable damage to the ecosystem and human health caused by this incident is the reason why it is seen as one of the biggest environmental disasters in the history of Brazil. The German sociologist Stephan Lessenich uses this catastrophe as a starting point to introduce the concept of the Externalisierungsgesellschaft (externalization society) in his book Neben uns die Sintflut: Die Externalisierungsgesellschaft und ihr Preis ("Around us, the deluge: The externalization society and its cost").

What is the externalization society? Lessenich uses this expression to describe how developed countries such as the United States, Japan and Germany transfer or externalize risks and burdens to developing countries in South America, Africa and Asia. The Bento Rodrigues disaster is an example of the environmental risk that is externalized. Extracting metals that are predominantly used by technology-hungry consumers in developed countries invariably generates toxic waste which poses a great risk for the indigenous population of many developing countries. The externalized environmental risks are not limited to those associated with mining raw materials. The developed world is increasingly exporting its trash into the third world.

The US, for example, is the world's largest exporter of paper trash, exporting scrap paper worth US$ 3.1 billion each year. The US is also the largest producer of electronic waste (E-Waste), estimated at more than 7 million tons of E-Waste per year (PDF). Every new smartphone or tablet release generates mountains of E-Waste. One would hope that these devices can be recycled, but true recycling and re-using of electronic components is quite costly and time-consuming, and it is often not clear which electronics actually get recycled. To track the fate of electronics, Jim Puckett from the Basel Action Network and his colleagues placed GPS-tracking devices in old electronics that were dropped off at US-based recycling centers. They found that a third of the "recycled" electronics were shipped overseas to countries such as Mexico, Taiwan, China, Pakistan, Thailand and Kenya. Puckett used the GPS signals to identify the sites where the E-Waste ended up and visited one such location in Hong Kong, where he found that the "recycled" electronics were being dismantled in junkyards by migrant workers from mainland China who were not wearing any protective clothing to shield them from the hazardous materials released during the extraction of salvageable E-Waste components. There are many regulations that restrict the trading of E-Waste, but the United Nations Environment Programme (UNEP) estimates that up to 90% of the world's E-Waste is traded or dumped illegally.

Exporting environmental risks to developing countries by either outsourcing high-risk extraction of raw materials or simply dumping hazardous waste is just one example of externalization according to Lessenich. Health risks and poverty are also outsourced. Lessenich's concept of the externalization society isn't just another critique of the global inequality that we so often hear about. The fundamental principle of the externalization society is the interdependence between the "imperial lifestyle" of wealth and comfort in the developed world and the "wretched lifestyle" of poverty and hardship in the developing world. If those of us who live in the developed world want the convenience of upgrading our smartphones every few years or buying cheap cotton t-shirts, then we need those who manufacture these products in the developing world to be paid lousy wages. If those workers were paid humane wages and their employers provided health insurance or pension plans, then the cost of the products would be incompatible with our current economy and lifestyle, which are fueled by consumerism and the capitalist imperative of incessant growth.

The pillars of the externalization society are indifference and ignorance. We are indifferent because we see the differential in lifestyle as a Selbstverständlichkeit – a German word for obviousness or taken-for-grantedness. They were born in developing countries, so of course they have to struggle – tough luck, they ended up with the wrong lottery tickets. This Selbstverständlichkeit also extends to the limited mobility of people born in the developing world. They lack the birthright of developed-world citizens, whose passports allow them to either travel visa-free or obtain a visa to nearly any country in the world with minimal effort. This veneer of Selbstverständlichkeit is easiest to maintain if "they" and "their" problems are invisible, allowing us to ignore the interdependence between our good fortune and their misery. We might see images of the toxic flood in Brazil, but few, if any, members of the externalization society will link the mining of cheap iron in Brazil to the utensils they use in their everyday life.

A decade ago, disposable single-use coffee pods such as Keurig K-cups or Nespresso pods were extremely rare, but by 2014, K-cup manufacturers sold a mind-boggling 9 billion K-cups! A new need for disposable products, one previously met by standard coffee machines, arose without any consideration of its environmental and global impact. In theory, K-cups are recyclable, but this would require carefully separating the paper, the plastic and the aluminum top. It is not clear how many K-cups are properly recycled, and the E-Waste example shows that even if items are transported to recycling centers, that does not necessarily mean that they will be successfully recycled. Prior to the advent of coffee pods, our coffee demands had been easily met without generating additional mountains of disposable plastic and aluminum coffee pod trash. Out of nowhere, there arose a new need for aluminum, which in turn is extracted from the aluminum ore bauxite – another process that generates toxic waste. Instead of feeling a sense of absolution when we drop a disposable item into a recycling bin, we should simply curtail unnecessary consumption of products in disposable containers.

How do we overcome the externalization society? We can make concerted efforts through advocacy, education and regulation: restrict the export of environmental waste, improve health and safety conditions for workers in the developing world, and rein in our consumerist excesses by clarifying the interdependence between wealth in the externalization society and poverty in the developing world, as well as the moral imperative to end this inequality and asymmetry. Numerous advocates have attempted this approach for decades with limited success. Instead of appealing to the ethics of interdependence, a more effective approach may be to educate each other about the consequences of that interdependence. When millions of refugees show up at the doorstep of the externalization society, "they" are no longer invisible. One can blame wars, religious extremism and political ideologies for the misery of the refugees, but it becomes harder to ignore the extent and central role of the underlying inequality. Creating humane working and living conditions for people in the developing world is perhaps the most effective way to stop the so-called "refugee crisis".

Global climate change is another threat to the externalization society, a threat of its own making. Transferring carbon footprints and pollution to other countries does not change the fact that the whole planet suffers the consequences of climate change. Political leaders of the externalization society often demand the closing of borders, the erection of walls and the expansion of their armed forces so that they are less likely to have to confront the victims of their externalization, but no army or wall is strong enough to lower rising water levels or stabilize the climate. The externalization society will end not because of a crisis of conscience but because its excesses are undermining its own existence.

Reference

Lessenich, S. (2016). Neben uns die Sintflut: Die Externalisierungsgesellschaft und ihr Preis. Hanser Berlin. 

Posted by Jalees Rehman at 12:25 AM | Permalink | Comments (0)


Monday, February 27, 2017


Politics Trump Healthcare Information: News Coverage of the Affordable Care Act

by Jalees Rehman

The Affordable Care Act, also known as the "Patient Protection and Affordable Care Act", "Obamacare" or the ACA, is a comprehensive healthcare reform law enacted in March 2010 which profoundly changed healthcare in the United States. This reform allowed millions of previously uninsured Americans to gain health insurance by establishing several new measures, including expanding the federal Medicaid health insurance coverage program, introducing the rule that patients with pre-existing illnesses could no longer be rejected or overcharged by health insurance companies, and allowing dependents to remain on their parents' health insurance plan until the age of 26. The widespread increase in health insurance coverage – especially for vulnerable Americans who were unemployed, underemployed or worked for employers that did not provide health insurance benefits – was also accompanied by new regulations targeting the healthcare system itself. Healthcare providers and hospitals were provided with financial incentives to introduce electronic medical records and healthcare quality metrics.

As someone who grew up in Germany, where health insurance coverage is guaranteed for everyone, I assumed that over time the vast majority of Americans would appreciate the benefits of universal coverage. One no longer has to fear financial bankruptcy as a consequence of a major illness, and government-backed health insurance also provides peace of mind when changing jobs. Instead of accepting employment primarily because it offers health benefits, one can instead choose a job based on the nature of the work. But I was surprised to see the profound antipathy towards this new law, especially among Americans who identified themselves as conservatives or Republicans, even if they were potential beneficiaries of the reform. Was the hatred of progressive-liberal views, the Democrats and President Obama, who had passed the ACA, so intense among Republicans that they were willing to relinquish the benefits of universal health coverage for the sake of their political ideology? Or were they simply unaware of the actual content of the law and opposed it purely for political reasons?

A recent study published by a team of researchers led by Sarah Gollust at the University of Minnesota may shed some light on this question. Gollust and her colleagues analyzed 1,569 local evening television news stories related to the ACA that were aired in the United States during the early months of the health care reform's rollout (between October 1, 2013, and April 19, 2014). They focused on local television news broadcasts because these continue to be the primary source of news for Americans, especially those age 50 and older. A recent Pew survey showed that 57% of all U.S. adults rely on television for their news, and within this group local TV news (46%) is a more common source than cable news (31%) or network news (30%).

Gollust and colleagues found that 55% of the news stories either focused on the politics of the ACA, such as political disagreements over its implementation (26.5%), or combined information about its politics with information on how it would affect healthcare insurance options (28.6%). Only 45% of the news stories focused exclusively on the healthcare insurance options provided by the law. The politics-focused news stories were also more likely to refer to the law as "Obamacare", whereas insurance-focused news segments used the official name "Affordable Care Act" or "ACA". Surprisingly, the expansion of Medicaid, which was one of the cornerstones of the ACA because it would increase access to health insurance for millions of Americans, was often ignored. Only 7.4% of news stories mentioned Medicaid at all, and only 5% had a Medicaid focus.

What were the sources of information used for the news stories? President Obama was cited in nearly 40% of the stories, whereas other sources included White House staff or other federal executive agencies (28.7%), Republican (22.3%) or Democratic (15.9%) politicians and officials. Researchers, academics or members of think tanks and foundations were cited in only 3.9% of the news stories about the ACA even though they could have provided important scholarly insights about the ACA and its consequences for individual healthcare as well as the healthcare system in general.

The study by Gollust and colleagues has its limitations. It did not analyze TV network news, cable news, or online news outlets which have significantly gained in importance as news sources during the past decade. The researchers also did not analyze news stories aired after April 2014 which may have been a better reflection of initial experiences of previously uninsured individuals who signed up for health insurance through the mechanisms provided by the ACA. Despite these limitations, the study suggests that one major reason for the strong opposition among Republicans against the ACA may have been the fact that it was often framed in a political context and understated the profound effects that the ACA had on access to healthcare and the reform of the healthcare system itself.

During the 2016 election campaign, many Republican politicians used the idea of "repealing" the ACA to energize their voters, without necessarily clarifying what exactly they wanted to repeal. Should all the aspects of the ACA – from the Medicaid expansion to the new healthcare quality metrics in hospitals – be repealed? If voters relied on local television news to learn about the ACA, and if this coverage – as Gollust's study suggests – viewed the ACA predominantly as a political entity, then it is not surprising that voters failed to demand nuanced views from politicians who vowed to repeal the law. The research also highlights the important role that television reporting plays in framing the debate about healthcare reform. By emphasizing the actual content of the healthcare reform and its medical implications, and by using more scholars instead of politicians as information sources, these media outlets could educate the public about the law.

There are many legitimate debates about the pros and cons of the healthcare reform that are not rooted in politics. For example, electronic medical records allow healthcare providers to easily monitor the results of laboratory tests and avoid wasting patients' time and money on unnecessary tests that may have been ordered by another provider. However, physicians who are continuously staring at their screens to scroll through test results may not be able to form the interpersonal bond that is critical for a patient-doctor relationship. One could consider modifying the requirements and developing better record-keeping measures to ensure a balance between adequate documentation and sufficient face-to-face doctor-patient time. The ACA's push to track the quality of healthcare delivery and penalize hospitals or providers who deliver suboptimal care could significantly improve adherence to guidelines based on sound science. On the other hand, one cannot demand robot-like adherence to guidelines, especially when treating severely ill, complex patients who require highly individualized care. These content-driven discussions are more productive than wholesale political endorsements or rejections of the healthcare reform.

Healthcare will always be a political issue, but all of us – engaged citizens, patients, healthcare providers, journalists – need to do our part to ensure that the debates about this issue, which directly impacts millions of lives, are driven primarily by objective information and not by political ideologies.

Reference:

Gollust, S. E., Baum, L. M., Niederdeppe, J., Barry, C. L., & Fowler, E. F. (2017). Local Television News Coverage of the Affordable Care Act: Emphasizing Politics Over Consumer Information. American Journal of Public Health, (published online Feb 16, 2017).

Posted by Jalees Rehman at 12:40 AM | Permalink | Comments (0)
