Wednesday, August 31, 2016
SEAMUS HEANEY ON WILLIAM WORDSWORTH’S ONE BIG TRUTH
Seamus Heaney died three years ago. But not before he penned this.
From Literary Hub:
As a child, William Wordsworth imagined he heard the moorlands breathing down his neck; he rowed in panic when he thought a cliff was pursuing him across moonlit water; and once, when he found himself on the hills east of Penrith Beacon, beside a gibbet where a murderer had been executed, the place and its associations were enough to send him fleeing in terror to the beacon summit.
Every childhood has its share of such uncanny moments. Nowadays, however, it is easy to underestimate the originality and confidence of a writer who came to consciousness in the far from child-centred eighteenth century and then managed to force a way through its literary conventions and its established modes of understanding: by intuition and introspection he recognized that such moments were not only the foundation of his sensibility, but the clue to his fulfilled identity.
By his late twenties, Wordsworth knew this one big truth, and during the next ten years he kept developing its implications with intense excitement, industry and purpose. During this period, he also elaborated a personal idiom: “nature” and “imagination” are not words that belong exclusively to Wordsworth, yet they keep coming up when we consider his achievement, which is the largest and most securely founded in the canon of native English poetry since Milton. He is an indispensable figure in the evolution of modern writing, a finder and keeper of the self-as-subject, a theorist and apologist whose Preface to Lyrical Ballads (1802) remains definitive.
How the simple definition of a hydrogen bond gives us a glimpse into the heart of chemistry
Ashutosh Jogalekar in The Curious Wavefunction:
A few years ago, a committee organized by the International Union of Pure and Applied Chemistry (IUPAC) - the international body of chemists that defines chemical terms and guides the lexicon of the field - met to debate and finalize the precise definition of a hydrogen bond.
Defining a hydrogen bond in the year 2011? Hydrogen bonds have been known for at least seventy years now. It was in the 1940s that Linus Pauling defined them as foundational elements of protein structure; the glue that holds molecules including life-giving biological molecules like proteins and DNA together. Water molecules form hydrogen bonds with each other, and this feature accounts for water's unique properties. Whether it's sculpting the shape and form of DNA, governing the properties of materials or orchestrating the delicate dance of biochemistry performed by enzymes, these interactions are essential. Simply put, without hydrogen bonds, not just modern civilization but life itself would cease to exist. No wonder that they have been extensively studied in hundreds of thousands of molecular structures since Pauling highlighted them in the 1940s. Today no chemistry textbook would be legitimate without a section on hydrogen bonding. The concept of a hydrogen bond has become as familiar and important to a chemist as the concept of an electromagnetic wave or pressure is to a physicist.
What the devil, then, were chemists doing defining them in 2011?
The Co-Founder of n+1 Is ‘Against Everything’
Daphne Merkin in the New York Times:
We live in singularly unsubtle times, when presidential candidates shout invective instead of delivering talking points and Twitter posts privilege catchiness over nuance. Then again, ours has never been a culture to value the reflective life — unlike in France, say, where public intellectuals hold political positions, or England, where Oxbridge dons form an aristocracy of the mind. Except for a brief period during the last century, from the 1930s through the 1960s or so, when an active intelligentsia (even the word sounds dated) loosely known as the New York Intellectuals formed around a clutch of publications including Partisan Review, The Nation and Commentary, and critics like Lionel Trilling, Dwight Macdonald and Mary McCarthy had a say on matters literary and political, we tend to give short shrift to intellection for its own sake, regarding it as something best corralled off in the academy.
And indeed, for the last 20 years, instead of thinkers, we have seen the rise of pundits, those ubiquitous opiners on the news of the day who take the short view of necessity. This trend has been bucked by a handful of serious-minded magazines with a spectacularly small readership and by the occasional erudite voice in newspapers like this one. Sensing a gap in the discourse, a group of young, mostly Harvard-educated writers started a publication called n+1 in 2004, which attempted to fill the void where Partisan Review and the like had once engaged in “the life of significant contention,” as Diana Trilling put it. Which brings us, happily, to the occasion of “Against Everything,” a new collection of essays by Mark Greif, an editor at n+1 (where most of these pieces first appeared) and a frequent contributor since its inception on widely disparate themes.
Joseph Stiglitz: It’s Time to Get Radical on Inequality
happy death poem
Donald Revell has mastered a poetic genre few poets even attempt: the happy poem. That’s not to say that his poetry doesn’t grapple with darkness—it does, and deeply. This poem is called “Death,” after all, and Revell tries as hard as he can in this small space to meet mortality head-on. One of Revell’s possible goals is to engender a sense of awe: in his poems, life is fundamentally amazing, even though—even because—it has an ending. Poets write poems for many reasons, chief among them to express feelings, to articulate the vagaries and fine points of an emotional state. Poets also write to create emotional states in readers, and this Revell poem invites readers to accept death. Without ever forgetting the mortal stakes of every moment, Revell manages to sing joyfully, no matter his subject. He knows deeply what the words have always been telling him: that all our terrors, such as “space and time,” are “inventions / Of sorrowing men”; in this poem, he chooses not to be one.
As a celebratory poet, Revell is in good company: Shakespeare, Donne, Blake, Herbert, Dickinson, and Whitman come to mind as voices playing in the background of “Death.” All these poets revel—a pun on Revell’s name that he seems to have taken seriously—in details and in the capacity of the imagination to elevate them toward a kind of holiness. Of course, many of these poets also had a particular kind of holiness in mind, as does Revell; when he (or the others) uses the word soul, he means it in the Christian sense: the immortal soul that will live eternally in heaven.
MUSIC AS A VOLATILE ART FORM
For a while now I’ve had a theory about a select group of artists who were making music in the 1960s and ’70s. These are musicians who seem related to their time only obliquely: they may have been marked by it, but they were not of it. Other artists’ greatness might lie in their perfectly embodying certain musical directions of the day—the Beatles, for example. These musicians, on the other hand, have inherent greatness; that it might have been expressed in the language of their day is instructive, but ultimately incidental—they were tapping a deeper vein.
Each Weirdo works, if not within the confines of, then at least alongside a given genre. Thus you’ve got blues and jazz Weirdos (Captain Beefheart, Frank Zappa); country Weirdos (Leon Russell, Lee Hazlewood); and pop Weirdos (Randy Newman, Harry Nilsson). There’s no getting around the fact that these are all white men. In emphasizing their whiteness alongside their weirdness, I want to point out a certain self-awareness on their part, particularly when it comes to the use of rock, jazz, and blues—musical forms developed by black musicians.
Nicholson Baker, Substitute Teacher
Baker is often frustrated with the material he’s asked to push on students, and this reaches its peak with a graphic Holocaust documentary called Auschwitz: Death Camp starring Oprah and Elie Wiesel that he shows to successive 10th-grade English classes. Watching piles of bodies, Baker thinks,
I knew that this was the wrong documentary to be showing to a group of choiceless, voiceless high school kids at eight-thirty on a Monday morning, in connection with a compare-and-contrast media-studies assignment … These high schoolers were being tortured to the point of numbness and indifference by gruesome imagery—and the Holocaust was being trivialized through inattention, both at the same time. Why was this happening? Why was I a part of this?
By the end of the video two girls are doing a cheerleader-style H-O-L-O-C-A-U-S-T! chant, and Baker doesn’t know whom to blame.
Classroom technology has changed a lot since Baker last visited—even since I did—and having a substitute isn’t the break for students it used to be if their daily progress is watched by a school iPad. Some teachers take a curatorial approach, delegating a large portion of their pedagogy to instructional apps, YouTube videos, and downloaded worksheets. On day nine Baker hears two teachers discussing iPad assignments when they have subs. “You delete them without reading them?” one asks. “Yes,” the other says. “They don’t do anything anyway.”
A literary guide to hating Barack Obama
Carlos Lozada in The Washington Post:
Secret Muslim. Socialist. Amateur. Anti-American. Criminal.
Throughout the presidency of Barack Obama, and even before it, a chorus of writers has stood stage right, reinterpreting the era but mainly eviscerating the man. Obama, initially little known, became a literary subgenre and publishing obsession, with countless volumes attacking the president, promising to unmask who he really is, what he really thinks and why he does the things he does. And for a while, at least, the books sold well. Selecting a representative set among dozens and dozens of titles in the Obama hatred literature is not easy. Do you go with “Impeachable Offenses” or “The Manchurian President”? “Divider-in-Chief” or “The Obama Nation”? “Culture of Corruption” or “The Roots of Obama’s Rage”? A sample of such books, spanning 2008 to 2016, shows that, while the anti-Obama canon can be predictable, it is by no means static. The aversion to the president is always growing, and the nature of that aversion is always evolving toward harsher conclusions. In the beginning, there was ignorance, and the void of our Obama knowledge was filled with speculation, bits of autobiography and family lore. The senator from Illinois was deemed dangerous for all that he might be: distant, unfamiliar, foreign in so many ways. Once he sat in the Oval Office, however, the attacks shifted, and the president became that most recognizable of political creatures: unprincipled, corrupt, Chicago. As conservative disdain intensified throughout his first term, Obama came to be seen as a bungler, in over his head (think the Libya intervention or Operation Fast and Furious). Yet soon he was redefined once more, this time as a brilliant subversive: It’s not that Obama doesn’t know what he’s doing but that he knows all too well. That leads, inevitably, to the final and most damning judgment — that this president is a criminal.
Donald Trump’s rise in GOP presidential politics has drawn sustenance and inspiration from the anti-Obama literature, regardless of whether its authors support the candidate. Indeed, the arc of Trump’s criticisms of the president, from his birtherism in 2011 to his more recent charge that Obama is “the founder of ISIS,” traces, in a distorted and exaggerated way, these portrayals of the president, from unknown outsider to recidivist lawbreaker. These books and writers do not necessarily agree with one another. But they do build upon each other. And if the 2016 Republican presidential nominee has succeeded in tapping into right-wing anger, it is an anger that has been chronicled, reflected and stoked by the anti-Obama literary canon.
How DNA could store all the world’s data
Andy Extance in Nature:
It was Wednesday 16 February 2011, and Goldman was at a hotel in Hamburg, Germany, talking with some of his fellow bioinformaticists about how they could afford to store the reams of genome sequences and other data the world was throwing at them. He remembers the scientists getting so frustrated by the expense and limitations of conventional computing technology that they started kidding about sci-fi alternatives. “We thought, 'What's to stop us using DNA to store information?'” Then the laughter stopped. “It was a lightbulb moment,” says Goldman, a group leader at the European Bioinformatics Institute (EBI) in Hinxton, UK. True, DNA storage would be pathetically slow compared with the microsecond timescales for reading or writing bits in a silicon memory chip. It would take hours to encode data by synthesizing DNA strings with a specific pattern of bases, and still more hours to recover that information using a sequencing machine. But with DNA, a whole human genome fits into a cell that is invisible to the naked eye. For sheer density of information storage, DNA could be orders of magnitude beyond silicon — perfect for long-term archiving.
“We sat down in the bar with napkins and biros,” says Goldman, and started scribbling ideas: “What would you have to do to make that work?” The researchers' biggest worry was that DNA synthesis and sequencing made mistakes as often as 1 in every 100 nucleotides. This would render large-scale data storage hopelessly unreliable — unless they could find a workable error-correction scheme. Could they encode bits into base pairs in a way that would allow them to detect and undo the mistakes? “Within the course of an evening,” says Goldman, “we knew that you could.” He and his EBI colleague Ewan Birney took the idea back to their labs, and two years later announced that they had successfully used DNA to encode five files, including Shakespeare's sonnets and a snippet of Martin Luther King's 'I have a dream' speech. By then, biologist George Church and his team at Harvard University in Cambridge, Massachusetts, had unveiled an independent demonstration of DNA encoding. But at 739 kilobytes (kB), the EBI files comprised the largest DNA archive ever produced — until July 2016, when researchers from Microsoft and the University of Washington claimed a leap to 200 megabytes (MB).
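To get a feel for the basic idea (though not for the error-correcting schemes the EBI and Microsoft teams actually used), here is a deliberately naive sketch in Python that maps two bits to each nucleotide and back; the function names and the toy message are invented for illustration.

```python
# A naive two-bits-per-base mapping; real DNA storage schemes add
# redundancy and error correction on top of something like this.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn raw bytes into a DNA string, two bits per nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: four nucleotides back into one byte."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"to be or not to be"
strand = encode(message)
assert decode(strand) == message
print(strand[:24] + "...")
```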
Mornings at Seven
Wild geese stir in the early morning calm
with the ripple of their wake.
near the shore’s arm of dune that holds the pond,
a kayak glides,
someone seeking peace
and looking up to find it in the sky.
A sudden commotion of the water at my shore!
Two swimmers diving in together
side by side exactly.
Man and woman—
I can see the sickle-splash of arms and legs in ardent crawl,
and the watery tumult of pumping feet.
But more, there
is a joyous energy of purpose in the two of them,
And a determination to be swimming side by side,
so that in coming up for air, their eyes can meet.
The seriousness of their purpose shouts to heaven,
and gives this pond and sky
a grounding and a glory,
announcing that their heading out, together, side by side,
is no more the single purpose of their beings,
than is the night of sleeping side by side.
And they have found that that’s the simple whole of it.
by Peggy Freydberg
from Poems from the Pond
Hybrid Nation, 2015
Tuesday, August 30, 2016
Eric Loomis in the Boston Review:
At least since the passage of California’s Proposition 13 in 1978—in which property owners voted to halve their property taxes—the United States has struggled with an anti-tax mentality revolving around the belief that government is ineffective. That sentiment is nowhere so clearly expressed as in wingnut Grover Norquist’s famous dictum that government should be small enough to drown in a bathtub. Indeed, the right’s efforts to starve government of the level of resources necessary for competent functioning have made a self-fulfilling prophecy of the claim that government is moribund.
Daniel L. Hatcher’s The Poverty Industry exposes one way that states have responded to the anti-tax climate and diminishing federal funds. Facing budget crises but reluctant to raise taxes, many state politicians treat federal dollars available for poverty-relief programs as an easy mark from which they can mine revenue without political consequence. They divert federal funding earmarked for social programs for children and the elderly, repurposing it for their general funds with the help of private companies that in effect launder money for them. A law professor at the University of Baltimore who has represented Maryland victims of such schemes, Hatcher presents a distressing picture of how states routinely defraud taxpayers of millions of federal dollars.
This is possible because there is a near-total absence of accountability for how states use federal money intended to fight poverty. Remarkably, states do not even have to pretend to have used all the funds for the stated purpose; they are only required to show that they are taking care of the populations for which the funds were intended.
The Simple, Elegant Algorithm That Makes Google Maps Possible
Michael Byrne in Motherboard:
Algorithms are a science of cleverness. A natural manifestation of logical reasoning—mathematical induction, in particular—a good algorithm is like a fleeting, damning snapshot into the very soul of a problem. A jungle of properties and relationships becomes a simple recurrence relation, a single-line recursive step producing boundless chaos and complexity. And to see through deep complexity, it takes cleverness.
It was the programming pioneer Edsger W. Dijkstra that really figured this out, and his namesake algorithm remains one of the cleverest things in computer science. A relentless advocate of simplicity and elegance in mathematics, he more or less believed that every complicated problem had an accessible ground floor, a way in, and math was a tool to find it and exploit it.
In 1956, Dijkstra was working on the ARMAC, a parallel computing machine based at the Netherlands’ Mathematical Center. It was a successor to the ARRA and ARRA II machines, which had been essentially the country’s first computers. His job was programming the thing, and once ARMAC was ready for its first public unveiling—after two years of concerted effort—Dijkstra needed a problem to solve.
“For a demonstration for noncomputing people you have to have a problem statement that non-mathematicians can understand,” Dijkstra recalled in an interview not long before his 2002 death. “They even have to understand the answer. So I designed a program that would find the shortest route between two cities in the Netherlands, using a somewhat reduced road-map of the Netherlands, on which I had selected 64 cities.”
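The algorithm itself fits in a few lines. Here is a minimal modern sketch in Python (Dijkstra's 1956 program obviously looked nothing like this, and the cities and distances below are invented for illustration, not his 64-city map):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source` in a graph given as
    {node: [(neighbour, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbour, weight in graph.get(node, []):
            candidate = d + weight
            if candidate < dist.get(neighbour, float("inf")):
                dist[neighbour] = candidate
                heapq.heappush(queue, (candidate, neighbour))
    return dist

# A toy road map with made-up distances in kilometres.
roads = {
    "Rotterdam": [("Utrecht", 57), ("Eindhoven", 110)],
    "Utrecht": [("Groningen", 195), ("Eindhoven", 90)],
    "Eindhoven": [("Groningen", 260)],
    "Groningen": [],
}
print(dijkstra(roads, "Rotterdam"))
```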
Nicole Eisenman and the Resurrection of Figuration
Morgan Meis in The Easel:
The contemporary painter Nicole Eisenman tells a rather moving story about winning a MacArthur “genius” grant in the late summer of 2015. She went to a quiet place and wept. Similar experiences have, no doubt, beset many MacArthur recipients. The grant is a crowning glory to an artist’s career, conveying recognition at the highest level along with no small amount of legal tender ($625,000 as of last year). You too would probably cry.
It should also be said that, for Eisenman, the tears were related to art, and to painting in particular. That’s because Eisenman has, for many years now, been making paintings that you wouldn’t necessarily expect to meet the favor of critics, curators, and academics. Since those are the sorts of folk who act as judges at the MacArthur Foundation, it seemed a safe bet that Nicole Eisenman wasn’t going to be in the running. Why is this? Mostly, it is because Eisenman adopts a cartoony painting style and a light, joking attitude on many of her canvases (though by no means all). Take, for instance, a painting called The Session, from 2008.
Stylistically, the painting verges on being a panel from a cartoon strip. A figure resembling Eisenman herself reclines on a couch at her analyst’s office. She has dirty bare feet and a hole in her pants. She clutches desperately at a box of tissues as she weepingly shares tales of woe to her analyst, who jots down notes in a chair nearby. A vase near a bookcase at the left side of the painting is shaped like a phallus. It is a cute and gently self-mocking painting, but not obviously the stuff to put the contemporary art world on notice.
On second glance, however, even a relatively “light” painting like The Session is making a strong argument about what painting can and should be.
See real democracy in action – how Chinese third-graders elect a class monitor
What became of the Christian intellectuals?
The terms “nativism,” “reactionary,” even “fascism” appear in political conversation with increasing regularity. Though few of these leaders profess deep religious commitments, their popularity seems driven in significant part by religious ressentiment — an awareness of the decline of Christian (or “Judeo-Christian”) civilization and a determination to arrest and, if possible, reverse that decline.
Political liberals who long expected to live in an increasingly liberal world may find themselves disoriented by these manifestations, whose nature they are ill prepared to understand, and they certainly wish such “forces of reaction” would just go away. But these forces will not go away. If we were to wish for something less fantastic than the disappearance of our political opposites, we might think along these lines: It would be valuable to have at our disposal some figures equipped for the task of mediation — people who understand the impulses from which these troubling movements arise, who may themselves belong in some sense to the communities driving these movements but are also part of the liberal social order. They should be intellectuals who speak the language of other intellectuals, including the most purely secular, but they should also be fluent in the concepts and practices of faith. Their task would be that of the interpreter, the bridger of cultural gaps; of the mediator, maybe even the reconciler.
Half a century ago, such figures existed in America: serious Christian intellectuals who occupied a prominent place on the national stage. They are gone now. It would be worth our time to inquire why they disappeared, where they went, and whether — should such a thing be thought desirable — they might return.
Death and Doctors' Fears
First, the scary subject of euthanasia. To avoid any misunderstanding: euthanasia, as I am defining it, is the handing or administering of a fatal overdose to a patient by a doctor on the patient’s request. This includes Physician Assisted Suicide. We shall not go into all the terms and conditions attached to such an act here in the Netherlands. Suffice it to say that it is quite a procedure and not something that is arranged overnight or on the whim of a patient or a doctor. In the United States, the administering of a lethal medication by a doctor is never allowed, but under certain conditions Physician Assisted Suicide is allowed in five states—Oregon, California, Washington, Maine, and New Mexico—and may be on its way to legal status in Vermont.
It is often said that it takes courage to perform euthanasia, and a colleague described to me the other day why he finds it so difficult: “It feels somehow as if the very foundation of my existence is being undermined. The thought of it causes an experience of vertigo. A request almost seems to set me dangling above an abyss.”
I find this a very convincing description, because that is precisely what we feel when faced with the possibility of a predetermined, explicitly arranged death. It is a fearful business, but I don’t quite understand what it is we are so afraid of. Being courageous means that you realize the danger of a situation.
Who Is Kim Jong-un?
The pudgy cheeks and flaring hairdo of North Korea’s young ruler Kim Jong-un, his bromance with tattooed and pierced former basketball star Dennis Rodman, his boy-on-a-lark grin at missile firings, combine incongruously with the regime’s pledge to drown its enemies in a “sea of fire.” They elicit a mix of revulsion and ridicule in the West. Many predict that the Democratic People’s Republic of Korea cannot survive much longer, given its pervasive poverty, genocidal prison camp system identified by a UN commission of inquiry as committing crimes against humanity, self-imposed economic isolation, confrontations with all of its neighbors, and its leader’s youth and inexperience. The Obama administration has adopted a position of “strategic patience,” waiting for intensifying international sanctions to force North Korea either to give up its nuclear weapons or to implode and be taken over by the pro-Western government of South Korea.
But North Korea’s other closest neighbors, the Chinese, have never expected the DPRK to surrender or collapse, and so far they have been correct. Instead of giving up its nuclear bomb and missile programs, Pyongyang is by now thought to have between ten and twenty nuclear devices and over one thousand short-, medium-, and long-range missiles, and to be developing a compact warhead that will be able to hit the US mainland.
Does Giftedness Matter?
Scott Barry Kaufman in Scientific American:
The thing is, the whole concept of giftedness was, from the very beginning of its inception, tied to educational outcomes. When Lewis Terman invented the concept, he made giftedness synonymous with high IQ scores (on his own test, of course), and linked it to high achievement (genius). What seems to be going on here (and I document this trend in my book Ungifted), is that a sizable proportion of the gifted and talented community-- mostly clinicians who actually work with such children on a daily basis-- fundamentally conceptualize giftedness as something very different than high achievement, and often also very different from high cognitive ability. Now, don't get me wrong: I could get behind this newer conceptualization of giftedness. What this particular segment of the gifted and talented community seem to be describing as giftedness-- exquisite sensitivity to the environment-- certainly is a particular dimension of human variation that is important, and most certainly has substantial variation, like the rest of human personality differences.
But here's the thing: I think in order for this new conceptualization of giftedness to be tractable, it should have more clearly delineated properties, better measurement, and it should also be more clearly tied to particular educational interventions. What can you specifically do to support children who "experience the world intensely"? How do you identify that unique population in the first place, independent of IQ tests, academic achievement, and other very non-experiencing-oriented assessments? From a scientist's point of view, and even from a pragmatist's point of view, I don't know what to do with this new definition of giftedness. How do you know what other people really feel, or how intensely they feel it? You know your own qualia, and that's it.
Monday, August 29, 2016
Quantitative Measures of Linguistic Diversity and Communication
by Hari Balasubramanian
Of the 7097 languages in the world, twenty-three (including the usual suspects: Mandarin, English, Spanish, various forms of Arabic, Hindi, Bengali, Portuguese) are spoken by half of the world's population. Hundreds of languages have only a handful of speakers and are disappearing quickly; one language dies every four months. Some parts of the world (dark green regions in the map) are linguistically far more diverse than others. Papua New Guinea, Cameroon, and India have hundreds of languages while in Japan, Iceland, Norway, and Cuba a single language dominates.
Why are languages distributed this way and why such large variations in diversity? These are hard questions to answer and I won't be dealing with them in this column. So many factors – conquest, empire, globalization, migration, trade necessities, privileged access that comes with adopting a dominant language, religion, administrative convenience, geography, the kind of neighbors one has – have had a role to play in determining the course of language history. Each region has its own story and it would be too hard to get into the details.
I also won't be discussing the merits and demerits of linguistic diversity. Personally, having grown up with five mutually unintelligible Indian languages, I am biased towards diversity – each language encapsulates a unique way of looking at the world and it seems (at least theoretically) that a multiplicity of worldviews is a good thing, worth preserving. But I am sure there are opposing arguments.
Instead, I'll restrict my focus to the following questions. How can the linguistic diversity of a particular region or country be numerically quantified? How do different parts of the world compare? How to account for the fact that languages may be related to one another, that individuals may speak multiple languages?
In tackling these questions, my primary source and guide is a short paper published in 1956 by Joseph Greenberg. Greenberg's main goal was to create objective measures that could, in the future, be used "to correlate varying degrees of linguistic diversity with political, economic, geographic, historic, and other non-linguistic factors." His paper proceeds from the assumption that linguistic surveys have been conducted and data on what people consider their mother tongue/first language, the number of speakers of each language, vocabulary etc. are already available. Ethnologue is an example of such a global survey.
The Linguistic Diversity Index
The most basic measure Greenberg proposed is the now widely used linguistic diversity index. The index is a value between 0 and 1. The closer the value is to 1, the greater the diversity. The index is based on a simple idea. If I randomly sample two individuals from a population, what is the probability that they do not share the same mother tongue? If the population consisted of 2000 individuals and each individual spoke a different language as their mother tongue, then the linguistic diversity index would be 1. If they all shared the same mother tongue, then the index would be 0. If 1800 of them spoke language M and 200 of them spoke N, then the index would be:
1 – (1800/2000)² – (200/2000)² = 0.18
In the above, (1800/2000) is the probability that a randomly picked individual speaks M as their first language/mother tongue. And (1800/2000)² is the probability that two randomly picked individuals both speak M. Similarly, (200/2000)² is the probability that both the randomly picked individuals speak N as their mother tongue. When we subtract these squared terms from 1, what remains is the probability that the two randomly sampled individuals do not share a mother tongue. In this particular example, the index of 0.18 is low because of the dominance of M.
If there are more than two languages the procedure is the same. You would have one squared term that needs to be subtracted for every language. In a population of 10,000 where 10 languages are spoken and each language is considered a mother tongue by exactly 1000 speakers, the index would be:
1 – 10 x (1000/10,000)² = 0.9.
This high value reflects both the number of languages and how evenly distributed they are in the population.
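For readers who like to compute, here is a minimal sketch of the index in Python (the function name and sample data are my own, not Greenberg's); it reproduces the two toy examples above.

```python
from collections import Counter

def diversity_index(mother_tongues):
    """Greenberg's linguistic diversity index: the probability that two
    randomly chosen individuals do not share a mother tongue."""
    counts = Counter(mother_tongues)
    total = sum(counts.values())
    return 1 - sum((n / total) ** 2 for n in counts.values())

# 1800 speakers of M and 200 of N: a low-diversity population.
print(diversity_index(["M"] * 1800 + ["N"] * 200))                         # ≈ 0.18
# Ten languages with 1000 speakers each: high, evenly spread diversity.
print(diversity_index([f"L{i}" for i in range(10) for _ in range(1000)]))  # ≈ 0.9
```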
In fact, there are fifteen countries whose linguistic diversity exceeds 0.9 (based on Ethnologue data). The list is dominated by 11 African countries, with Cameroon at number two. India, whose linguistic diversity I experienced firsthand for twenty years, is at number 13. Two Pacific island nations – Vanuatu and Solomon Islands: small islands these, and yet so many languages! – are in the top 5. First on the list is Papua New Guinea, whose 4.1 million people speak a dizzying 840 languages! The country's index of 0.98 means that each language has about 5000 speakers on average and that no language dominates as a mother tongue.
In his book The World Until Yesterday, Jared Diamond, who did a lot of his fieldwork and research in New Guinea, has this startling anecdote:
"One evening, while I was spending a week at a mountain forest campsite with 20 New Guinea Highlanders, conversation around the campfire was going in several different local languages plus two lingua francas of Tok Pisin and Motu…. Among those 20 New Guineans, the smallest number of languages that anyone spoke was 5. Several men spoke from 8 to 12 languages, and the champion was a man who spoke 15. Except for English, which New Guineans often learn at school by studying books, everyone had acquired all of his other languages socially without books. Just to anticipate your likely question – yes, those local languages enumerated that evening really were mutually unintelligible languages, not mere dialects. Some were tonal like Chinese, others were non-tonal, and they belonged to several different language families."
How different from what the majority of us are used to!
While New Guinea's linguistic diversity is widely recognized and not in doubt, its high language count and the rampant multilingualism that Diamond observed nevertheless lead us to two flaws in the linguistic diversity index.
The first flaw is that the index assumes languages are well defined, mutually exclusive units. It ignores the relatedness between languages and the fact that a dialect may be arbitrarily called a language. What of cases where there is close relatedness and even mutual intelligibility, for example between Hindi and Urdu, or between Spanish and Italian? And what to make of those cases where two dialects may well be closely related, but nevertheless are mutually unintelligible when spoken? Further, the language question seems loaded with questions of identity and politics. Apparently there is a running joke among linguists: "A language is a dialect backed by an army and a navy."
To partially address this, Greenberg -- who recognized these problems, and was well aware of the difficulties of distilling complex language realities into quantitative measures -- suggested that the resemblance between languages or dialects could be numerically quantified by a value between 0 and 1. This is what I understood from his paper: take the combined current vocabulary of a pair of languages and calculate the proportion of words that are common to both languages in relation to the total list of words. This proportion gives us an approximate measure of resemblance. A resemblance close to 1 means that the two languages are virtually identical, and a resemblance close to 0 implies an almost total lack of relatedness.
The resemblance can then be used to adjust the linguistic diversity index. Suppose there are three languages M, N and O spoken by 1/8th, 3/8th and 1/2 of the population and suppose the resemblance between [M, N], [M, O], and [N, O] is 0.85, 0.3 and 0.25. The unadjusted linguistic diversity index is 0.593. If we adjust for resemblance, this value drops to 0.381 -- diversity is not as high as it originally seemed. I have explained the calculations at the end of the piece.
The second flaw in the index is that, by considering only an individual's mother tongue, it ignores multilingualism. As Diamond's New Guinea anecdote shows, a high linguistic diversity does not necessarily represent a lack of communication. The examples of Indonesia, India and the many countries of Africa show that it is possible to communicate in some common languages, lingua francas that span large parts of the population, while yielding space to local mother tongues. So a different kind of measure is required.
Index of Communication
To accommodate multilingualism, Greenberg proposed the index of communication. As before, the index is a value between 0 and 1. A value close to 1 indicates high communicability and a value close to 0 indicates the opposite. If I randomly pick two individuals in a population, and each individual speaks one or more languages, then what is the probability that the individuals share at least one language in common? To ensure communicability, only one language has to overlap. (This index too has its problems. One flaw is that it ignores how well an individual speaks a particular language – something that might be hard to elicit in a survey. Another is how to set the threshold of communicability - is knowing a few basic words sufficient?)
Consider the simplest case where a population speaks only two languages, M and N. Using a census, you can calculate the proportion of the population that speaks M only, N only, and is bilingual in M and N. Suppose those proportions are 0.5 (speak M only), 0.3 (speak N only) and 0.2 (speak both M and N). To calculate the index of communication, I simply subtract the cases where the two individuals cannot understand/communicate with each other, which happens when the first individual speaks only M and the other only N, and vice-versa:
1 – [0.5 x 0.3] – [0.3 x 0.5] = 0.7
The same idea can be extended to more than two languages.
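Here too a small Python sketch may help (again, the function name and the repertoire representation are my own choices, not Greenberg's): each group of speakers is keyed by the set of languages they speak, and we subtract the probability that two random picks share no language.

```python
def communication_index(groups):
    """Greenberg's index of communication: the probability that two randomly
    chosen individuals share at least one language.  `groups` maps each
    language repertoire (a frozenset of languages) to the fraction of the
    population that speaks exactly those languages."""
    no_common_language = 0.0
    for repertoire_a, share_a in groups.items():
        for repertoire_b, share_b in groups.items():
            if not repertoire_a & repertoire_b:  # no language in common
                no_common_language += share_a * share_b
    return 1 - no_common_language

# The two-language example from the text: 50% speak only M, 30% only N,
# and 20% are bilingual in M and N.
population = {
    frozenset({"M"}): 0.5,
    frozenset({"N"}): 0.3,
    frozenset({"M", "N"}): 0.2,
}
print(communication_index(population))  # 0.7
```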
I'll try to illustrate the index with a personal example. The engineering college I attended in the south Indian city of Trichy had students from all parts of the country. At the time, the college was called Regional Engineering College (REC); it is now the National Institute of Technology. There was one REC in each major Indian state. The RECs had a unique admission policy. Half of the engineering students admitted each year were from the local state – in the case of Trichy, the home state was Tamil Nadu – and the remaining half were from outside the state. The more populous states, such as Uttar Pradesh and Bihar, got more students, but even far-flung parts such as the Northeast and Kashmir had some representation.
In my first year, all 400-odd male engineering students were packed into the same hostel (dormitory), with 5 students sharing a room. In what seemed like a deliberate policy of integration, the students were assigned rooms so that 2-3 of the students were from Tamil Nadu and each of the others was from a different state. Since states in India are organized along linguistic lines, you had 3-4 mother tongues in each room. In the corridors you could hear the two dozen major languages of India.
Despite all this diversity, communication was never a problem. Among the North Indians almost everyone knew Hindi and so Hindi was the bridge between mother tongues. The local state students – they were colloquially called Tambis by the North Indians – spoke Tamil but did not understand Hindi and were even hostile to it (even today, the Indian prime minister Narendra Modi's emphasis on Hindi annoys my Tamil friends). But all students, whether North Indian or Tamil, had some working knowledge of English – the language of the textbooks, which everyone aspired to speak well if only to get access to good jobs after graduation. So English – however grammatically inaccurate or spotty – was the bridge between the locals and the North Indians.
If I randomly sampled two individuals from that student population of 400, then there is a good chance that the two students would have different mother tongues (high linguistic diversity), but due to multilingualism they would have at least one language in common. So the index of communicability was essentially 1, if we ignore the question of proficiency.
My own case was somewhat different but by no means unique. Although I was born with Tamil as my mother tongue, I had lived mostly in West and Central India and had picked up Hindi, Gujarati and Marathi socially (the last two have dropped off due to lack of practice). I applied to college as an out-of-state student, but was really returning to my home state. In Trichy, I could communicate in Tamil with all the local students. Indeed, my colloquial command of Tamil – all the bad words included – went up! With everyone who was not from Tamil Nadu, I used mostly Hindi or English. I learned, to my surprise, that my ability in conversational English was poor, because I'd never really spoken it socially.
The college experience I've described applies more generally. Many parts of India are like this: different language communities live together in cities and along borders between states and multilingualism facilitates communication.
To summarize, Greenberg's two indices capture contrasting aspects of language reality in a population. The diversity index captures the number of mother tongues and how evenly represented they are in relation to each other, while the index of communication captures how connected a population is.
In theory, a population could retain its linguistic diversity while also maintaining a high index of communication, essential in a globalized world. In practice, however, a worldwide rise in communication appears to be happening at the expense of linguistic diversity, with hundreds of languages in Australia, North America, Central and South America losing ground quickly. Africa is the only continent bucking the trend. India's twenty-odd major languages are still doing quite well, but many of its numerous other languages are not – check out these podcasts (1 and 2) by Padmaparna Ghosh and Samanth Subramanian on the challenges of linguistic surveys and the inevitability of language loss.
Finally, here are brief notes on two different countries: Mexico and United States. I've had a long-standing interest in both these countries. Drawn to its pre-Columbian indigenous past, I traveled to Mexico six times – from Chiapas to Oaxaca in the south, to Michoacán and Mexico City in the center, to Chihuahua in the north. The United States, meanwhile, has been home for the last 16 years.
In the last section of his paper, Greenberg demonstrates how his two measures – linguistic diversity index and the index of communication – stack up when it comes to the 31 states of Mexico, and Mexico as a whole. To do this, he used bilingual data from a census in 1930. Like so many parts of the world, Mexico had hundreds of indigenous languages, which began to decline after the Spanish conquest of Mexico in 1521.
In Greenberg's calculation, Mexico's linguistic diversity index (unadjusted for resemblance) was 0.31 in 1930 while its index of communication was 0.83. Among individual states, though, there was a great deal of variation. The federal district (DF – Distrito Federal), which includes the highly populous Mexico City, had a much lower linguistic diversity index of 0.12 while its index of communication was 0.99 – virtually 1, which makes sense because Spanish is indispensable in the capital. The state of Oaxaca, which I have visited twice recently and where indigenous groups have a strong presence, had the highest linguistic diversity index of 0.83. In Greenberg's data, Oaxaca's index of communication of 0.47 was the lowest in Mexico.
But this was in 1930; I am sure things have changed in the last 86 years towards greater communicability and lower diversity as Spanish continues to be dominant. According to Ethnologue, Mexico's language count is 290 but its diversity index is down to 0.11. Most likely – this is a guess – its index of communication, which was already 0.83 in 1930, is well over 0.9 now.
According to Ethnologue, the US has 430 languages, of which 219 are indigenous and 211 immigrant. North America before European settlement had hundreds of indigenous languages from different families. California was one of the most linguistically diverse places in the Americas, with around 70-80 languages from as many as 20 language families.
Because of the sustained ethnic cleansing that happened after European arrival, the vast majority of American Indian languages are now teetering on the brink of extinction. English is dominant, which explains the country's relatively low linguistic diversity of 0.34. English is also why the United States' index of communication is likely to be very high – above 0.9 if not close to 1 (this is a guess and is not based on data). Today an American Indian who speaks, say, Navajo or Cherokee, can communicate in English with a recently naturalized Indian-American whose original mother tongue was, say, Telugu.
Despite English's dominance, the United States does have a certain linguistic richness to it, thanks to immigrants (citizens or not) who have come from all the other continents to make a living here. By some estimates 800 languages are spoken in New York City!
Reference and Footnotes
1. Greenberg, Joseph H. "The measurement of linguistic diversity." Language 32.1 (1956): 109-115.
2. Lewis, M. Paul, Gary F. Simons, and Charles D. Fennig (eds.). 2016. Ethnologue: Languages of the World, Nineteenth edition. Dallas, Texas: SIL International. Online version: http://www.ethnologue.com.
3. Greenberg's adjustment for resemblance between languages: Suppose there are three languages M, N and O spoken by 1/8th, 3/8th and 1/2 of the population and suppose the resemblance between [M, N], [M, O], and [N, O] are 0.85, 0.3 and 0.25. Then the linguistic diversity index adjusted for resemblance is:
1 – (1 x 1/8 x 1/8) – (1 x 3/8 x 3/8) – (1 x 1/2 x 1/2)
– (0.85 x 1/8 x 3/8) – (0.85 x 3/8 x 1/8)
– (0.3 x 1/8 x 1/2) – (0.3 x 1/2 x 1/8)
– (0.25 x 3/8 x 1/2) – (0.25 x 1/2 x 3/8)
The first line is exactly the linguistic diversity index we have already seen, without adjusting for resemblance. There are 3 languages so one squared term for each language. Each term calculates the probability that both randomly picked individuals speak the same language. There is a multiplier of 1 since the resemblance of a language to itself is 1. If we used only the first line, we would get an unadjusted linguistic diversity index of 0.593.
The next 3 lines take care of relatedness between language pairs. The second line calculates the probability that the first randomly picked individual speaks M and the second speaks N, and vice versa. The multiplier of 0.85 indicates that there is a high resemblance, therefore speaking M and N should be treated (almost) like speaking the same language. Lines 3 and 4 do the same for language pairs [M, O] and [N, O] and the respective resemblance multipliers are used. In the end the adjusted diversity index gives us a value of 0.381, significantly lower than the unadjusted value of 0.593.
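For completeness, here is how I would code the adjusted index (a sketch under my reading of Greenberg's formula; the function and variable names are mine). With the resemblances set to zero it returns the unadjusted figure, and with the values above it lands near the adjusted one.

```python
def adjusted_diversity(shares, resemblance):
    """Greenberg's diversity index adjusted for resemblance between languages.
    `shares` maps language -> fraction of speakers; `resemblance` maps a
    sorted pair of distinct languages -> a value in [0, 1].  A language's
    resemblance to itself is taken to be 1."""
    index = 1.0
    for a, p_a in shares.items():
        for b, p_b in shares.items():
            r = 1.0 if a == b else resemblance[tuple(sorted((a, b)))]
            index -= r * p_a * p_b
    return index

shares = {"M": 1 / 8, "N": 3 / 8, "O": 1 / 2}
resemblance = {("M", "N"): 0.85, ("M", "O"): 0.3, ("N", "O"): 0.25}
print(adjusted_diversity(shares, dict.fromkeys(resemblance, 0.0)))  # unadjusted, ≈ 0.59
print(adjusted_diversity(shares, resemblance))                      # adjusted, ≈ 0.38
```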
4. The beautiful Indian language tree illustration is by Minna Sundberg.
Wide Awake with Isabel Hull
by Holly A. Case
It was from Isabel Hull that I learned what tu quoque means, and how important it is to know. Hull is a professor of German history at Cornell, where I have also taught. Once I invited her to a class to talk about the British blockade of Germany during the First World War. She explained how the Germans had made war by invading neutral Belgium in 1914, knowing full well they were breaking international law. The title of her latest book, A Scrap of Paper (2014), alludes to the phrase that the German chancellor used to describe the international agreement governing Belgium's neutrality: it meant that little to him.
Hull described to my class the blockade's origins, what the Germans had thought and done, what the British were thinking, how they reached the decision to initiate the blockade, and what its likely impact was. But one concept stood out and remained a topic for discussion for the rest of the semester, even finding its way onto the final exam: it was the Latin phrase tu quoque. A literal translation of the phrase is "you also." Tu quoque is a rhetorical strategy whereby, instead of arguing directly against the claim of your opponent, you challenge their right to make an argument by charging them with hypocrisy. For example: the British government asserts that Germany violated international law by invading neutral Belgium and persecuting its inhabitants. The German government retorts that the British government itself is in breach of international law for having subsequently initiated a naval blockade against Germany, cutting off not only its supply of raw materials, but also (potentially) food to civilians.
The tu quoque is as old as the hills. Cicero used it to win a case in the trial of the exile Ligarius: "You are accusing one who has a case, as I say, better than your own." The Nazis were especially adept at deploying it. In 1942, the Nazi propaganda minister Joseph Goebbels confided to his diary: "The question of Jewish persecution in Europe is being given top news priority by the English and the Americans…We won't even discuss this theme publicly, but instead I gave orders to start an atrocity campaign against the English on their treatment of Colonials." There have been countless examples of tu quoque since. The Soviets countered American claims of human rights abuses with the phrase "And you are lynching negroes," which has its own entry on Wikipedia. Some Turkish scholars have used tu quoque to argue against claims that the Ottoman Empire instigated a genocide against the Armenians in 1915: "No nation is innocent. [T]hough the West has always accused the rest of the world of not being civilized enough, no other nations can be compared with the Germans, French, or Americans if we are talking about racism, fascism, and genocide."
In logic, the tu quoque is considered a fallacy, because it does not actually controvert the original statement. If anything, it confirms the moral valence of wrongdoing, declaring: Yes, I have done wrong, but so have you.
My personal favorite among Hull's books is titled Absolute Destruction, which lends a helpful aura of dead earnestness to any faculty office. Visitors' eyes invariably fall on the title: "Absolute Destruction?" they ask. "Yes," I reply, with deadly earnest glee.
Absolute Destruction shows with great clarity, precision, and, above all, evidence how the institutional culture of Imperial Germany's military leaked into its statecraft, with devastating effect. In the book and a related article, Hull argues that the German understanding of "military necessity" that emerged during wars in Europe and German Southwest Africa in the late nineteenth century—an understanding that had grown increasingly impervious to the influence of either politics or diplomacy—gave rise to the "final solutions" of the twentieth century.
The book that inspired Hull to become a historian was Konrad Heiden's Der Fuehrer: Hitler's Rise to Power (1944), which came into her hands when she was twelve. It's a six-hundred-page, ultra-detailed history of Bavarian local politics during the Nazi takeover. Although she has never written on the Nazis directly, it doesn't take a very discerning reader to detect their shadow in the background of her work. She told me that what she remembers about Der Fuehrer is Heiden's description of "why a bunch of people would turn away from democracy," a possibility she had hitherto considered unthinkable.
I once ran into Hull in the mailroom, cursing at the copier. When I asked what she was working on, she told me she was reading for an article on Carl Schmitt, a twentieth-century German legal scholar whose work provided legal justification for the Nazis' suspension of the German constitution in 1933. Schmitt is frequently assigned in upper-level university courses; left-leaning scholars and students are drawn to his lucid critique of liberal hypocrisy. Yet I had noticed that whenever Schmitt's name came up at department events, my colleague reacted with unconcealed agitation. So when she told me she was writing about Schmitt, I was intrigued.
Schmitt suffered from a common malaise of many modern German intellectuals, she explained, who tended to reverse-engineer the premise of an argument from their desired outcome. They did not think and write in order to figure something out, but in order to justify something they either wanted to do or had already done. (A disturbingly fine example is Thomas Mann's Reflections of a Nonpolitical Man, first published in 1918. It's a retrospective intellectual/spiritual justification for Germany's involvement in the Great War, tacitly directed against Mann's own progressive brother Heinrich.)
Recently I read Hull's article on Schmitt, which focuses on the jurist's "pattern of argumentation." She writes that Schmitt was not a tu quoque man. Having recognized that the tactic did not serve Germany well at the postwar treaty negotiations, he favored another, much more radical mode of argumentation that went far beyond the aim of undermining the right of the accuser to judge the accused. His argument completely reversed the Allies' assertions that Germany was a megalomaniacal belligerent. It was not Germany, Schmitt insisted, but "Anglo-Saxonia" that had sought world domination with its "fake, universal international law." And it was not Germany, but the British who made "total war" with their blockade. In fact, the whole of international law was naught but a cover for Anglo-American imperialism. Norms themselves are always ideological, Schmitt concluded, "abstractions that obscure the facts of power."
Meanwhile, to retrospectively justify the Germans' invasion of neutral Belgium, Schmitt defined a "Notstand" (state of necessity). What made the Notstand exceptional was that it was not predicated on any rights possessed by others, nor on any duties or limitations on one's own comportment: it was unapologetically unilateral. Insofar as it took issue with the entire premise of the rights of others and espoused self-interest (realism) as the highest, indeed the only ideal in international relations, it was impervious to counter-arguments that appealed to fair play and international law.
A Scrap of Paper was published in 2014, at roughly the same time as Christopher Clark's Sleepwalkers: How Europe Went to War in 1914. It is difficult to imagine two more dissimilar works of scholarship. Whereas Hull argues that German militaristic belligerence and deliberate disregard for international law led to the outbreak of the Great War, Clark does not assign blame, but rather focuses on the misperceptions of Europe's leading men—the "Sleepwalkers" of the title—in the months and weeks leading up to the war. Clark's work enjoys stellar ratings on Amazon, and has won a number of prizes and distinctions. One reviewer wrote of Sleepwalkers that it "deserves to become the new standard one-volume account" of the run-up to the Great War.
Although Hull hadn't read Clark's book when her own was published, in a sense, the "Prologue" of A Scrap of Paper offers a way of reading Sleepwalkers. The prologue is titled "What We Have Forgotten," and is about historians' complicity in effacing Germany's war guilt. She shows how, starting already in 1920, western journalists and scholars copy-pasted what the postwar German government—in its attempts to roll back reparations and undo the punitive Versailles treaties that ended the war—had fed them without probing to see what was left out or interrogating the bias of their sources. The result, she concludes, is a revisionist perception of the war very much like Clark's:
Faced with claims and counterclaims concerning violations of the laws of war, too many historians despair of getting to the bottom of things and making a reasonable judgment. Instead, they refuse to judge; they fall back on the tu quoque defense. That position generally rests on the unspoken (and rarely examined) premise that every violation was equal, that every decision of statesmen or military leaders to break the law was taken for the same reasons, or taken as easily or thoughtlessly, or was arrived at in the same way, following the same procedure, or was justified or explained to themselves or the world with the same arguments, or in the same language. In fact, all these things could, and often did, differ.
In other words, it was not the diplomats and statesmen of 1914 who were "sleepwalkers," but historians.
The last time I visited my colleague at her home, she said that her favorite among the things she's written is a short piece about the ideas of a late eighteenth-century German thinker, Adolph Freiherr von Knigge. In 1788, a year before the French Revolution, Knigge published a book titled Über den Umgang mit Menschen [On Intercourse with People]. Like many of the characters who appear in Hull's books, Knigge's thoughts have been distorted and obscured by both politics and posterity. Unlike most of those other characters, however, Hull clearly has a soft spot for Knigge.
Whereas Absolute Destruction and A Scrap of Paper read like expert exhumations of a mass grave with the object of identifying the perpetrators of a massacre, the piece on Knigge is more like an archaeological excavation of a long-lost treasure. Forensic skill and precision characterize all of Hull's writing, but in the Knigge piece—as in another of her early books, Sexuality, State, and Civil Society in Germany, 1700-1815—she shows us that there are good, smart people buried out there in the past.
She begins with a characterization of Knigge's philosophy as "change through willful individual action." But his was no libertarian manifesto. "It is important for anybody who wants to live in the world with people," Knigge insisted, "to adapt to the customs, tone, and mood of others." This injunction included one's enemies, whom one should treat with "benevolence, objectivity, understanding, [and] care." Above all: "Learn to countenance objection" [Lerne Widerspruch ertragen!]. Although On Intercourse with People was mistaken early on for a self-help book, and savaged by editors in subsequent editions to more closely resemble one, Hull notes: "It does not lay down static rules of comportment, nor does it aim at cynical manipulation of others; rather it seeks to analyze why problems in social communication arise and how one might overcome them." Knigge's "first art" of living was "the art of making oneself understood, thus speaking and writing."
Reading Hull on Knigge is a melancholic enchantment. The Germans come off very badly in her last two books, not because she sees them as an ongoing menace to the world, but because she knows what treasures they destroyed and denied in their own thought in order to become the monsters of the first half of the twentieth century. There is an unmistakable love that emerges from contemplating Intercourse together with Absolute Destruction: "Let go of your desire to rule," wrote Knigge, "to play a brilliant main role." It is as if the poignant crime of Germany's most prominent modern thinkers, from Thomas Mann in Reflections, to Carl Schmitt, to Max Horkheimer, is that they tried to salvage German culture for humanity by defining it in opposition to liberalism. The tu quoque is a way of borrowing liberalism's mores to discredit liberalism, rather than to discredit the act of killing and power politics. Hull's oeuvre shows how German thinkers returned to this cynical reversal again and again, starting in the first half of the nineteenth century, when the liberals skewered and buried one of their own in Knigge. "Thus, liberalism itself destroyed one of the most remarkable sources of liberal thinking in German history."
A few weeks ago Hull told me how she sees Germany now: "It has really, really applied itself to its past, and is critical, insightful, morally scrupulous, and thoroughly admirable in the way that it has looked at itself. It's awake, and I'm filled with admiration for what they've done." As she sees it, today's menaces lie elsewhere, in the demagogic politics (Trump) and policies (drone warfare) of the United States, but also in the militarism and widely imitated authoritarianism of Russia. Just because some political systems and figures rely on the tu quoque instead of critically examining their own past and present policies does not exonerate us from critical self-examination. "Act independently!" exclaimed Knigge. "Do not deny your principles, […] in this way neither your social superiors nor inferiors will be able to withhold their respect."
Two of Knigge's principles were practicality and moderation. When I read this, I recalled another meeting with Hull, this time in her office. One of us was ill, or had been, so we started exchanging self-cures (none of which should be tried at home). There was my diluted hydrogen peroxide solution to address a lingering congestion, which left my olfactory nerves on permanent strike (I don't recall if it had any effect on the congestion). Hull then told me how she had cured herself of crippling fallen arches by forcing herself to walk miles a day in normal shoes all around hilly Ithaca. I countered with more hydrogen peroxide adventures, already feeling a bit like a one-trick pony. She then met me on my own pharmaceutical terrain by describing how she had cured herself of a skin malaise with the help of diluted bleach, and showed me the patch on her shin to prove it. "Completely cured!" she declared, beaming triumphantly.
I folded in awe and admiration, and pushed my metaphorical chips to her side of the table. Since then I don't play that game with her. When it comes to "practicality and moderation," no one can beat Isabel Hull.
Lest I be suspected of making a tu quoque argument here, let me be clear: I'm not. I am fully convinced that Hull practices what Knigge preached. Practicality is not about compromise; it's about efficacy. And moderation is not for the meek; it's for the rigorous.
I Hold Things Up
As a carpenter I learned, before you can leverage things apart
you have to find purchase. You have to have a place where a pry-bar
can be slipped in or driven with a hammer to separate.
That being done, whether by violent or persuasive means,
when two factions have been split
they're easier to manipulate.
These are also political techniques.
They apply as well to sweaty things.
They dictate the tone and conditions of our species' life.
They reach into souls and wrench them.
Though pneumatic they're not ephemeral.
They're tough and mean as muscle.
As a carpenter I also learned
If you set a post upon a solid pier
and brace it well it will never
tilt in glory
it will simply know
I'm here to serve
I hold things up,
end of story.
by Jim Culleny
Personality or Ideology: Which matters most in a political leader?
by Emrys Westacott
In evaluating candidates for political office there are two main things to consider:
a) their ideology–that is, their political views and general philosophy
b) their personal qualities
With respect to ideology, the most important questions one should ask are these:
· Are their beliefs true? (Do they hold correct beliefs on, say, climate change, or on whether a particular policy will increase or reduce poverty, crime, unemployment, pollution, or the likelihood of war?)
· Do I share their values and ideals? (E.g. Are they willing to sacrifice economic growth for the sake of environmental protection (or vice versa)? Where do they stand on issues like gun control, abortion, euthanasia, capital punishment, foreign aid, gay rights, or economic inequality?)
· Whose interests do they represent? (Do they generally favor policies that benefit the rich, the middle class, the poor, employers or workers, corporations or consumers, cities or rural communities?)
Regarding personal qualities, the ones that matter most are:
· knowledge – Are they decently informed about the world and the issues they will be dealing with?
· intelligence – Are they able to understand and think through complex problems?
· wisdom – Are they reasonable? Do they exercise good judgment?
· effectiveness – Do they have the practical skills to realize their goals?
· integrity – Are they truthful? Is what they do consistent with what they say? Are they motivated by a concern for the public good rather than by self-interest?
These personal qualities obviously cannot be possessed absolutely but only to a greater or lesser degree. And they may often conflict. Most politicians who are effective sometimes have to compromise their integrity, and the first compromise is invariably made before they hold office. As the historian George Hopkins (emeritus professor at Western Illinois University) has observed, "all presidents lie for the simple reason that if they didn't, we wouldn't elect them." A candidate who was perfectly truthful would be ineffective because they would probably never get the chance to implement any of their ideas.
Effective governance may also require leaders to lie, mislead, hide the truth, and break promises. Franklin Roosevelt was by any account a highly effective president; but in the two years prior to Pearl Harbor, he consistently told the American public that he was fully committed to keeping the US out of any foreign wars while simultaneously, and secretly, preparing the country for war against Japan and Germany. The political leaders we are most inclined to venerate are those like Lincoln or Mandela who, in addition to possessing the other qualities listed above, somehow manage to be practically effective with minimum loss of integrity.
Many people on both sides of the political spectrum downplay the importance of the personal in politics. Marxists, for instance, typically focus on the ideological outlook of political parties and candidates. From this perspective, paying attention to the personal–whether someone is a good parent, kind to animals, or someone you'd like to have a beer with–is to be distracted by irrelevancies. Obsessing over personal narratives is seen as one of the ways the media trivialize politics and deflect attention away from the substantive issues at stake. What really matters is the objective question: whose interests does a politician represent and serve?
On the right also, ideology often takes precedence. Consider the pledge made by all the presidential candidates in the Republican primary earlier this year: "I _________ affirm that if I do not win the 2016 Republican nomination for president of the United States, I will endorse the 2016 Republican presidential nominee regardless of who it is." (italics added) Back in March 2016, the eventual nominee could conceivably have been anyone. But better a white supremacist or a certifiable psychopath in the White House than someone who was not a Republican.
In normal circumstances, I, too, prioritize ideology over personal qualities when deciding whom to support. That's because most political candidates do cross a basic threshold when it comes to knowledge, wisdom, integrity, etc. And once over this threshold, the differences are not usually great. For instance, in 2012, when Obama and Romney were competing for the US presidency, the important differences were ideological. Romney was a reasonably knowledgeable and intelligent person who had proved himself a capable administrator and was not glaringly corrupt. What separated him from Obama were their differences on matters like taxation, welfare, and the environment.
In the 2016 presidential election, however, things are not normal. To be sure, Hillary Clinton and Donald Trump differ in their general political outlook. E.g. Trump, in accordance with the Republican platform, wants to abolish the estate tax–a measure that would materially benefit everyone with estates worth more than $5.4 million ($10.9 million for couples). Democrats, including Clinton, oppose this idea. But this time ideology has to take a back seat.
The reason is simple and, to my mind, obvious. Trump doesn't cross the basic threshold of acceptability when it comes to at least three of the five personal desiderata listed above: viz. knowledge, wisdom, and integrity.
I actually don't worry much about Trump's political philosophy. There are two reasons for this. First, he doesn't really have one. On many issues–e.g. nuclear proliferation; the minimum wage; the national debt–he's taken wildly different positions on different occasions, some of these occasions only minutes apart! He basically says whatever he thinks will secure him some short-term goal, such as winning the Republican primary, or, more commonly, getting lots of people clapping, cheering, and chanting his name. Second, his most notorious proposals–e.g. banning Muslims from entering the country, or returning to the gold standard–simply aren't going to happen in any possible universe.
But I do worry about Trump's obvious personal deficiencies because the power of the presidency makes such a person dangerous. Imagine, just for argument's sake, that sometime, somewhere, people in some benighted American state elected an ignorant, egotistical, mendacious congressman to represent them in the House. How much damage could that individual do single-handedly? The answer is: not much. The presidency is different. Put simply, I'd sooner have a right-wing ideologue as president than someone with a serious personality disorder.
There is a reason why Trump is so lacking in the personal qualities one would hope for in a political leader. He has a fairly severe mental problem. The Diagnostic and Statistical Manual of Mental Disorders (DSM) sets out a list of criteria for judging if someone has narcissistic personality disorder. Here are a few:
· having an exaggerated sense of self-importance
· exaggerating your achievements and talents
· believing that you are "special"
· requiring constant admiration
· being obsessed with fantasies of your success, fame, power, brilliance, sexual prowess, etc.
· having a sense of entitlement
· behaving in an arrogant manner
· taking advantage of others to get what you want
· lacking an empathetic understanding of how others feel
Psychiatrists are not supposed to diagnose people they haven't examined personally. But some have set that rule aside, either because they are so worried about the prospect of Trump gaining power, or because they think he's so clearly pathologically narcissistic that labeling him a narcissist is hardly a risky call. Indeed clinical psychologist George Simon, an authority on the manipulative personality, says that he uses videos of Trump to illustrate various symptoms of narcissism.
Some conservatives who find the Donald distasteful argue that Hillary Clinton's personal failings make her no better than Trump. But this is nonsense. On every count–knowledge, intelligence, wisdom, effectiveness, and integrity–Clinton is in a different league. Of course, when it comes to the last category, integrity, she is eminently criticizable for her opportunism, evasiveness, untruthfulness, and apparent cupidity. But against this one should also set her many years of public service and hard work on behalf of worthy causes. Relative to her peers, Clinton's integrity score is disappointingly average. Trump's is off the chart–at the low end.
To their credit, a few Republicans, like Senators Lindsey Graham and Susan Collins, have publicly said that they will not support Trump. But the majority, even though one assumes they privately believe him unfit for office, either publicly endorse him or maintain a discreet silence. Their position is thoroughly reprehensible, a form of moral treason committed for selfish reasons. A person of Trump's stamp is, as a Washington Post editorial put it, "a threat to the Republic." One can only hope that not only will Trump be handily defeated in November but also that his enablers in the Republican party will eventually suffer the shame their pusillanimity deserves.
Facebook's responsibilities to research subjects
by Libby Bishop
Amid the latest privacy kerfuffle, in which WhatsApp agreed to share users' data with its parent company Facebook, an article published by Jackman and Kanerva in the Washington and Lee Law Review Online that describes new procedures for research review at Facebook could be deemed inconsequential, or at best, ironic. Even readers familiar with the outcry over Facebook's "emotion contagion" experiment might conclude, with boyd (2015), that Institutional Review Boards are not the solution (IRBs are committees that assess the ethics of federally funded research in the U.S.), and move on to the next item in their newsfeed. That would be a mistake, for there is more at stake here. First, Facebook has over 1.6 billion users, all of whom are potentially its research subjects and thus would be affected by these procedures. Second, the authors hope the principles they present will "inform other companies" (Microsoft has also recently formed a review group: https://vimeo.com/134004122). Most important, however, this new system at Facebook provokes urgent questions about the role of review systems in achieving ethical research.
The Facebook contagion experiment
In 2014, researchers at Facebook and Cornell University published research that provided evidence that online social networks can transmit large-scale emotional contagion (Kramer, et al., 2014). The experiment demonstrated that reducing positive inputs to users' feeds resulted in users posting fewer positive and more negative posts; when negative inputs were reduced, the pattern was reversed: there were more positive and fewer negative posts. Kramer et al. emphasised the meaning of their findings: emotional contagion had been shown to occur without face-to-face and non-verbal cues. The change was small but statistically significant. Moreover, the authors pointed out that small changes can have "large aggregated consequences" (the sample size was 689,003), in part because of connections between emotions and off-line behaviour in areas such as health.
The import of the findings was swamped by the ensuing public outcry about the methodology, in particular, the manipulation of users' feeds, and hence emotions, without their consent. But a key question that emerged was the issue of research review: had the project been subjected to any formal ethical review, and if not, why not? Editors of the journal where the article had been published wrote an Editorial Expression of Concern (Verma, 2014) stating that Cornell had confirmed that the research did not fall under the purview of their Human Research Protection Program because the experiment had been done at Facebook and not Cornell. Furthermore, because the research was not federally funded, it was not required to go through an IRB (boyd, 2015).
Facebook's new research review process
Earlier this year, Jackman and Kanerva, two Facebook managers, published an article describing the new research review process now implemented at Facebook (Jackman and Kanerva, 2016). Facebook had announced an internal review process in October 2014, four months after the contagion experiment was published, but no detail about it had been provided. Even though their article does not mention the contagion experiment, it seems highly likely that this new review process is a response to the earlier controversy. Both authors were hired at Facebook in 2015: Kanerva managed an IRB at Stanford University, and Jackman completed her Ph.D. in political science in 2013, also at Stanford.
Their article describes and defends the research review process. Training "related to privacy and security" is deemed central to an effective review process, and three levels are offered: 1) all employees receive mandatory "onboarding" (or "socialization" about data access and privacy); 2) researchers working with data attend "bootcamp"; and 3) members of the research review group (and area experts) must take the human subjects training provided by the National Institutes of Health. Managers of the Facebook research team decide on the appropriate level of review: "expedited" (also called standard) or "extended". Extended review includes the researcher and adds other Facebook experts in law, policy, and related areas. There are no standing external (non-Facebook) members, but they could be called in if needed.
In describing the decision-making procedures, Jackman and Kanerva say "our basic formula is the same as an IRBs [sic]: We consider the benefits of the research against the potential downsides." While stressing repeatedly that there is no "one size fits all" and that every proposal is different, they enumerate four criteria that are taken into consideration for any research. First, "we consider how research will improve our society, our community, and Facebook". Second, any "potentially adverse consequences", such as risks to privacy and security, are taken into account. Third, the research needs to be "consistent with people's expectations". Finally, they take "precautions designed to protect people's information."
Unanswered questions about Facebook's research review process
When a new programme is being introduced, one that has been implemented but probably not yet extensively used, a lack of exhaustive detail can be forgiven. Nonetheless, there are some ambiguities and omissions compared to similar procedures at university IRBs and, in the U.K., research ethics committees.
Jackman and Kanerva conclude their article with "lessons learned" from their experiences at Facebook. They hope these will inform others creating similar review processes. They identify "inclusiveness" as one of these key lessons, saying "including researchers and managers in the deliberations leads to faster turn-around and more informed decision-making". This strongly implies that the managers whose projects are under review sit as members of the group assessing their own projects, but that is not made explicit in their discussion. They say the expert group works by consensus. Does the need for consensus include external members? If external and internal members disagree, there is no indication of how this would be resolved. The authors also mention the existence of a separate "privacy group", but no further detail is provided. What if the privacy and research groups recommend different levels of review?
Concerns about openness, independence, and conflicts of interest
The authors suggest that "openness" is a core value; it is itemised as the second "lesson learned" in the introduction (p445). But oddly, openness is then not mentioned anywhere else in the article. In the conclusion, the second lesson learned has been changed to "inclusiveness" (p456), demonstrated by the fact that activities of the group are accessible. However, this is true only for Facebook employees; there is no mention of any access for customers, users, or the public. Despite the openness claim, names of those in the review group are not disclosed. This is quite different to the practice in universities. The University of Essex Research Ethics Committee (on which I sit) publishes its members' names, and even private U.S. universities, such as MIT, openly publish names.
Such limited openness would be less troubling if it were not accompanied by questions of lack of independence. According to The Wall Street Journal, Facebook managers have authority to approve projects, and sit on decision-making groups. More broadly, the Facebook process has to be regarded as a form of self-regulation, as no external assessment is required at any stage. Again, this differs from at least some university procedures (e.g., University of Essex), where regular external audits are required. Garfinkel understates the situation: "self-regulation does not have a good track record"; the current review regime of the Common Rule and IRBs emerged in response to egregious failures of self-regulation in Nazi Germany, Tuskegee and elsewhere.
Perhaps the most serious concern is that Jackman and Kanerva do not address the possible conflict of interest that members of the review group may face. If the proposed research is ethically dubious yet obviously benefits Facebook financially, how might this be addressed? While "improved products and services" are noted as company objectives, there is no mention of profit, revenue, advertising revenue, etc. If, as seems nearly certain, all of Facebook's employees have incentives (direct or indirect) to enhance its share price, then what structures are in place to ensure research subjects' privacy and other interests are not compromised? As commentators on the WhatsApp controversy noted, Facebook now has numerous examples of putting the "company's needs ahead of its users'", and thus such concerns are not idle speculation.
It is not only the IRBs that need to change
There is a rapidly growing body of literature on how to improve the ethics of big data research (Metcalf, et al., 2016), involving suggestions for how IRBs and researchers should attend to the conceptual gaps that big data present for established ethical frameworks and procedures (Zimmer, 2016). An equally important, but less discussed, point is that accommodation needs to go in the opposite direction as well: big data researchers must acknowledge that although their data may be "big", and take "new and novel forms," the underlying ethical questions have, in all likelihood, already been posed in some form. And effective review processes have protocols and procedures that enable broad debate, and deep moral reflection, about these questions.
I believe this awareness is missing in the process presented by Jackman and Kanerva. One indication is a sentence highlighted above: "our basic formula is the same as an IRBs [sic]: We consider the benefits of the research against the potential downsides." They are referring to one branch of moral theory, consequentialism, which judges moral decisions by their results, their consequences. Cost-benefit analysis is derived from utilitarianism, which is a form of consequentialist moral theory.
Beyond cost and benefit
Jackman and Kanerva make no reference to alternative theories or frameworks that are, in most applied ethics teaching, seen as essential to good moral reasoning. A key omission is that of approaches that emphasise rights and duties (i.e., deontological moral theory). This matters because while utilitarian approaches are most useful for ensuring that acts benefiting the many are suitably weighted, rights-based approaches ensure that the rights of the few are not trampled by the gains for the many.
Jackman and Kanerva are not unaware of other perspectives; they reference publications that draw on such rights-based theories. In footnote 33, they cite a publication by the European Data Protection Supervisor, "Towards a New Digital Ethics: Data, Dignity, and Technology", which proclaims:
"The dignity of the human person is not only a fundamental right in itself but also is a foundation for subsequent freedoms and rights, including the rights to privacy and to the protection of personal data48."
The challenge, then, is how to implement research review so that it brings to bear not only thinking about the costs and benefits of research, but also the fundamental dignity and rights of persons.
To be clear, I am not advocating that research review should be a seminar in philosophical theory (although there are worse ideas). What research review can accomplish is to expose researchers to broader concerns, a variety of disciplinary perspectives, and diverse ethical frameworks and theories, all of which enhance the quality of both the assessment process itself and the decisions reached. Garfinkel explains in more detail:
"IRB-mandated education and training provides many scientists the background information and intellectual framework to make sophisticated ethical decisions—training that many data scientists might not otherwise receive."
It is just this quality of thinking and reflection that might have brought issues such as conflicts of interest to the foreground in the development of Facebook's new process.
The final lesson Jackman and Kanerva present in their article is that of "flexibility". They claim to be committed to incorporating feedback and "improving our research process over time." Whether or not we are among Facebook's 1.6 billion research subjects, let us hope they mean what they say.
* * *
boyd, d. (2015). "Untangling Research and Practice: What Facebook's 'Emotional Contagion' Study Teaches Us". Research Ethics, 12(1), 4-13.
Jackman, M., & Kanerva, L. (2016). "Evolving the IRB: Building Robust Review for Industry Research". Washington and Lee Law Review Online, 72(3), 442.
Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). "Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks". Proceedings of the National Academy of Sciences, 111(24), 8788-8790.
Verma, I. (2014). "Editorial Expression of Concern and Correction". Proceedings of the National Academy of Sciences, 111(29), 10779.
The Culture of Information Technology
by Mathangi Krishnamurthy
"Kar le kar le, tu ik sawaal,
Kar le kar le, koi jawaab,
Aisa sawaal jo zindagi badal de…
[Ask a question,
Try and answer,
The kind of question that will change your life]
It's just a question of a question."
—Title track, Kaun Banega Crorepati
Light bursts forth like rays from the sun. The Indian film star Shahrukh Khan pirouettes across a set, made deliberately larger than life. It is glitzy, neon-inundated, and disproportionate. Women in some form of modernized traditional Indian clothing stand behind the so-called King Khan as he exhorts the audience to ask a question. The irony, of course, is that in this Indian version of "Who wants to be a millionaire?" it is Khan who asks the questions. As he swiftly changes clothes from scene to scene, a rapper in one moment, a suave, sleazy conman of some sort in another, and an overgrown American teen hipster in yet another, his supporting cast ranges from close-cropped, capped rappers to women of unidentified nationality in golden and silver lamé. In another frame, Shahrukh in waistcoat and trousers dances with women in tartan mini-skirts and white shirts. They all gyrate to a catchy tune that repeats the mantra of the one question that can change lives.
Slowly seducing the audience with song and dance, Shahrukh coaxes them into participation, insisting that they must come out with their deepest desires since this opportunity might not arise again. Assuring them that they will win the game, he asks them to strengthen their hopes. He ends with the oxymoronic question "Is a hot chick cool or a cool chick hot?" On the poorly manifested and highly pixellated version that I watch on the Internet, the paucity of this content seems glaringly obvious.
Danny Boyle's film Slumdog Millionaire is set in Mumbai and chronicles the unexpected success of a contestant on Kaun Banega Crorepati, the Indian version of Who Wants to be a Millionaire. A rags-to-riches chronicle of a protagonist called Jamal Malik who wins the game show, the plot is nothing if not predictable. The twists in the plot and the form of resolution are, however, what interest this essay. Jamal is also what Prem, Shahrukh's counterpart in this reel-life version of real life, refers to as a slumdog. By winning the game's prize of one crore rupees, Jamal stands as testimony to what chance can offer even the most underprivileged, as long as they have the hunger to grab it.
In the film, Jamal is an orphan from the slums. The main plot revolves around Jamal's love for his childhood companion, Latika, who was tragically lost to him while escaping child-trafficking slumlords in Mumbai. This plot is furthered through the game show that he accidentally gains access to while working at a call center. Through the questions that he answers correctly, the audience is made privy to the details of Jamal's life, during the course of which he overheard and absorbed information that one would not consider within the sphere of possibility for an underprivileged, lower-class citizen of India. So, for example, Jamal knows that Cambridge Circus is in London because he has absorbed the communication lessons taught in the call center as he serves chai to the "phone-wallahs". Similarly, he knows that the Hindu god Rama holds a bow in his right hand, because he espied a young boy in costume when running away from Hindu rioters who attacked his slum. He also knows that the picture of Benjamin Franklin peeps out of a hundred-dollar note because he was once a guide to American tourists visiting Agra and the Taj Mahal.
Kaun Banega Crorepati has been one of the most successful television serials of the recent past. Running for many consecutive seasons on the channel Star TV, it debuted with much fanfare and employed as its host one of the longest-reigning superstars of the Hindi film industry, Amitabh Bachchan. The serial then not only offered contestants a chance to win large sums of money but also to live out the fantasy of being intimate with a film star. Amitabh Bachchan is no ordinary hero. Tabloids have long sung paeans to this lanky star of unlikely heritage and deep-throated baritone. Born to distinguished parents in northern India, his father the poet Harivansh Rai Bachchan, the Big B, as he is known, was rejected by the film fraternity in his first few years on account of being too lanky and not good-looking enough. Finally making his fortune in the seventies and eighties through a string of films in which he portrayed the angry young man who violently attacks a corrupt system, often at great personal cost and sometimes loss of life, he went on to make some of the highest-grossing films in Indian cinema.
In his current films, he prefers to portray an ageing patriarch seeking to keep together large families of upper-middle-class men and strong yet traditional women who live in castles and travel in helicopters. Bachchan hosted two seasons before ending the contract, which was subsequently offered to Shahrukh Khan, also one of the highest-paid stars in Bollywood history and one known in earlier stages of his career for taking on a plethora of roles, including villains and anti-heroes. Shahrukh Khan, in order to distinguish himself from his superstar predecessor, opted to make himself more accessible to the contestant and audience.
Kaun Banega Crorepati and Slumdog Millionaire share commonalities in the sense that each of these stories is about the very process of the search for information that seeks to combat rapid change. Information is hearsay. It is what we absorb every second of the way as we make our way through life's unrelenting lessons. The body is a receiver and the mind a processor. The atmosphere and the body then replay the articulation between the computer and the data that it is fed. Information is also, ostensibly, the solution to what Richard Sennett has called "the specter of uselessness". However, to recognize true information is not easy; anything could be it. So while information might be a solution, its search is not a solution at all, but merely an activity meant to mimic activity. It is not my intent to say, for example, that Kaun Banega Crorepati or Slumdog Millionaire dictate to young men and women the idea of information technology culture. However, in this long-short account, normality is affectively charged with the power of information technology as a story and a set of possibilities. If one imagines lives as being structured by desire — desire for a "better life", desire to be comfortable, desire to be independent, and desire to escape the very life that offers such possibility — then one must also ask about the ether that produces this form of desire. Perhaps René Girard's theorization of desire as mimetic and contagious may work as a heuristic to understand the relationship between the proliferation of media images around information technology in India and daily experience. For desire in this analysis is very much a densely sedimented body of image, text, discourse, and bodily experience. Those asking questions of the culture of Information Technology, or IT, must then also ask questions of the ways in which forms of work are simultaneously forms of desire.
Nguyen Phan Chanh (1892-1984). Channeling Experience with a Medium, 1931.