“Bees are to hives as neurons are to brains,” says Jeffrey Schall, a neuroscientist at Vanderbilt University. Neurons use some of the same tricks honeybees use to come to decisions. A single visual neuron is like a single scout. It reports about a tiny patch of what we see, just as a scout dances for a single site. Different neurons may give us conflicting ideas about what we’re actually seeing, but we have to quickly choose between the alternatives. That red blob seen from the corner of your eye may be a stop sign, or it may be a car barreling down the street. To make the right choice, our neurons hold a competition, and different coalitions recruit more neurons to their interpretation of reality, much as scouts recruit more bees.
Our brains need a way to avoid stalemates. Like the decaying dances of honeybees, a coalition starts to get weaker if it doesn’t get a continual supply of signals from the eyes. As a result, the brain doesn’t get locked early into the wrong choice. Just as honeybees use a quorum, our brain waits until one coalition hits a threshold and then makes a decision. Thomas Seeley, the Cornell biologist who studies how honeybee swarms choose nest sites, thinks that this convergence between bees and brains can teach people a lot about how to make decisions in groups. “Living in groups, there’s a wisdom to finding a way for members to make better decisions collectively than as individuals,” he said.
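The mechanism described above — rival coalitions gaining noisy support, decaying without fresh input, and committing once one side reaches a quorum — resembles the "leaky accumulator" race models used in decision neuroscience. Here is a minimal, purely illustrative sketch; all the numbers (evidence rates, decay, threshold) are made up for demonstration and are not taken from any study:

```python
import random

def race_to_quorum(evidence_a=0.52, evidence_b=0.48, decay=0.02,
                   threshold=20.0, seed=0):
    """Toy race between two 'coalitions' accumulating noisy support.

    Each step, every coalition gains support in proportion to its
    evidence (like scouts recruiting more bees), loses a fraction of
    its current support to decay (like fading waggle dances), and the
    first coalition to reach the quorum threshold wins.
    """
    rng = random.Random(seed)
    a = b = 0.0
    steps = 0
    while a < threshold and b < threshold:
        a += evidence_a + rng.gauss(0, 0.1) - decay * a
        b += evidence_b + rng.gauss(0, 0.1) - decay * b
        a, b = max(a, 0.0), max(b, 0.0)  # support can't go negative
        steps += 1
    return ("A" if a >= threshold else "B"), steps

winner, steps = race_to_quorum()
print(winner, steps)
```

The decay term is what prevents premature lock-in: a coalition that stops receiving fresh evidence shrinks back toward zero instead of holding an early lead forever.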
Like you, we support calls to dismantle the security state and to promote the rule of law. But we do not see that one set of autocratic structures should be replaced by another which claims divine sanction. And while the overthrow of repressive governments was a victory and free elections are, in principle, a step towards democracy, shouldn’t the leader of a prominent human rights organization be supporting popular calls to prevent backlash and safeguard fundamental rights? In other words, rather than advocating strategic support for parties who may use elections to halt the call for continuing change and attack basic rights, shouldn’t you support the voices for both liberty and equality that are arguing that the revolutions must continue?
Throughout your essay, you focus only on the traditional political aspects of the human rights agenda. You say, for instance, that “the Arab upheavals were inspired by a vision of freedom, a desire for a voice in one’s destiny, and a quest for governments that are accountable to the public rather than captured by a ruling elite.” While this is true as far as it goes, it completely leaves out the role that economic and social demands played in the uprisings. You seem able to hear only the voices of the right wing—the Islamist politicians—and not the voices of the people who initiated and sustained these revolutions: the unemployed and the poor of Tunisia, seeking ways to survive; the thousands of Egyptian women who mobilized against the security forces who tore off their clothes and subjected them to the sexual assaults known as “virginity tests.”
From the response:
Western governments should reject this inconsistent and unprincipled approach to democracy. Human Rights Watch called on Western governments to come to terms with the rise of Islamic political parties and press them to respect rights. As rights activists, we are acutely aware of the possible tension between the right to choose one’s leaders and the rights of potentially disfavored groups such as women, gays and lesbians, and religious minorities. Anyone familiar with the history of Iran or Afghanistan knows the serious risks involved. However, in the two Arab Spring nations that have held free and fair elections so far, a solid majority in Egypt and a solid plurality in Tunisia voted for socially conservative political parties. The sole democratic option is to accept the results of those elections and to press the governments that emerge to respect the rights of all rather than to ostracize these governments from the outset.
There was no Santa Claus in the Sarajevo and Bosnia and Herzegovina of my childhood. The white-bearded fat man who assessed the worth of children’s obedience and brought them presents was called Deda Mraz—Grandpa Frost. Having dispatched his proxies to schools and kindergartens in the preceding weeks, he showed up at your home in person (though always unseen) on New Year’s Eve, at midnight or so, just for you. He was non-denominational and non-ideological and delivered presents to all obedient children regardless of their ethnicity or political convictions. The old man was a civic, communal character, someone everyone waited for and was happy to see. He was welcome before the war, even during the war, but, it turns out, not so much after the war.
In December 2008, for instance, Deda Mraz received a punch in his fat gut from Arzija Mahmutović, who at the time was the director of the Children of Sarajevo, the public institution that operates twenty-four kindergartens in the city. Ms. Mahmutović refused to admit Deda Mraz to any of the kindergartens, because she believed (though she backpedaled some after the local and international outcry) that he had no place in Islamic tradition. She had no problem with parents allowing Deda Mraz to deliver presents to the children at some other place, beyond her righteous reach.
Thus was Deda Mraz cast into the pit of Bosnian politics, undergoing a public humiliation that has become a kind of seasonal tradition since the war. Soon after the end of the war, for instance, Bosnia’s then-president Alija Izetbegović denounced the old man as a Communist fabrication. It must have been the blood-red suit that gave it away.
David Graeber’s Debt is, in the most positive sense, rather an old-fashioned book, in its conception and approach if not in its matey and approachable style. It ignores disciplinary boundaries within the human sciences, especially those between economics, history and social studies, in a manner that recalls polymaths like Max Weber or the free-wheeling early years of political economy with figures like Smith and Malthus. In its search for the connecting thread between an astonishing diversity of cultural practices and texts from across time and space, it resembles the early classics of speculative anthropology – not Malinowski but J.G. Frazer. In its ambition to offer an account of the trajectory of the whole of human history, it undoubtedly runs the risk of being confused with the likes of Jared Diamond or Niall Ferguson, but it strikes me rather as in the vein of Arnold Toynbee, not least in the weight of scholarship that underpins this work of imaginative reconstruction. I feel the need to stress again that I don’t offer these comparisons as a criticism.
Above all, the book’s starting position comes straight from nineteenth-century critical historicism: a sense of the importance of the past in shaping the present. Graeber’s evocation of Nietzsche and his provocative fantasies about debt and sacrifice in Chapter 4 seem to be a deliberate nod to this tradition. In Nietzsche’s account of the modern historical sense, humans are understood as being conscious of themselves as beings within time, who tell stories about the past and its relation to the present as a means of making sense of the world. Such stories, whether primitive myths or modern historiography, are never neutral or value-free descriptions of reality, but are shaped by our desires, and in turn – because we inherit and take for granted most such stories, rather than constructing them ourselves – they shape our conceptions and behaviour. Above all – and this is a point emphasised also by Marx (who plays a conspicuously minor role in Graeber’s book) – these stories serve to legitimise the present, to present it as a natural and inevitable state of affairs.
A right thumb, a finger, a tooth. These were the contents of a reliquary acquired several years ago by a collector at an auction in Florence. Little did he know that for centuries the remains had been objects of profane devotion. Last seen in 1905, they had been sliced from the corpse of Galileo, along with another finger and a vertebra, during his highly publicized reburial in the Basilica of Santa Croce in 1737 almost 100 years after his death, and preserved in a slender case fashioned of glass and wood and crowned with a carved bust of the scientist. The reliquary’s new owner consulted Galileo experts about his find, and after the authenticity of its contents had been verified he donated it to the Museo Galileo, which is tucked behind the Uffizi in a quiet piazza overlooking the River Arno. (A dentist asked by the museum to examine the tooth concluded that Galileo suffered from gastric acid reflux and ground his teeth in his sleep.) The rediscovered reliquary is displayed adjacent to a smaller one containing Galileo’s other finger, a prized museum possession since 1927. Nearby are several artifacts of Galileo’s scientific genius: a telescope presented to the Medici and the broken objective lens of the original device with which Galileo sighted Jupiter’s four satellites in 1610.
“We develop in multi-cultural and multi-religious societies. To say this is to state the obvious. There is no religiously homogeneous society.” Akeel Bilgrami has invited commentary on his recent working paper about the nature and relevance of secularism, in which he advances a central thesis that begins with the conditional phrase, “Should we be living in a religiously plural society.” In this post, I offer a response to his thesis, convinced, like Cardinal Jean-Louis Tauran, author of the quotation with which I began, that there is no such thing as a modern religious monoculture. As president of the Pontifical Council for Interreligious Dialogue, the apparatus of the Catholic Church established after Vatican II to serve as the site of engagement with the followers of other religious traditions, Tauran has something of a professional commitment to pluralism as an ontological category. He gave his 2008 speech on the necessity of cultivating channels of interreligious dialogue at a time when the stock of interreligious dialogue was clearly on the rise. Controversies like those sparked by the Jyllands-Posten cartoons of 2005 and Pope Benedict XVI’s September 2006 lecture on faith and reason, which offended many Muslims by seeming to endorse misleading criticism of Islam, led to a surge in post-9/11 interfaith initiatives. In response to the misunderstandings that informed the Pope’s lecture, 138 global Muslim leaders published “A Common Word Between Us and You” in October 2007, an open letter calling for a common ground of understanding and peace between Muslims and Christians. The same period saw the launch, in 2008, of both Tony Blair’s Faith Foundation and Cardinal Tauran’s initiatives to train clergy for interreligious dialogue in a pluralist world. Global modernity, it is clear, presages neither the necessary rise of a homogeneous consumer culture nor an inevitable decline in the vitality and variety of religious engagement.
more from Justin Neuman at The Immanent Frame here.
In her novels and in her nonfiction essays, Marilynne Robinson’s questions are always roughly the same: Who are we, and where did we come from? The first is a matter of metaphysics, the second of history. At least since the publication of her first collection of essays, The Death of Adam (1998), Robinson has been making it her business to remind us that these questions are not yet settled. We may be descended from apes, but that does not mean that we are essentially apelike. “It has been usual for at least a century and a half to think of human beings as primates,” she writes in her latest collection, When I Was a Child I Read Books, only to add, “I suppress the impulse to say ‘mere primates,’ since I suspect the other members of our great order are undervalued by us in the course of devaluing ourselves.” This is a characteristic Robinson turn—admit the dehumanizing point of your opponent, only to show how deep our humanity goes. When I Was a Child, by far Robinson’s most political work to date, turns her old questions to the problems now directly confronting us. The book is a defense of what she considers the grand traditions of American democracy—generosity, hope, and a radical openness to new experience—waged against a society that seems to believe itself in irreversible decline. At the same time, Robinson registers a profound note of disappointment at feeling, “on the darkest nights, and sometimes in the clear light of day, that we are now losing the ethos that has sustained what is most to be valued in our civilization.”
2012 promises to be one of the most important years in the history of particle physics. The exceedingly talented filmmaker, occasional 3QD contributor and old friend Liz Mermin is making a documentary about what is happening at the LHC at CERN. She has been releasing snippets of the documentary, all worth a look. Check back for regular updates.
I got all teary-eyed around November, 2008, just like every other non-australopithecine American. But unlike most of my co-evolved concitoyens I was not a sucker. I was delighted that we would now have a rational and evidently morally decent person, rather than a cretinous one, volunteering to take on a role that is for structural reasons morally compromising. But I did not think for a second that this was the dawning of some sort of new era. That would be to misunderstand what a president is.
We have what in places like Turkey is lucidly described as a 'deep state' (though in Turkey it's principally the army that one has in mind, while for us it's a more composite beast). The deep state limits drastically what elected officials can do. It is the permanent structure that endures behind the constant electoral spectacle, and it ought to be the only thing of interest to political analysts. Do I blame Obama for the continuation of the Iraq War, the non-closure of Guantánamo, etc.? Just a little bit more than I blame his tailor. For Obama is, as they say, a suit, and many, many people conspire to maintain him as the presentable image of American power. I am incapable of conjuring any commiseration with the conventional liberals who believe disappointment in Obama the person is an appropriate reaction to his record as president.
However little Obama interests me, the current clamoring of the Republican candidates is of an altogether different order of uninterestingness.
What so far distinguishes the revolutionary upsurge that we have been watching across the Arab world from its many predecessors? One of the apparent distinctions is that in Tunisia, Egypt, Bahrain and several other countries, it has so far been largely peaceful: “Silmiyya, silmiyya” the crowds in Tahrir chanted. But so were many of the great Arab risings of the past. These included many episodes in Egypt’s and Iraq’s long struggles to end British military occupation, and those of Syria, Lebanon, Morocco and Tunisia to end that of France, not to speak of the first Palestinian intifada against Israeli occupation from 1987 to 1991. While tactics of non-violence were broadly employed in the recent uprisings in Egypt and elsewhere, this is by no means the first time that Arab uprisings have been largely non-violent, or at least unarmed.
It has also been said that what distinguishes these revolutions from earlier ones in the Arab world and elsewhere in the Middle East is that they are focused on democracy and constitutional change. It is true that these have been among their most central demands. But this is not entirely unprecedented. There was sustained constitutional effervescence in Tunisia and Egypt from the late 1870s until the British and French occupations of those countries in 1881 and 1882. Similar debates led to the establishment of a constitution in the Ottoman Empire in 1876 that lasted with interruptions until 1918. All the successor states to the Ottoman Empire were deeply influenced by this chequered constitutional experiment. In 1906, Iran established a constitutional regime, albeit one that was repeatedly eclipsed. In the inter-war period and afterwards, the semi-independent and independent countries in the Middle East were mainly governed by constitutional regimes.
In 1993 Francis Crick and Edward Jones published an essay in Nature titled “Backwardness of human neuroanatomy.” They lamented our poor knowledge about the anatomy and connectivity of the human brain compared to that of the macaque monkey brain, especially for the visual system. “Clearly,” they wrote, “what is needed for a modern brain anatomy is the introduction of some radically new techniques.” Networks of the Brain, by Olaf Sporns, heralds a new era in neuroanatomy based on major advances in brain imaging and brain reconstruction that have been made since Crick and Jones’s commentary nearly 20 years ago. Sporns’s goal is to connect neuroscience with network science, the study of complex networks.
In the book’s early chapters, Sporns covers general principles of network science and offers background on the structure and dynamics of brain networks based on his research as well as that of many others, including some from my own laboratory. This prepares readers for the heart of the book, chapter 5, “Mapping Cells, Circuits, and Systems,” in which the author introduces modern imaging techniques.
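Network science of the kind Sporns draws on treats the brain as a graph of regions and connections and asks about structural properties such as clustering. As a purely illustrative sketch (the toy "regions" and edges here are invented for the example, not taken from the book), the clustering coefficient — the fraction of a node's neighbor pairs that are themselves connected — can be computed directly:

```python
# Toy undirected "brain network": nodes are regions, edges are connections.
edges = {("V1", "V2"), ("V1", "MT"), ("V2", "MT"), ("MT", "LIP"), ("LIP", "FEF")}

def neighbors(node):
    """Set of nodes sharing an edge with `node`."""
    return {b if a == node else a for a, b in edges if node in (a, b)}

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = list(neighbors(node))
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if (nbrs[i], nbrs[j]) in edges or (nbrs[j], nbrs[i]) in edges)
    return links / (k * (k - 1) / 2)

print(clustering("V1"))  # V1's neighbors V2 and MT are connected -> 1.0
```

High clustering combined with short average path length is the "small-world" signature that network analyses of brain connectivity commonly report.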
Born in 1933, Myrlie Evers-Williams was the wife of murdered civil rights activist Medgar Evers. While fighting to bring his killer to justice, Evers-Williams also continued her husband's work with her book, For Us, The Living. She also wrote Watch Me Fly: What I Learned on the Way to Becoming the Woman I Was Meant to Be. Evers-Williams served as chair of the NAACP from 1995 to 1998.
Surprised by the moral outrage expressed by some over the depiction of blacks in The Help, civil rights journalist and activist Myrlie Evers-Williams pens a moving letter at the Hollywood Reporter in defense of the award-winning film.
My mother was “the help.” And so was her mother. I’m telling you these things because they were courageous and they were not alone in their courage. Legions of black women like them — maids and waitresses and caretakers who fanned out across Vicksburg and Mississippi and the South to work in the homes and restaurants and hotels owned, operated and occupied by whites — practiced small measures of courage every day in the face of constant violent threat and institutionalized racism imposed by the very people whose children they were charged with feeding, rearing and caring for. Theirs is an American story that is rarely told on any grand, meaningful scale — not one, at least, that defies stereotype and caricature. But recently, “The Help,” a film based on Kathryn Stockett’s bestselling book of the same name, became a cultural touchstone when two of its lead characters, both African-American maids in then-staunchly segregated Mississippi, challenged viewers to walk their journey — to see, as lead protagonist Aibileen Clark said, “what it felt like to be me.”
To me, The Help is this year’s most outstanding and socially relevant motion picture; Viola Davis’ quiet but powerful portrayal of Aibileen made us all take notice of a historically invisible class of women, and Aibileen’s story, along with those of the other maids who rallied with her to tell it, reminds us that when we speak, if only in a whisper, momentous things can happen.
More here. (Note: In honor of African American History Month, we will be linking to at least one related post throughout February. The 2012 theme is Black Women in American Culture and History).
Just what does it mean to get a green card? To some applicants, about $1,000 each month.
A recent study by a University of Nevada, Reno economist and a graduate student found that employer-sponsored workers in the United States on temporary visas who acquire their green cards and become permanent residents increase their annual incomes by about $11,860. They studied data from The New Immigrant Survey, a collaborative study of new legal immigrants funded in 2003 by the U.S. Immigration and Naturalization Service and other public and private partners. The study, “The Value of an Employment-Based Green Card,” by associate professor Sankar Mukhopadhyay and former graduate student David Oxborrow in the College of Business, was published this month in the journal Demography.

According to the U.S. Department of Homeland Security, from 1999 to 2008 about 1 million green cards were approved each year. The majority of these, 74 percent, went to applicants sponsored by family or with immediate relatives who are U.S. citizens. However, about 15 percent of those approved for green cards were classified as “employment-based applicants.” These workers are mostly highly educated and highly skilled with college degrees, here on work visas for up to six years. The average wait to obtain a green card, however, is six to 10 years. Of the workers here on this particular type of visa, about 56 percent end up being successful in obtaining their green cards.
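The "$1,000 each month" headline figure follows directly from the study's reported annual gain. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Reported annual income increase from obtaining an employment-based
# green card, per the study summary above (USD).
annual_gain = 11_860

monthly_gain = annual_gain / 12
print(round(monthly_gain))  # 988 — i.e., roughly $1,000 a month
```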
From the jaded, not to say cynical, observer of international politics, the passing of the American Century elicits a more ambivalent response. I’d like to believe that the United States will accept the outcome gracefully. Rather than attempting to resurrect Luce’s expansive vision, I’d prefer to see American policy makers attend to the looming challenges of multipolarity. Averting the serial catastrophes that befell the planet starting just about 100 years ago, when the previous multipolar order began to implode, should keep them busy enough. But I suspect that’s not going to happen. The would-be masters of the universe orbiting around the likes of Romney and Obama won’t be content to play such a modest role. With the likes of Robert Kagan as their guide—”It’s a wonderful world order,” he writes in his new book, The World America Made (Knopf)—they will continue to peddle the fiction that with the right cast of characters running Washington, history will once again march to America’s drumbeat. Evidence to support such expectations is exceedingly scarce—taken a look at Iraq lately?—but no matter. Insiders and would-be insiders will insist that, right in their hip pocket, they’ve got the necessary strategy.
more from Andrew J. Bacevich at The Chronicle of Higher Education here.