Thursday, May 28, 2015
Daniel Nexon, over at Duck of Minerva:
We now have a lot of different meta-narratives about alleged fraud in “When Contact Changes Minds: An Experiment in the Transmission of Support for Gay Equality.” These reflect not only different dimensions of the story, but the different interests at stake.
One set concerns confirmation bias and the left-leaning orientations of a majority of political scientists. At First Things, for example, Matthew J. Franck contrasts the reception of the LaCour and Green study (positive) with that of Mark Regnerus’ finding of inferior outcomes for children of gay parents (negative). There’s some truth here. Regnerus’ study was terminally flawed. LaCour and Green’s study derived, most likely, from fraudulent data. Still, one comported with widespread ideological priors in the field, while the other did not. That surely shaped their differential reception. But so did the startling strength of the latter’s findings, as well as the way they cut against conventional wisdom on the determinants of successful persuasion.
We might describe another as “science worked.”
This narrative sometimes strays into the triumphalist: rather than exposing problems with the way political science operates, the scandal shows how the discipline is becoming more scientific and thus more able to catch—and correct—flawed studies. Again, there’s something to this. To the extent that political scientists utilize, say, experiments, then that opens up the possibility of creating fraudulent experimental data but also of uncovering such fraud.
Peter Watts in Aeon (Illustration by Richard Wilkinson):
Rajesh Rao (of the University of Washington's Center for Sensorimotor Neural Engineering) reported what appears to be a real Alien Hand Network – and going Pais-Vieira one better, he built it out of people. Someone thinks a command; downstream, someone else responds by pushing a button without conscious intent. Now we're getting somewhere.
There’s a machine in a lab in Berkeley, California, that can read the voxels right off your visual cortex and figure out what you’re looking at based solely on brain activity. One of its creators, Kendrick Kay, suggested back in 2008 that we’d eventually be able to read dreams (also, that we might want to take a closer look at certain privacy issues before that happened). His best guess was that this might happen a few decades down the road – but it took only four years for a computer in a Japanese lab to predict the content of hypnagogic hallucinations (essentially, dreams without REM) at 60 per cent accuracy, based entirely on fMRI data.
When Moore’s Law shaves that much time off the predictions of experts, it’s not too early to start wondering about consequences. What are the implications of a technology that seems to be converging on the sharing of consciousness?
It would be a lot easier to answer that question if anyone knew what consciousness is. There’s no shortage of theories. The neuroscientist Giulio Tononi at the University of Wisconsin-Madison claims that consciousness reflects the integration of distributed brain functions. A model developed by Ezequiel Morsella, of San Francisco State University, describes it as a mediator between conflicting motor commands. The panpsychics regard it as a basic property of matter – like charge, or mass – and believe that our brains don’t generate the stuff so much as filter it from the ether like some kind of organic spirit-catchers. Neuroscience superstar V S Ramachandran (University of California in San Diego) blames everything on mirror neurons; Princeton’s Michael Graziano – right here in Aeon – describes it as an experiential map.
I think they’re all running a game on us. Their models – right or wrong – describe computation, not awareness. There’s no great mystery to intelligence; it’s easy to see how natural selection would promote flexible problem-solving, the triage of sensory input, the high-grading of relevant data (aka attention).
But why would any of that be self-aware?
Shail Mayaram in The Hindu:
On my last day in Tunis I was finally able to perform my ziyarat (pilgrimage) to the mausoleum of the great Sufi, Abu al-Hasan ash-Shadhili, popularly known as Imam ash-Shadhili or Sidi Belhassan. It was an overwhelming experience. An all-woman zikr was in progress when I entered the hall. The men had been relegated to an outer room and the inner hall reverberated with women’s voices singing a song about the saint. Zikr (dhikr) has many meanings ranging from prayer to recitation to repetition of an expression of praise. Here it culminated in a trance-generating incantation of ‘Allahu Akbar’ with the repetitive ‘Akbar, Akbar, Akbar’ becoming like an ‘Om’ or a Buddhist chant. I had come to Tunis to participate in a panel at the World Social Forum 2015, part of an initiative of the South Asian Dialogues on Ecological Democracy, to engage in a larger global debate on Islam and democracy. My presentation focussed on the philosophical contribution of Sufi brotherhoods such as Chishtis, Qadiris and Madaris as also of independent qalandars in the Indian subcontinent. The Chishtis and Qadiris are close cousins of the Shadhili (Shazili) brotherhood, which was important in Egypt, Tunisia, Algeria and Morocco. Sidi Belhassen was a Shadhili Sufi who came from Morocco and established his first zawiya in Tunis in 1227. In India, the Chishti order had already been established by Muinuddin Chishti from Chisht, Afghanistan. Some Shadhili and Chishti Sufis are authors of philosophical treatises.
The big question, of course, is why the antipathy between Sufis and Salafis is becoming a civil war in Islam. Indeed, the war itself is fairly one-sided, as the other side is the victim of the attack and has no strategy for a concerted counter-attack. But without romanticising either Sufism — any ‘ism’ is problematic — or the “good Muslim”, we only have to peruse early Sufi medieval texts to see how Sufi philosophies provide major sources of resistance to Salafist and other exclusionary ideologies. They go back to a period when religion and philosophy were not yet divorced. These philosophies also suggest Islam’s civilisational dialogue with Greek and Hindu-Buddhist philosophies.
A few years ago, in Pakistan, I had visited the mausoleum of the great Sufi Abul Hassan Ali Hajvari, popularly called Daata Sahib (990-1077), now behind barbed wire after its bombing in 2010. Abul Hassan Ali Hajvari is the author of Kashf Al Mahjub or The Revelation of the Veiled, a text in Persian that the philosopher Ghazala Irfan teaches at the Lahore University of Management Sciences. I had also made another pilgrimage to Pakpattan where the mausoleum of Baba Farid, one of the great Chishti Sufis, had been similarly attacked.
Wednesday, May 27, 2015
Kuniyoshi’s innovative genius came in his blending of Japanese idioms with American folk art influences as well as with European modernism. "His work is a distinctive expression of many strands of early twentieth-century American art flavored with his sly humor, idiosyncratic imagination, personal experience, and subtle references to his Japanese heritage," writes Moser in an essay.
It was during early visits to an artists’ colony in Ogunquit, Maine, sponsored by his friend and patron Hamilton Easter Field, that Kuniyoshi was drawn to the flattened spaces, squat figures, and diminishing of single-point perspective that marked his work, says Wolf, a professor of art at Bard College.
A visit to Europe in 1925 gave a more provocative tone to Kuniyoshi's work, as well as an interest in circuses. His 1925 Circus Girl Resting gained wide renown when it was chosen as part of a 1947 U.S. State Department funded exhibition “Advancing American Art,” a sort of traveling show of cultural diplomacy that also featured work by Hopper, O’Keeffe, Stuart Davis and Marsden Hartley.
There may be a sense in which the Greek crisis is indeed our era’s Bolshevik Revolution or Spanish Civil War, namely that it has become the destination of choice for what we might call “political travel.” Political travel involves immersing yourself in the domestic concerns of another country on the basis of their putative significance for the world at large. This can involve the desire to be there when it all happens, but it doesn’t have to—what is crucial is the desire to throw your heart and soul into mastering the internal complexities of a far-off land, in hopes of being there intellectually when it all happens. Political travel is easy to mock, but at root it reflects a perfectly respectable desire to understand your world and to change it. The problem is that like any travel it runs the risk of turning into tourism: the consumption of an “other” neatly packaged to fit into our existing mental landscape without disturbing or unsettling it.
There was a lot of (mostly leftist) political tourism over the last century, from extreme cases like Foucault on the Iranian Revolution to more forgivable ones like Chomsky on Chávez or Zizek on the Arab Spring. But the archetypal political tourist was probably Lord Byron, who joined the Greek struggle for independence in 1823.
Kevin Hartnett in Quanta (image Hannes Hummel for Quanta Magazine):
Voevodsky, 48, is a permanent faculty member at the Institute for Advanced Study (IAS) in Princeton, N.J. He was born in Moscow but speaks nearly flawless English, and he has the confident bearing of someone who has no need to prove himself to anyone. In 2002 he won the Fields Medal, which is often considered the most prestigious award in mathematics.
Now, as their train approached the city, Voevodsky pulled out his laptop and opened a program called Coq, a proof assistant that provides mathematicians with an environment in which to write mathematical arguments. Awodey, a mathematician and logician at Carnegie Mellon University in Pittsburgh, Pa., followed along as Voevodsky wrote a definition of a mathematical object using a new formalism he had created, called univalent foundations. It took Voevodsky 15 minutes to write the definition.
“I was trying to convince [Awodey] to do [his mathematics in Coq],” Voevodsky explained during a lecture this past fall. “I was trying to convince him that it’s easy to do.”
The idea of doing mathematics in a program like Coq has a long history. The appeal is simple: Rather than relying on fallible human beings to check proofs, you can turn the job over to computers, which can tell whether a proof is correct with complete certainty. Despite this advantage, computer proof assistants haven’t been widely adopted in mainstream mathematics. This is partly because translating everyday math into terms a computer can understand is cumbersome and, in the eyes of many mathematicians, not worth the effort.
For nearly a decade, Voevodsky has been advocating the virtues of computer proof assistants and developing univalent foundations in order to bring the languages of mathematics and computer programming closer together. As he sees it, the move to computer formalization is necessary because some branches of mathematics have become too abstract to be reliably checked by people.
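Voevodsky works in Coq, but the flavor of what any proof assistant checks can be shown with a toy example, here written in Lean, a comparable system. The theorems below are purely illustrative and have nothing to do with univalent foundations.

```lean
-- Toy machine-checked proofs: the proof assistant's kernel verifies
-- every step, so correctness does not rest on a human referee.

-- `rfl` succeeds because both sides compute to the same value.
theorem two_plus_three : 2 + 3 = 5 := rfl

-- A statement that genuinely needs induction: 0 + n = n.
-- (Nat.add recurses on its second argument, so this is not definitional.)
theorem my_zero_add (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

If any step failed to check, the system would reject the proof outright; that mechanical certainty is the appeal Voevodsky describes.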
Justin E. H. Smith over at his website:
In the online activity many young people in North America mistake for political engagement, 'white' has become a peculiar sort of insult: a flippant meme masquerading as a serious analytic category. We witness today a constant jockeying for prestige, almost entirely among white men, in which each one strives to publicly display that he is the first and only to have overcome the various pathologies, real and imagined, of white-man-hood. As the sharp critic Fredrik DeBoer has observed, this impoverishment of political debate now leaves us with the obscene and absurd phenomenon of the 'White Off':
A White Off is a peculiar 21st-century phenomenon where white progressives try to prove that the other white progressives they’re arguing with are The Real Whites. It’s a contest in shamelessness: who can be more brazen in reducing race to a pure argumentative cudgel? Who feels less guilt about using the fight against racism as a way to elevate oneself in a social hierarchy? Which white person will be the first to pull out “white” as a pejorative in a way that demonstrates the toothlessness of the concept? Within progressivism today, there is an absolute lack of shame or self-criticism about reducing racial discourse to a matter of straightforward personal branding and social signaling. It turns my stomach.
As for me, I live in Europe, I am not terribly invested in social-media battles of the sort DeBoer seems to enjoy, and so I have only a passing familiarity with the phenomena at issue. How then do I spend my time? Well, when not wondering what the hell is wrong with my fellow Americans, I often find myself thinking about Russia: What is it? What were the historical forces that made it possible for Muscovy to rise to become the principal counterhegemonic force throughout the Pax Americana of the 20th century, and to reappear, some years into the 21st, as a significant player on the world scene?
And in this connection, I have begun to wonder whether this 'white' thing is not perhaps a symptom of a distinctly 'Atlanticist' world view, and whether it might not have somewhat less purchase when one instead looks at the world from a 'Eurasianist' perspective. These are of course the sinister Aleksandr Dugin's terms, and when I invoke them I do not mean to endorse them as true, but rather to make some progress toward understanding why the Russians in particular and the citizens of the former Soviet bloc in general constitute such a peculiar tertium quid in relation to the schemes for carving up of the basic human subkinds that are general currency among American bloggers: they don't see themselves in our Atlantic-centered racial categories, and that exclusion, that irrelevance of our grids, only makes them more estranged and hostile, less NATO-oid. The war in Europe that appears to be taking shape at present is going to be between groups of people Aaron Bady, say, would call 'white', but it's pretty clear that that designation doesn't mean much to at least one of the sides, and that there's a long, deep continental history that's being overlooked when Eurasians, and notably Russians, are thought of in these Atlanticizing terms.
Victoria Schlesinger in Aeon (Marbled Salamander. Photo by Michel Gunther/Biosphoto/Corbis):
Before there is a species, there’s a muddled period of innumerable changes as a group of individuals diverges, gene by gene, from their ancestors into a new species. The point at which those tiny changes add up to a separate species has been debated since the days of Aristotle. Further complicating matters, our basic litmus test for delineating species – viable offspring – is shaky at best. We know that when grizzly bears and polar bears mate, or coyote and wolf for that matter, the two species produce hybrid young – a combination individual that reflects some of the traits of each parent. It’s no wonder that roughly 26 concepts compete for the definition of species. Species are not so much a set of fixed traits but a temporary collection of them along a fluid continuum. The field guides belie variety within a species because it is so copious and ever-changing that you couldn’t get it on paper if you wanted to.
Scientists have long recognised the incredible diversity within a species. But they thought it reflected evolutionary changes that unfolded imperceptibly, over millions of years. That divergence between populations within a species was enforced, according to Ernst Mayr, the great evolutionary biologist of the 1940s, when a population was separated from the rest of the species by a mountain range or a desert, preventing breeding across the divide over geologic scales of time. Without the separation, gene flow was relentless. But as the separation persisted, the isolated population grew apart and speciation occurred.
In the mid-1960s, the biologist Paul Ehrlich – author of The Population Bomb (1968) – and his Stanford University colleague Peter Raven challenged Mayr’s ideas about speciation. They had studied checkerspot butterflies living in the Jasper Ridge Biological Preserve in California, and it soon became clear that they were not examining a single population. Through years of capturing, marking and then recapturing the butterflies, they were able to prove that within the population, spread over just 50 acres of suitable checkerspot habitat, there were three groups that rarely interacted despite their very close proximity.
Among other ideas, Ehrlich and Raven argued in a now classic paper from 1969 that gene flow was not as predictable and ubiquitous as Mayr and his cohort maintained, and thus evolutionary divergence between neighbouring groups in a population was probably common.
Nicolas Pelham in the NYRB:
Yet Sunni fears are not without basis. Ten days after Mosul’s capture, as ISIS approached Baghdad airport, Grand Ayatollah Ali al-Sistani, the Shiite spiritual leader based in Iraq’s Shia shrine city of Najaf, south of Baghdad, issued a call for jihad against ISIS and its Sunni allies. In their panic, cloistered and quietist Shia clerics who for a decade had struck pacifist poses turned into militant mullahs. The night I arrived in Najaf, a Qatari Shiite preacher, Nazar al-Qatari, had put on military fatigues to rally worshipers after evening prayers. All were obliged, he cried, to fight for Iran’s supreme leader Ayatollah Ali Khamenei, against “the slayers of Imams Hasan and Hussein”—i.e., great imams of Shia history—and join in what the clerics have dubbed the hashad shaabi, or popular mobilization.
To ward off the threat to Baghdad from the Sunni north, Shiite volunteers converged on its streets from the south. Baghdad’s public space feels overwhelmingly Shiite. Leaders of Shiite militias who had previously denounced al-Sistani’s vacillation now celebrated his de facto legalization of the militias’ advance. Abu Jaafar Darraji, a senior commander from the Badr Organization, the largest and most openly pro-Iranian of the militias, told me that not even Khamenei’s predecessor, Ayatollah Ruhollah Khomeini, had dared to declare such an open-ended jihad against a Sunni enemy. In the recruiting center he ran in Baghdad he had covered the walls with portraits of Ayatollah Khamenei and al-Sistani. The ones of al-Sistani had stencils of guns on them.
With a fresh supply of arms and training from Iran, Darraji claimed that his Badr militia could outgun the official Iraqi army and set up an alternative system of government. Pointing at Khamenei’s portrait, he said, “He’s the wali amr al-muslimeen, the legal ruler in all the Muslim lands.” Once the militia—the hashad—had accomplished its mission of vanquishing ISIS, it would, he said, be the Iraqi branch of Iran’s Basij, the zealous youth group of vigilantes Khomeini founded in 1979 to uphold his revolution and purge Iran of his enemies.
Iran’s presence, once a hidden force, has shed its camouflage. On billboards in the capital he struck with rockets during the war with Iraq of 1980–1988, Khomeini now can be seen holding a map of Iraq in his hand.
Sean Carroll at Edge:
There's an old creationist myth that says there’s a problem with the fact that we live in a universe governed by the second law of thermodynamics: Disorder, or entropy, grows with time from early times to later times. If that were true, how in the world could it be the case that here on Earth something complicated and organized like human beings came to be? There's a simple response to this, which is that the second law of thermodynamics says that things grow disorderly in closed systems, and the earth is not a closed system. We get energy in a low entropy form from the sun. We radiate it out in a high entropy form to the universe. But okay, there's still a question: even if it's allowed for a structure to form here on Earth, why did it? Why does that happen? Is that something natural? Is that something that needs to be guided or does it just happen?
In some sense this is a physics problem. I've become increasingly interested in how the underlying laws of physics, which are very simple and mindless and just push particles around according to equations, take us from the very simple early universe near the Big Bang to the expanding, desolate, cold and empty space of our future 10^100 years from now, passing through the current stage of the history of the universe where things are rich and intricate and complex.
We know there's a law of nature, the second law of thermodynamics, that says that disorderliness grows with time. Is there another law of nature that governs how complexity evolves? One that talks about multiple layers of the structures and how they interact with each other? Embarrassingly enough, we don't even know how to define this problem yet. We don't know the right quantitative description for complexity. This is very early days. This is Copernicus, not even Kepler, much less Galileo or Newton. This is guessing at the ways to think about these problems.
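Carroll's contrast (entropy obeys a law; complexity lacks even an agreed definition) can be made concrete with a toy model. The sketch below is a minimal Ehrenfest-style urn simulation of my own construction, not anything from the talk: all particles start on one side of a box, and the coarse-grained entropy climbs from near zero toward its maximum.

```python
import math
import random

def coarse_entropy(left_count, n):
    """Shannon entropy (in bits) of the left/right coarse-graining."""
    p = left_count / n
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

random.seed(0)
n = 1000
left = n  # all particles start on the left: a very low-entropy state
history = []
for step in range(5000):
    # pick a particle uniformly at random; it hops to the other side
    if random.randrange(n) < left:
        left -= 1
    else:
        left += 1
    history.append(coarse_entropy(left, n))

print(round(history[0], 3), round(history[-1], 3))
```

Nothing in the model picks out "complexity": entropy is easy to define and compute here, which is exactly the asymmetry Carroll points to.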
Morgan Meis in The Smart Set:
On a very clear day, blue sky, bright, bright sunlight, you’ll spy an amazing cloud. It is structured like a column. It is dense and white and billows upward, touching the outer limits of the firmament, seemingly. Probably it goes up only a few hundred feet. But the verticality of the cloud is what makes it so inspiring. Just going right up there. Up into the heavens over semi-rural Pennsylvania.
How did this cloud get here, in such an otherwise empty, blue sky? It is a miracle.
They started building the Limerick nuclear power plant in 1974 and it was officially commissioned in 1986. Officially, the plant is called The Limerick Generating Station. Limerick. The name comes from the town. The town is not really a town. Or, at least, I’ve never seen the center of it. There is no locality to the town, just signs as you drive around saying that you’re in Limerick or no longer in Limerick. There is a small, regional airport in Limerick and an outlet shopping center.
Truth be told, the town of Limerick is basically the nuclear power plant now. That is the center. You can see the steam cloud rising over the cooling towers from miles away. It is a useful point of orientation when driving around the windy roads that double back on themselves half of the time.
John Updike always wrote beautifully about this part of the world. The middle class houses. The certain kind of red clay. The specific attitude of a person who grew up around here, in the vicinity of Reading.
Heather Schwedel in Salon:
These words recently greeted me from a chalkboard sign at a bar a few blocks away from my apartment. The sheer cheekiness nearly knocked me over. If I’d been about to enter that bar, I might have turned on my heels and walked away. The commercialism mixed with annoying solicitousness mixed with elbow-in-ribcage jokiness—it all felt so familiar. When did bar and café chalkboards start reading like some kind of cross between a pick-up line, “neg,” and Internet meme?
Long after the printing press rendered town criers obsolete, that other ancient form of information dissemination, the sidewalk sandwich board, quietly persists. Sometimes these chalkboards—you can find them standing outside certain not-corporate-and-proud-of-it businesses like bars, coffee shops, and boutiques—list the day’s specials or when happy hour is. But perhaps you too have lately noticed a certain creep away from the practical toward a softer sell: jokes, puns, quotations, drawings, and other creative expressions of branding. Too often, the results are cringeworthy...
Wondering if I was the only crank who found these signs aggressively unnecessary, I took to the Internet in search of sympathizers. I found plenty. “I think what irks me in general about these signs is just the overfamiliarity,” emailed Chiara Atik, a playwright and writer who has tweeted her ire for these signs. “Like I just want a coffee, not some timely allusion to last night’s Game of Thrones.” The strategy of attracting attention through clever signage may even be backfiring, resulting not in additional business but eye rolls. (From me anyway. I acknowledge the possibility that some people read these signs, laugh heartily, and happily hand over their dollars.)
Read the rest here.
Brian Gallagher in Nautilus:
In the 1960s, the English psychologist Peter Wason devised an experiment that would revolutionize his field. This clever puzzle, known as the “Wason selection task,” is often claimed to be “the single most investigated experimental paradigm in the psychology of reasoning,” in the words of one textbook author. Wason was a funny and clever man and an idiosyncratic thinker. His great insight was to treat reasoning as an enigma, something to scrutinize both critically and playfully. He told his colleagues, for instance, that he would familiarize himself with their work only after doing his own experiments, so as not to bias his own mind. He also said that before running experiments, researchers—quixotically—should never really know exactly why they were doing them. “The purpose of his experiments was not usually to test a hypothesis or theory, but rather to explore the nature of thinking,” a pair of his students wrote in Wason’s obituary. (He died in 2003.) “His aim was to reveal a surprising phenomenon—to show that thinking was not what psychologists including himself had taken it to be.”
The groundbreaking nature of Wason’s selection task may have been a result of his unconventional style. In one version of the task, one subject (always one—he spurned testing subjects in groups) is presented with four cards lying flat on a table, each with a single-digit number on one face and one of two colors on the other. Let’s imagine that you’re Wason’s subject. The first and second cards you see are a five and an eight; the third and fourth cards are blue and green, respectively. Wason liked to chat with his subjects, but he probably didn’t tell them that this logical puzzle was “deceptively easy,” which was how he described it in the paper he would later write, in 1968. Wason tells you that if a card shows an even number on one face, then its opposite face is blue. Which cards must you turn over in order to test the truth of his proposition, without turning over any unnecessary cards?
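The logic of the task is easy to state in code, even though most subjects get it wrong. A short sketch (my construction, not Wason's materials): a card is worth turning over only if its hidden face could falsify the rule "if even, then blue."

```python
# Rule under test: "if a card shows an even number, its other face is blue."
# A card must be turned over only if what's hidden could be a counterexample.

def must_turn(visible):
    """Return True if the visible face could conceal a violation of the rule."""
    if isinstance(visible, int):
        # An even number might hide a non-blue back, so it must be checked.
        # An odd number is irrelevant: the rule says nothing about odd cards.
        return visible % 2 == 0
    # A colour card: only a non-blue face could hide a rule-breaking even number.
    # A blue back is consistent with anything, so blue never needs checking.
    return visible != "blue"

cards = [5, 8, "blue", "green"]
print([c for c in cards if must_turn(c)])  # -> [8, 'green']
```

Most people correctly pick the 8 but choose blue instead of green; yet only a non-blue card can conceal a falsifying even number.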
Stem-cell scientists at McMaster University have developed a way to directly convert adult human blood cells to sensory neurons, providing the first objective measure of how patients may feel things like pain, temperature, and pressure, the researchers reveal in an open-access paper in the journal Cell Reports. Currently, scientists and physicians have a limited understanding of the complex issue of pain and how to treat it.

“The problem is that unlike blood, a skin sample or even a tissue biopsy, you can’t take a piece of a patient’s neural system,” said Mick Bhatia, director of the McMaster Stem Cell and Cancer Research Institute and research team leader. “It runs like complex wiring throughout the body and portions cannot be sampled for study.

“Now we can take easy to obtain blood samples, and make the main cell types of neurological systems in a dish that is specialized for each patient,” said Bhatia. “We can actually take a patient’s blood sample, as routinely performed in a doctor’s office, and with it we can produce one million sensory neurons, [which] make up the peripheral nerves. We can also make central nervous system cells.”
Testing pain drugs
The new technology has “broad and immediate applications,” said Bhatia: It allows researchers to understand disease and improve treatments by asking questions such as: Why is it that certain people feel pain versus numbness? Is this something genetic? Can the neuropathy that diabetic patients experience be mimicked in a dish? It also paves the way for the discovery of new pain drugs that don’t just numb the perception of pain. Bhatia said non-specific opioids used for decades are still being used today. “If I was a patient and I was feeling pain or experiencing neuropathy, the prized pain drug for me would target the peripheral nervous system neurons, but do nothing to the central nervous system, thus avoiding addictive drug side effects,” said Bhatia.
Tuesday, May 26, 2015
Vivek Menezes in National Geographic:
It was Kashmiri poetry that sparked the idea of a family summer holiday in Srinagar. I encountered Ranjit Hoskote’s I, Lalla—The Poems of Lal Ded in 2011, and was instantly hooked by the power packed in the four-line vakhs. Lal Ded, an unusual 14th-century female Kashmiri mystic and poet, inhabited a “Hindu-Buddhist universe of meaning,” as Hoskote puts it, while simultaneously drawing on Persian, Arabic, and Sufi philosophy. Similarly, deeply rooted syncretism is part of my Goan heritage, and Lal Ded’s poems touched a personal chord. Before long, I became obsessed with the idea of an extended visit to Kashmir to learn more about the cultural roots that yielded this intriguing poetry.
When my wife, three young sons, and I finally arrived in Srinagar the following summer, we discovered Lal Ded’s poems are truly the bedrock of Kashmir’s many-layered identity. Favourite vakhs were recited to us proudly by schoolchildren and kebab-sellers; by the gate-keeper who ushered us through the wood-and-brick shrine dedicated to Naqshband Sahib, a 17th century mystic who came to Kashmir from Bukhara; and also by the young man with wildly curly hair who piloted us through Dal Lake’s floating tomato plantations.
The heartfelt verses of Lal Ded are an important part of Kashmir’s living regional tradition, where Shaivism flows into Sufism through the unique “Muslim Rishis”. We found this richly confluent identity—Kashmiriyat—shining brightly on our very first night in Srinagar, when we attended a moonlit bhand pather performance as part of the Dara Shikoh festival hosted at Almond Villa, on the shores of Dal Lake. Directed by one of India’s best-known theatre directors, M.K. Raina, the folk troupe poked exuberant fun at the hypocrisies of religion.
The obsession with eating natural and artisanal is ahistorical; we should demand more high-quality industrial food
Rachel Laudan in Jacobin:
It is a mark of sophistication to bemoan the steel roller mill and supermarket bread while yearning for stone ground flour and brick ovens; to seek out heirloom apples and pumpkins while despising modern tomatoes and hybrid corn; to be hostile to agronomists who develop high-yielding modern crops and to home economists who invent new recipes for General Mills.
We hover between ridicule and shame when we remember how our mothers and grandmothers enthusiastically embraced canned and frozen foods. We nod in agreement when the waiter proclaims that the restaurant showcases the freshest local produce. We shun Wonder Bread and Coca-Cola. Above all, we loathe the great culminating symbol of Culinary Modernism, McDonald’s — modern, fast, homogenous, and international.
Like so many of my generation, my culinary style was created by those who scorned industrialized food; Culinary Luddites, we may call them, after the English hand workers of the nineteenth century who abhorred the machines that were destroying their traditional way of life. I learned to cook from the books of Elizabeth David, who urged us to sweep our store cupboards “clean for ever of the cluttering debris of commercial sauce bottles and all synthetic flavorings.”
It was the 18th-century scientist Carolus Linnaeus who laid the foundations for modern biological taxonomy. It was also Linnaeus who argued for the existence of Homo troglodytes, a primitive people said to inhabit the caves of an Indonesian archipelago. Although troglodytes has since been proven to be an invalid taxon, archaeological doctrine continued to describe our ancestors as cavemen. The idea fits with a particular narrative of human evolution, one that describes a steady march from the primitive to the complex: Humans descended from the trees, stumbled about the land, made homes in caves, and finally found glory in high-rises. In this narrative, progress includes living inside confined physical spaces. This thinking was especially prevalent in Western Europe, where caves yielded so much in the way of art and artifacts that archaeologists became convinced that a cave was also a home, in the modern sense of the word.
By the 1980s, archaeologists understood that this picture was incomplete: The cave was far from being the primary residence. But archaeologists continued focusing on excavating caves, both because it was habitual and because the techniques involved were well understood.
Then along came the American anthropological archaeologist Margaret Conkey. Today a professor emerita at the University of California, Berkeley, she asked a simple question: What did cave people do all day? What if she looked at the archaeological record from the perspective of a mobile culture, like the Inuit? She decided to look outside of caves.
Each of the three monotheistic religions, commonly referred to as ‘Abrahamic’, has its own affirmation of faith, a single statement held to be fundamental by its adherents.
In Judaism, such a proclamation is Shema (Listen), drawn from Deuteronomy 6:4. It reads: “Listen, O Israel: The Lord is our God, the Lord is One!” Observant Jews must recite Shema daily—for instance, before falling asleep—and it is supposed to be the last thing they utter before dying. Even in the most private nocturnal moments and on the deathbed, Shema announces monotheistic creed, in the imperative, to the religious community, united around “our God” who is “One.”
Christianity, too, has its dogma going back to the Apostles’ Creed, dating to the year 150. Still read during the baptismal ritual, the statement of faith begins with the Latin word Credo, “I believe,” and continues “…in the all-powerful God the Father, Creator of heavens and earth, and in Jesus Christ, His only Son, our Lord, conceived by the Holy Spirit, born of the Virgin Mary…” Credo individualizes the believer; not only does it start with a verb in the first person singular, but it also crafts her or his identity through this very affirmation. While the Judaic Shema forges a community through a direct appeal to others, the Christian profession of faith self-referentially produces the individual subject of that faith.
The declaration of Islamic creed is called Shahada, “Testimony.” In contrast to its other monotheistic counterparts, however, it commences with a negation.
WHY DO THESE PEOPLE need so much water? The answer, in large part, is corn. In the 19th century, cattle raised on the plains were shipped off to Chicago for slaughter, but over time meatpacking moved progressively closer to the cow. The stockyards grew so huge that their size became inefficient. Improvements in the railroads and, later, the advent of the semitruck made it cheap to transport meat without a central site of production. Decentralization also enabled management to escape Chicago’s strong labor movement. The industry is now dispersed across dozens of small plains cities: Dodge City and Garden City on the Arkansas in Kansas, and Liberal, which isn’t far, as well as Greeley, Colorado, and Grand Island, Nebraska, along the Platte. Each city and its small hinterland is a vertically integrated unit for producing beef, and corn is the cheapest means to fatten cattle before they are sent to the slaughterhouse. Consequently, many plains farmers now grow corn instead of dryland crops like wheat. But corn is water hungry and must have twenty inches of rainfall a year to survive and at least forty to thrive. Only one of the corn-growing counties along the upper Arkansas receives twenty inches of rain a year, and some places are so dry that they are, both technically and in outward appearance, deserts. Although corn is manifestly unsuited to the climate, it is grown in enormous volumes, and irrigation is what allows this to continue.
Mark Kukis in Aeon:
Since the early 1980s, conflicts have generally become more fragmented, meaning they involve more than two warring parties. The spread of internal conflicts has led outside nations to become more involved, which tends to prolong hostilities. In the 1990s, few internal conflicts drew outside powers. By 2010, almost 27 per cent of internal wars entangled outside nations. The causes of these fragmented internal conflicts are complex, varying from region to region. In parts of Africa, especially parts of West Africa in the 1990s, diamonds and other easily looted resources have helped drive conflict. In other parts of Africa, such as the eastern edge of the DRC, disease and environmental degradation have shaped regional fighting. An unrelenting appetite for narcotics in the US has stoked violence in many Latin American countries. Globally, a booming arms trade has helped give rise to Kalashnikov politics, ie politics practised with either an overt or implied threat of armed violence by competing factions. For the world’s aggrieved and malcontent, making war is easier than ever, making politics more violent and dangerous. So when the US goes to war today, it typically becomes a party to internal conflict instead of a combatant against another country.
Military triumphs against other nations – for example Iraq in 2003 – offer only fleeting victories and serve as preludes to the actual war. In these internal, fragmented conflicts, victory is elusive for any party involved... Statistically, the odds of the US coming up a winner in a modern war are perhaps as low as one in seven.
Superpowers and hegemons are also winning less frequently these days than they once did. From 1900 to 1949, strong militaries fighting conventionally weaker forces won victories about 65 per cent of the time. From 1950 to 1998, advantaged military powers claimed war victories only 45 per cent of the time. In the first part of the 19th century, superior powers won wars almost 90 per cent of the time. For hundreds of years, nations with the will and the means to raise strong militaries have wagered that the extraordinary investment of time, treasure and lives would yield rewards in war when the moment came. For hundreds of years, that was a safe bet – but not any more. For 21st-century superpowers, war is no longer likely to be a winning endeavour.
Read the rest here.
Matt Schiavenza in The Atlantic:
John Nash, a Nobel laureate and mathematical genius whose struggle with mental illness was documented in the Oscar-winning film A Beautiful Mind, was killed in a car accident on Saturday. He was 86. The accident, which occurred when the taxi Nash was traveling in collided with another car on the New Jersey Turnpike, also claimed the life of his 82-year-old wife, Alicia. Neither of the two drivers involved in the accident sustained life-threatening injuries. Born in West Virginia in 1928, Nash displayed an acuity for mathematics early in life, independently proving Fermat’s little theorem before graduating from high school. By the time he turned 30 in 1958, he was a bona fide academic celebrity. At Princeton, Nash published a 27-page thesis that upended the field of game theory and led to applications in economics, international politics, and evolutionary biology. His signature solution—known as a “Nash Equilibrium”—found that competition among two opponents is not necessarily governed by zero-sum logic. Two opponents can, for instance, each achieve their maximum objectives through cooperating with the other, or gain nothing at all by refusing to cooperate. This intuitive, deceptively simple understanding is now regarded as one of the most important social science ideas in the 20th century, and a testament to his almost singular intellectual gifts.

But in the late 1950s, Nash began a slide into mental illness—later diagnosed as schizophrenia—that would cost him his marriage, derail his career, and plague him with powerful delusions. Nash believed at various times that he was the biblical figure Job, a Japanese shogun, and a “messianic figure of great but secret importance.” He obsessed over numbers and believed the New York Times published coded messages from extraterrestrials that only he could read.
Mental institutions and electroshock therapy failed to cure him, and for much of the next three decades, Nash wandered freely on the Princeton campus, scribbling idly on empty blackboards and staring blankly ahead in the library.
Natalie Angier in The New York Times:
One of the biggest mistakes my husband made as a new father was to tell me he thought his diaper-changing technique was better than mine. From then on, guess who assumed the lion’s share of diaper patrol in our household? Or rather, the northern flicker’s share. According to a new report in the journal Animal Behaviour on the sanitation habits of these tawny, 12-inch woodpeckers with downcurving bills, male flickers are more industrious housekeepers than their mates. Researchers already knew that flickers, like many woodpeckers, are a so-called sex role reversed species, the fathers spending comparatively more time incubating the eggs and feeding the young than do the mothers. Now scientists have found that the males’ parental zeal also extends to the less sentimental realm of nest hygiene: When a chick makes waste, Dad, more readily than Mom, is the one who makes haste, plucking up the unwanted presentation and disposing of it far from home.
Researchers have identified honeybee undertakers that specialize in removing corpses from the hive, and they have located dedicated underground toilet chambers to which African mole rats reliably repair to perform their elaborate ablutions. Among chimpanzees, hygiene often serves as a major driver of cultural evolution, and primatologists have found that different populations of the ape are marked by distinctive grooming styles. The chimpanzees in the Tai Forest of Ivory Coast, for example, will extract a tick or other parasite from a companion’s fur with their fingers and then squash the offending pest against their own forearms. Chimpanzees in the Budongo Forest of Uganda prefer to daintily place the fruits of grooming on a leaf for inspection, to decide whether the dislodged bloodsuckers are safe to eat, or should simply be smashed and tossed. Budongo males, those fastidious charmers, will also use leaves as “napkins,” to wipe their penises clean after sex.
No Snow Fell On Eden
so no thaw or tao as you say
no snowmelt drooled down the brae;
no human footfall swelled into that of a yeti
baring what it shoulda kept hidden;
no yellow ice choked bogbean;
there were no sheepskulls
in the midden –
it was no allotment, eden –
they had a hothouse,
an orangery, a mumbling monkey;
there was no cabbage-patch
of rich, roseate heads;
there was no innuendo
no sea, no snow
There was nothing funny
about a steaming bing of new manure.
There was nothing funny at all.
Black was not so sooty. No fishboat revolved redly
on an eyepopping sea.
Eve never sat up late drinking and crying.
Adam knew no-one who was dying.
That was yet to come, In The Beginning.
from Poetry International Web, 2015
Monday, May 25, 2015
In Oslo on May 19 John Nash and Louis Nirenberg received the 2015 Abel Prize "for striking and seminal contributions to the theory of nonlinear partial differential equations and its applications to geometric analysis". The Abel Prize is barely a decade old but has quickly become one of the most prestigious awards in mathematics. To learn more about this year's winners, visit the Abel Prize webpage here. For an insight into the personalities of the two winners, I especially recommend these short videos.
This year's prize comes with sad news. On their way home from the award ceremony, John and Alicia Nash were killed in an auto accident. You can read the New York Times obituary here.
Last year at 3QD we talked about Yakov Sinai's work in dynamical systems. By coincidence this year's winners' work is closely related to the "exotic" non-Euclidean geometries we discussed at 3QD in March. It's a good chance to dig a little deeper into these topics and get the flavor of Nash and Nirenberg's work. Like last year I should say straight off that I'm not an expert, but I'm happy to talk about some cool mathematics.
John Nash, of course, is one of the most widely known mathematicians of the twentieth century. His life story was told by Sylvia Nasar in "A Beautiful Mind". The book was made into an award-winning film of the same name starring Russell Crowe. It tells of Nash's brilliant work as a young man and his subsequent difficulties with mental health issues. It's a dramatic story, and the film is well worth watching. It should go without saying, but the movie turns the drama knob up to eleven and shouldn't be taken as an accurate depiction of Nash's life. For a more nuanced version of events I recommend Nasar's book.
The movie closes with John Nash winning the Nobel prize in Economics for his work in game theory. In game theory we use mathematics to study potential strategies, outcomes, etc., when two or more players are in competition. If you think only of tic-tac-toe, chess, and other such games, at first it sounds like a mathematical trifle. But once you begin to look around you see players in competition everywhere: people and corporations in the marketplace, countries in geopolitics, species in evolutionary competition, etc. Game theory is serious business!
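To make Nash's idea concrete, here is a minimal sketch in Python (my own illustration, not from any of the pieces above) that checks a small two-player game for pure-strategy Nash equilibria. The payoff numbers are the standard textbook Prisoner's Dilemma; a strategy profile is a Nash equilibrium when neither player can do better by unilaterally switching.

```python
# Pure-strategy Nash equilibria in a 2x2 game: the Prisoner's Dilemma.
# Strategies: 0 = cooperate, 1 = defect. Payoffs are (row player, column player).
payoffs = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),  # row defects, column cooperates
    (1, 1): (1, 1),  # both defect
}

def is_nash(row, col):
    """True if neither player gains by unilaterally changing strategy."""
    r_payoff, c_payoff = payoffs[(row, col)]
    row_best = all(payoffs[(alt, col)][0] <= r_payoff for alt in (0, 1))
    col_best = all(payoffs[(row, alt)][1] <= c_payoff for alt in (0, 1))
    return row_best and col_best

equilibria = [profile for profile in payoffs if is_nash(*profile)]
print(equilibria)  # [(1, 1)] — mutual defection is the unique pure equilibrium
```

Note the non-zero-sum moral: the equilibrium (1, 1) gives each player 1, even though mutual cooperation would give each player 3 — exactly the kind of outcome that zero-sum thinking can't capture.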