Friday, May 29, 2015
Zack Beauchamp in Vox:
A key part of Pinker's work is the notion of the "long peace" — an idea that Pinker actually borrows from a historian, John Lewis Gaddis. It refers to the fact that in the past 70 years, wars between great powers have basically gone away. Because situations like the Cold War never escalated to direct conflict, we've managed to avoid the type of warfare that devastated societies in the early 20th century and, indeed, much of human history.
If the causes of that are, as Pinker suggested in a lecture, "the pacifying forces" of "democracy, trade, and international society," then we should expect this trend to continue. So long as we continue to maintain the trends of the world we live in, including growing international trade, strengthening of international institutions like the UN, and strong diplomatic ties between democratic states, then we might actually be able to keep making the world a better place.
Enter NYU professor Nassim Nicholas Taleb, who is best known as the author of The Black Swan, a book on rare events. He thinks all of this is starry-eyed nonsense. In his opinion, proponents of the "war is declining" argument are over-interpreting evidence of a good trend in the same way people used to argue that the stock market could go up forever without crashes. He wrote a stinging critique of Pinker's work, which Pinker replied to, and then Taleb replied to again.
Taleb's new paper, co-authored with Delft University's Pasquale Cirillo, is the latest volley in that ongoing intellectual war. It's probably the most statistically sophisticated argument to date that war isn't declining — and that we're still every bit as much at risk of a major conflict as we ever were.
Redditt Hudson in Vox:
On any given day, in any police department in the nation, 15 percent of officers will do the right thing no matter what is happening. Fifteen percent of officers will abuse their authority at every opportunity. The remaining 70 percent could go either way depending on whom they are working with.
That's a theory from my friend K.L. Williams, who has trained thousands of officers around the country in use of force. Based on what I experienced as a black man serving in the St. Louis Police Department for five years, I agree with him. I worked with men and women who became cops for all the right reasons — they really wanted to help make their communities better. And I worked with people like the president of my police academy class, who sent out an email after President Obama won the 2008 election that included the statement, "I can't believe I live in a country full of ni**er lovers!!!!!!!!" He patrolled the streets in St. Louis in a number of black communities with the authority to act under the color of law.
That remaining 70 percent of officers are highly susceptible to the culture in a given department. In the absence of any real effort to challenge department cultures, they become part of the problem. If their command ranks are racist or allow institutional racism to persist, or if a number of officers in their department are racist, they may end up doing terrible things.
It is not only white officers who abuse their authority. The effect of institutional racism is such that no matter what color the officer abusing the citizen is, in the vast majority of those cases of abuse that citizen will be black or brown. That is what is allowed.
And no matter what an officer has done to a black person, that officer can always cover himself in the running narrative of heroism, risk, and sacrifice that is available to a uniformed police officer by virtue of simply reporting for duty. Cleveland police officer Michael Brelo was recently acquitted of all charges against him in the shooting deaths of Timothy Russell and Malissa Williams, both black and unarmed. Thirteen Cleveland police officers fired 137 shots at them. Brelo, having reloaded at some point during the shooting, fired 49 of the 137 shots. He took his final 15 shots at them after all the other officers stopped firing (122 shots at that point) and, "fearing for his life," he jumped onto the hood of the car and shot 15 times through the windshield.
Laura Kipnis on the backlash to her earlier piece, in The Chronicle of Higher Education (Chronicle Review illustration by Scott Seymour):
When I first heard that students at my university had staged a protest over an essay I’d written in The Chronicle Review about sexual politics on campus — and that they were carrying mattresses and pillows — I was a bit nonplussed. For one thing, mattresses had become a symbol of student-on-student sexual-assault allegations, and I’d been writing about the new consensual-relations codes governing professor-student dating. Also, I’d been writing as a feminist. And I hadn’t sexually assaulted anyone. The whole thing seemed symbolically incoherent.
According to our campus newspaper, the mattress-carriers were marching to the university president’s office with a petition demanding "a swift, official condemnation" of my article. One student said she’d had a "very visceral reaction" to the essay; another called it "terrifying." I’d argued that the new codes infantilized students while vastly increasing the power of university administrators over all our lives, and here were students demanding to be protected by university higher-ups from the affront of someone’s ideas, which seemed to prove my point.
The president announced that he’d consider the petition.
Still, I assumed that academic freedom would prevail. I also sensed the students weren’t going to come off well in the court of public opinion, which proved to be the case; mocking tweets were soon pouring in. Marching against a published article wasn’t a good optic — it smacked of book burning, something Americans generally oppose. Indeed, I was getting a lot of love on social media from all ends of the political spectrum, though one of the anti-PC brigade did suggest that, as a leftist, I should realize these students were my own evil spawn. (Yes, I was spending a lot more time online than I should have.)
Being protested had its gratifying side — I soon realized that my writer friends were jealous that I’d gotten marched on and they hadn’t. I found myself shamelessly dropping it into conversation whenever possible. "Oh, students are marching against this thing I wrote," I’d grimace, in response to anyone’s "How are you?" I briefly fantasized about running for the board of PEN, the international writers’ organization devoted to protecting free expression.
Things seemed less amusing when I received an email from my university’s Title IX coordinator informing me that two students had filed Title IX complaints against me on the basis of the essay and "subsequent public statements" (which turned out to be a tweet), and that the university would retain an outside investigator to handle the complaints.
I stared at the email, which was under-explanatory in the extreme. I was being charged with retaliation, it said, though it failed to explain how an essay that mentioned no one by name could be construed as retaliatory, or how a publication fell under the province of Title IX, which, as I understood it, dealt with sexual misconduct and gender discrimination.
Carl Zimmer in his excellent blog, The Loom:
Scientists have observed adoption occurring in 120 species of mammals. Other species that are harder to study may be adopting, too. As for kangaroos, scientists have long known that if they put a joey in an unrelated female’s pouch, she will sometimes keep it. But King and her colleagues have now discovered that kangaroos will voluntarily adopt joeys in the wild. All told, they found that 11 of the 326 juveniles were adopted over their five-year study–a rate of about three percent. Given the commitment adoption demands from a mammal mother–a kangaroo mother needs a full year to raise a single joey to weaning–this discovery cries out for an explanation.
Over the years, researchers have proposed a number of different explanations for adoption. Some have suggested that mammals adopt young offspring of their relatives because they are genetically similar. By rearing the offspring of their kin, this argument goes, adoptive parents can ensure that some of their own genes get passed down to future generations.
According to another explanation, unrelated adults may adopt each other’s young because this kind of quid-pro-quo benefits everyone involved. And according to a third explanation, young adults adopt orphaned juveniles as a kind of apprenticeship. They learn some important lessons about how to raise young animals, which they can apply later to raising their own offspring.
These explanations share something in common. They all take adoption to have an evolutionary benefit. In the long run, the genes that make animals willing to adopt become more common thanks to natural selection.
But in the case of kangaroos–and perhaps other species, too–evolution may instead have made a mess of things. Adoption may not be an adaptation. It may be a maladaptation.
Vanessa Grigoriadis in the New York Times:
A few pages into “Primates of Park Avenue,” I raised an eyebrow as high as a McDonald’s arch. Was Wednesday Martin, a Midwestern-born Ph.D., trying to explain the rites of the Upper East Side to me, an autochthonous Manhattanite schooled at one of the neighborhood’s top “learning huts”? She was a late transfer to the New York troop — a particularly vicious troop, at that — and it’s a weak position to be in throughout the primate kingdom, whether human or monkey.
I underestimated Martin with few repercussions, but the SoulCycled, estrogen-dimmed and ravenously hungry young mothers who similarly exhibited New York’s inbred superciliousness have done so at their peril, because now she’s gone and told the world their tricks. “I was afraid to write this book,” Martin confesses, but I guess she got over it. Instead, she obsessively deconstructs the ways of her new tribe, from the obvious — “No one was fat. No one was ugly. No one was poor. Everyone was drinking” — to the equally obvious but narratively rich: “It is a game among a certain set to incite the envy of other women.”
The result is an amusing, perceptive and, at times, thrillingly evil takedown of upper-class culture by an outsider with a front-row seat. The price of the ticket, a newly purchased Park Avenue condop in the 70s with a closet designated exclusively for her handbags, wisely goes unmentioned, the better to establish rapport with readers in Des Moines.
Brit Bennett in Paris Review:
In 1864, a nine-year-old slave girl was punished for daydreaming. Distracted by rumors that her brother and father would be sold, she failed to remove worms from the tobacco leaves she was picking. The overseer didn’t whip her. Instead, he pried her mouth open, stuffed a worm inside, and forced her to eat it. This girl is not real. Her name is Addy Walker; she is an American Girl doll, one of eight historical dolls produced by the Pleasant Company who arrive with dresses, accessories, and a series of books about their lives. Of all the harrowing scenes I’ve encountered in slave narratives, I remember this scene from Meet Addy, her origin story, most vividly. How the worm—green, fat, and juicy—burst inside Addy’s mouth. At eight years old, I understood that slavery was cruel—I knew about hard labor and whippings—but the idea of a little girl being forced to eat a worm stunned me. I did not yet understand that violence is an art. There’s creativity to cruelty. What did I know of its boundaries and edges?
An American Girl store is designed like a little girl’s fantasyland, or what the Pleasant Company, owned by Mattel, imagines that to be. Pink glows from the walls; yellow shelves hold delicate dolls in display cases. Nurses tend to a hospital for defunct toys and a café hosts tea parties for girls and their dolls. The company has retired many of the historical American Girls from my childhood—the colonist Felicity, the frontierswoman Kirsten, and the World War II–era Molly, all among the original set of dolls, released in 1986—but Addy remains. Against the store’s backdrop of pink tea parties, her story seems even more harrowing. Addy escapes to the north with her mother, forced to leave her baby sister behind because her cries might alert slave-catchers. In Philadelphia, Addy struggles to adjust and dreams of her family reuniting. They do, it turns out, find each other eventually—a near impossibility for an actual enslaved family—but at no small cost. Her brother loses an arm fighting in the Civil War. Her surrogate grandparents die on the plantation before she can say goodbye. Other American Girls struggle, but Addy’s story is distinctly more traumatic. For seventeen years, Addy was the only black historical doll; she was the only nonwhite doll until 1998. If you were a white girl who wanted a historical doll who looked like you, you could imagine yourself in Samantha’s Victorian home or with Kirsten, weathering life on the prairie. If you were a black girl, you could only picture yourself as a runaway slave.
Christof Koch in Nature:
By and large, we watch movies to be entertained, not to be provoked into deep thought. Occasionally, a film does both. This year’s Ex Machina is one such gem. It prompted me to reflect upon the evolution of the idea of machine sentience over the past three decades of science fiction on film. I am a long-time student of the mind-body problem — how consciousness arises from the brain. There is a conundrum at the heart of this ancient dilemma, challenging both brain science and AI; and it is well captured by Ex Machina and two other SF movies. In essence, it lies in how we can ever be certain that a machine feels anything, is conscious.
...Enter Ex Machina, directed by Alex Garland. This intelligent and thoughtful mix of psycho-drama and SF thriller centers on a strange ménage à trois. Ava is a beauty with a difference (a phenomenal performance by Alicia Vikander); Caleb is a nerdy young programmer (Domhnall Gleeson); Nathan is a beastly, brilliant inventor and immensely rich tech-entrepreneur (Oscar Isaac). Caleb is selected by Nathan, a recluse, to spend a week at his live-in Arctic laboratory. He introduces Caleb to Ava, an advanced cyborg whose semi-transparent skull and body reveal inner workings, including a brain that is quasi-organic in some unspecified way. It’s a twist on Blade Runner: if Caleb interacts with Ava as he would with an alluring woman – while seeing clearly that she is not flesh and blood – that would testify to Ava’s ability to convince him she has real feelings. Ava and Caleb hit it off at first sight. Unlike Her, Ex Machina soon becomes a game of smoke and mirrors. Ava hints to Caleb that she doubts Nathan’s purely scientific motives; there are bizarre scenes such as Nathan doing a synchronized dance routine with a mute servant. Nathan’s lab becomes Bluebeard’s Castle, complete with locked rooms and heavy psychosexual undertones. Ex Machina’s ending, invoking the trope of the femme fatale, is logical, surprising and darker than Blade Runner’s. All three films showcase how the psychology of desire can be exploited to forge a powerful empathic response in their protagonists, sweeping away doubts about the object of their longing having sentience. It’s a Turing test based on lust, each movie an excursion into human social psychology and the attendant gender power politics.
Manhood: Badly educated men in rich countries have not adapted well to trade, technology or feminism
In The Economist:
For those at the top, James Brown’s observation that it is a man’s, man’s, man’s world still holds true. Some 95% of Fortune 500 CEOs are male, as are 98% of the self-made billionaires on the Forbes rich list and 93% of the world’s heads of government. In popular films fewer than a third of the characters who speak are women, and more than three-quarters of the protagonists are men. Yet the fact that the highest rungs have male feet all over them is scant comfort for the men at the bottom.
Technology and trade mean that rich countries have less use than they once did for workers who mainly offer muscle. A mechanical digger can replace dozens of men with spades; a Chinese steelworker is cheaper than an American. Men still dominate risky occupations such as roofer and taxi-driver, and jobs that require long stints away from home, such as trucker and oil-rig worker. And, other things being equal, dirty, dangerous and inconvenient jobs pay better than safe, clean ones. But the real money is in brain work, and here many men are lagging behind. Women outnumber them on university campuses in every region bar South Asia and sub-Saharan Africa. In the OECD men earn only 42% of degrees. Teenage boys in rich countries are 50% more likely than girls to flunk all three basic subjects in school: maths, reading and science.
The economic marginalisation this brings erodes family life. Women who enjoy much greater economic autonomy than their grandmothers did can afford to be correspondingly pickier about spouses, and they are not thrilled by husbands who are just another mouth to feed.
If the sort of labour that a man like Mr Redden might willingly perform with diligence and pride is no longer in great demand, that does not mean there are no jobs at all. Everywhere you look in Tallulah there are women working: in the motels that cater to passing truckers, in the restaurants that serve all-you-can-eat catfish buffets, in shops, clinics and local government offices. But though unskilled men might do some of those jobs, they are unlikely to want them or to be picked for them.
Read the rest here.
I gave birth to an incredibly beautiful daughter, her teeth,
her hair as though from the Song of Songs. And I
felt beautiful myself, thank you. Whereas she –
that's a completely different beauty,
that's beauty I want to protect.
If I had some sort of beauty I'd blush,
anyhow I probably do have some, guys
wouldn't chase after me as much if I didn't,
but I don't like my beauty, because guys
chase after it. My daughter’s beauty
is something else. My daughter’s beauty, I believe,
is the only hope
for this world.
by Justyna Bargielska
from Bach for my baby
publisher: Biuro Literackie, Wrocław, 2013
translation: Maria Jastrzębska
from: Versopolis.com, 2015
Thursday, May 28, 2015
Even the biographers, watching the life ‘start at zero’, have struggled to establish where the motivation for the inventiveness came from. The most popular hypothesis, not least because Hitchcock himself promoted it so vigorously, concerns timidity. ‘The man who excels at filming fear is himself a very fearful person,’ Truffaut observed, ‘and I suspect that this trait of his personality has a direct bearing on his success.’ The most substantial biography to date, by Patrick McGilligan, includes plenty of anecdotes about fear, but supplies little by way of evidence of its ultimate cause, and draws no conclusions. Peter Ackroyd, however, is firmly of the Truffaut school. His Hitchcock trembles from the outset: ‘Fear fell upon him in early life.’ At the age of four (or 11, or …), his father had him locked up for a few minutes in a police cell, an episode that became, as Michael Wood puts it, the ‘myth of origin’ for his powerful distrust of authority. Ackroyd rummages dutifully for further evidence. Was young Alfred beaten at school by a ‘black-robed Jesuit’? Or caught out in the open when the Zeppelins raided London in 1915? Did he read too much Edgar Allan Poe? It doesn’t really add up to very much. And yet – or therefore – the strong conviction persists. Fear is the key; and not just to the life. Interview the films, he once told an inquisitive journalist. Those who have interviewed the films often conclude that, like their creator, they too tremble. ‘Hitchcock was a frightened man,’ Wood writes, ‘who got his fears to work for him on film.’
Eileen Chang started writing early, completing her first novel at the age of 12. By the time the Communist government came to power, Eileen Chang was a well-known writer in China (now considered by many to be China’s first modernist). Dubious about her role in this new society, however, Chang chose self-imposed exile. She moved to Hong Kong, then Japan, then back to Hong Kong, and eventually to Los Angeles, where she died alone in her apartment in 1995. Chang never again returned to mainland China. In Hong Kong, America became Eileen Chang’s patron. For three years, she worked as a translator for the United States Information Service. Then the USIS hired Chang to write anti-Communist propaganda in the form of two novels: The Rice Sprout Song and Naked Earth. Wanting propaganda, the Information Service encouraged Chang to be unsparing in her depiction of China’s confessional spectacles. And so she was.
In another early scene from Naked Earth, a Mass Meeting is held in the vacant lot in front of an ancestral temple. Tang, a Middling Farmer (neither unfortunately rich nor blessedly poor), is brought to a platform for his session. Schoolchildren wave paper flags and sing loudly. They are accompanied by militiamen, members of the Farm Workers Association, the Women’s Association, the Youth Vanguard Corps.
John Ashbery’s latest book of poems—his twenty-sixth, not counting various compilations and re-issues—is “Breezeway” (Ecco). As with most of Ashbery’s work, its medium is composed partly of language foraged from everyday American speech. The effect is sometimes unnerving, as though somebody had given you your own garbage back as a gift, cheerfully wrapped. Ashbery is nearly eighty-eight; more than ever, his style is a net for the weirdest linguistic flotsam. Few others of his generation would think to put “lemon telenovela” or “texasburger” in a poem, or write these lines: “Thanks / to a snakeskin toupee, my grayish push boots / exhale new patina / prestige. Exeunt the Kardashians.” He has gone farther from literature within literature than any poet alive. His game is to make an intentionally frivolous style express the full range of human feeling, and he remains funnier and better at it, a game he invented, than his many imitators.
It’s common for people to prefer a prior Ashbery, though few can agree on which one. There is the noncompliant poet of “The Tennis Court Oath,” his 1962 book, giddy in his defiance of meaning; the poet of childhood and its longueurs whom we encounter in his seven-hundred-and-thirty-nine-line poem, “The Skaters” (1966); the sublime meditative poet of “Self-Portrait in a Convex Mirror” (1975); the elegist of “Your Name Here” (2000).
Taki in Spectator:
Fitzgerald was famously obsessed with the mysteries of great wealth, but back then wealth was something new among Americans. Poor old Scott wrote more about the ruinous effects of wealth, which is a very large theme even today. I recently read a couple of articles on Fitzgerald, one claiming that he wrote Gatsby in Great Neck, Long Island, where the action takes place, the other that he wrote the greatest of American novels in Antibes. I believe both writers are correct. Fitzgerald started the novel in Long Island and finished it in Antibes. Detective Taki solves the riddle in one short declarative sentence.
Scott and Zelda’s two granddaughters, their mother being Scottie, the couple’s only issue, are very much with us and recently visited Juan-les-Pins and the hotel that was once the home of their grandparents. The cruel irony is that Scott died broke and forgotten, and his granddaughters are very rich because of his immortal work. Although The Great Gatsby is considered Scott’s greatest work, the greatest literary critic of our time, Taki, thinks otherwise. He gives the nod to Tender Is the Night. When Fitzgerald showed Papa the manuscript of Gatsby, Hemingway did not brood, he sprang into action. He came up with The Sun Also Rises, not a bad response. It was almost America versus Europe. Great stuff. Which brings me to Hollywood. The best that degenerate place has managed in filming either writer’s works was — in my not so humble opinion — The Short Happy Life of Francis Macomber, Papa’s short story about grace under pressure.
Jessica Schmerler in Scientific American:
Most of us have experienced writer’s block at some point, sitting down to write, paint or compose only to find we can’t get the creative juices flowing. Most frustrating of all, the more effort and thought we put into it, the harder it may become. Now, at least, neuroscientists might have found a clue about why it is so hard to force that creative spark. Researchers at Stanford University recently set out to explore the neural basis of creativity and came up with surprising findings. Their study, published May 28 in Scientific Reports, suggests the cerebellum, the brain region typically associated with movement, is involved in creativity. If so, the discovery could change our understanding of the neurological mechanisms behind some thought processes.
There is a scientific belief that the cerebral cortex is the part of the brain that “makes us human,” and that the two hemispheres of the cortex differentiate the creative thinkers from the logical thinkers (the “right-brained” from the “left-brained”). This has fostered the view that “neurological processes can be divided into ‘higher’ cognitive functions and ‘lower’ basic sensory-motor functions,” says Robert Barton, an evolutionary biologist at Durham University in England who was not involved in this study—but the latest research calls that understanding into question.
Sean Carroll in Preposterous Universe:
The more recent “news” is not actually about warp drive at all. It’s about propellantless space drives — which are, if anything, even less believable than the warp drives. (There is a whole zoo of nomenclature devoted to categorizing all of the non-existent technologies of this general ilk, which I won’t bother to keep straight.) Warp drives were at least inspired by some respectable science — Miguel Alcubierre’s energy-condition-violating spacetime. The “propellantless” stuff, on the other hand, just says “Laws of physics? Screw em.”
You may have heard of a little thing called Newton’s Third Law of Motion — for every action there is an equal and opposite reaction. If you want to go forward, you have to push on something or propel something backwards. The plucky NASA engineers in question aren’t hampered by such musty old ideas. As others have pointed out, what they’re proposing is very much like saying that you can sit in your car and start it moving by pushing on the steering wheel.
I’m not going to go through the various claims and attempt to sort out why they’re wrong. I’m not even an engineer! My point is a higher-level one: there is no reason whatsoever why these claims should be given the slightest bit of credence, even by complete non-experts. The fact that so many media outlets (with some happy exceptions) have credulously reported on it is extraordinarily depressing.
Now, this might sound like a shockingly anti-scientific attitude. After all, I certainly haven’t gone through the experimental results carefully. And it’s a bedrock principle of science that all of our theories are fundamentally up for grabs if we collect reliable evidence against them — even one so well-established as conservation of momentum. So isn’t the proper scientific attitude to take a careful look at the data, and wait until more conclusive experiments have been done before passing judgment? (And in the meantime make some artist’s impressions of what our eventual spaceships might look like?)
No. That is not the proper scientific attitude. For a very scientific reason: life is too short.
You are being born. Feels good.
Something enormous kisses you.
Its eye surveys your revolutions.
Relaxed in your new nudity,
you work your labyrinthine ears,
those perfect disciples,
registering all that hums, ticks.
O you encyclopedia you,
you do not know what I know,
how blank the cold world can grow.
But let the addendums come later.
I listen to the dust from the city
gather on the necks of the saints
at the hospital’s exits I exit.
And so I say to you yes you:
everyone’s a fugitive. Everyone.
by Spencer Reece
from The Clerk’s Tale
Daniel Nexon, over at Duck of Minerva:
We now have a lot of different meta-narratives about alleged fraud in “When Contact Changes Minds: An Experiment in the Transmission of Support for Gay Equality.” These reflect not only different dimensions of the story, but the different interests at stake.
One set concerns confirmation bias and the left-leaning orientations of a majority of political scientists. At First Things, for example, Matthew J. Franck contrasts the reception of the LaCour and Green study (positive) with that of Mark Regnerus’ finding of inferior outcomes for children of gay parents (negative). There’s some truth here. Regnerus’ study was terminally flawed. LaCour and Green’s study derived, most likely, from fraudulent data. Still, one comported with widespread ideological priors in the field, while the other did not. That surely shaped their differential reception. But so did the startling strength of the latter’s findings, as well as the way they cut against conventional wisdom on the determinants of successful persuasion.
We might describe another as “science worked.”
This narrative sometimes strays into the triumphalist: rather than exposing problems with the way political science operates, the scandal shows how the discipline is becoming more scientific and thus more able to catch—and correct—flawed studies. Again, there’s something to this. To the extent that political scientists utilize, say, experiments, then that opens up the possibility of creating fraudulent experimental data but also of uncovering such fraud.
Peter Watts in Aeon (Illustration by Richard Wilkinson):
Rajesh Rao (of the University of Washington's Center for Sensorimotor Neural Engineering) reported what appears to be a real Alien Hand Network – and going Pais-Vieira one better, he built it out of people. Someone thinks a command; downstream, someone else responds by pushing a button without conscious intent. Now we're getting somewhere.
There’s a machine in a lab in Berkeley, California, that can read the voxels right off your visual cortex and figure out what you’re looking at based solely on brain activity. One of its creators, Kendrick Kay, suggested back in 2008 that we’d eventually be able to read dreams (also, that we might want to take a closer look at certain privacy issues before that happened). His best guess was that this might happen a few decades down the road – but it took only four years for a computer in a Japanese lab to predict the content of hypnagogic hallucinations (essentially, dreams without REM) at 60 per cent accuracy, based entirely on fMRI data.
When Moore’s Law shaves that much time off the predictions of experts, it’s not too early to start wondering about consequences. What are the implications of a technology that seems to be converging on the sharing of consciousness?
It would be a lot easier to answer that question if anyone knew what consciousness is. There’s no shortage of theories. The neuroscientist Giulio Tononi at the University of Wisconsin-Madison claims that consciousness reflects the integration of distributed brain functions. A model developed by Ezequiel Morsella, of San Francisco State University, describes it as a mediator between conflicting motor commands. The panpsychics regard it as a basic property of matter – like charge, or mass – and believe that our brains don’t generate the stuff so much as filter it from the ether like some kind of organic spirit-catchers. Neuroscience superstar V S Ramachandran (University of California in San Diego) blames everything on mirror neurons; Princeton’s Michael Graziano – right here in Aeon – describes it as an experiential map.
I think they’re all running a game on us. Their models – right or wrong – describe computation, not awareness. There’s no great mystery to intelligence; it’s easy to see how natural selection would promote flexible problem-solving, the triage of sensory input, the high-grading of relevant data (aka attention).
But why would any of that be self-aware?
Shail Mayaram in The Hindu:
On my last day in Tunis I was finally able to perform my ziyarat (pilgrimage) to the mausoleum of the great Sufi, Abu al-Hasan ash-Shadhili, popularly known as Imam ash-Shadhili or Sidi Belhassen. It was an overwhelming experience. An all-woman zikr was in progress when I entered the hall. The men had been relegated to an outer room and the inner hall reverberated with women’s voices singing a song about the saint. Zikr (dhikr) has many meanings ranging from prayer to recitation to repetition of an expression of praise. Here it culminated in a trance-generating incantation of ‘Allahu Akbar’ with the repetitive ‘Akbar, Akbar, Akbar’ becoming like an ‘Om’ or a Buddhist chant. I had come to Tunis to participate in a panel at the World Social Forum 2015, part of an initiative of the South Asian Dialogues on Ecological Democracy, to engage in a larger global debate on Islam and democracy. My presentation focussed on the philosophical contribution of Sufi brotherhoods such as the Chishtis, Qadiris and Madaris, as well as of independent qalandars in the Indian subcontinent. The Chishtis and Qadiris are close cousins of the Shadhili (Shazili) brotherhood, which was important in Egypt, Tunisia, Algeria and Morocco. Sidi Belhassen was a Shadhili Sufi who came from Morocco and established his first zawiya in Tunis in 1227. In India, the Chishti order had already been established by Muinuddin Chishti from Chisht, Afghanistan. Some Shadhili and Chishti Sufis are authors of philosophical treatises.
The big question, of course, is why the antipathy between Sufis and Salafis is becoming a civil war in Islam. Indeed, the war itself is fairly one-sided, as the other side is the victim of the attack and has no strategy for a concerted counter-attack. But without romanticising either Sufism — any ‘ism’ is problematic — or the “good Muslim”, we only have to peruse early Sufi medieval texts to see how Sufi philosophies provide major sources of resistance to Salafist and other exclusionary ideologies. They go back to a period when religion and philosophy were not yet divorced. These philosophies also suggest Islam’s civilisational dialogue with Greek and Hindu-Buddhist philosophies.
A few years ago, in Pakistan, I had visited the mausoleum of the great Sufi Abul Hassan Ali Hajvari, popularly called Daata Sahib (990-1077), now behind barbed wire after its bombing in 2010. Abul Hassan Ali Hajvari is the author of Kashf Al Mahjub or The Revelation of the Veiled, a text in Persian that the philosopher Ghazala Irfan teaches at the Lahore University of Management Sciences. I had also made another pilgrimage to Pakpattan where the mausoleum of Baba Farid, one of the great Chishti Sufis, had been similarly attacked.
Wednesday, May 27, 2015
Kuniyoshi’s innovative genius came in his blending of Japanese idioms with American folk art influences as well as that of European modernism. "His work is a distinctive expression of many strands of early twentieth-century American art flavored with his sly humor, idiosyncratic imagination, personal experience, and subtle references to his Japanese heritage," writes Moser in an essay.
It was early visits to an artists' colony in Ogunquit, Maine, sponsored by his friend and patron Hamilton Easter Field, that led Kuniyoshi to the kind of flattened spaces, squat figures and diminished single-point perspective that marked his work, says Wolf, a professor of art at Bard College.
A visit to Europe in 1925 gave a more provocative tone to Kuniyoshi's work, as well as an interest in circuses. His 1925 Circus Girl Resting gained wide renown when it was chosen as part of a 1947 U.S. State Department funded exhibition “Advancing American Art,” a sort of traveling show of cultural diplomacy that also featured work by Hopper, O’Keeffe, Stuart Davis and Marsden Hartley.
There may be a sense in which the Greek crisis is indeed our era’s Bolshevik Revolution or Spanish Civil War, namely that it has become the destination of choice for what we might call “political travel.” Political travel involves immersing yourself in the domestic concerns of another country on the basis of their putative significance for the world at large. This can involve the desire to be there when it all happens, but it doesn’t have to—what is crucial is the desire to throw your heart and soul into mastering the internal complexities of a far-off land, in hopes of being there intellectually when it all happens. Political travel is easy to mock, but at root it reflects a perfectly respectable desire to understand your world and to change it. The problem is that like any travel it runs the risk of turning into tourism: the consumption of an “other” neatly packaged to fit into our existing mental landscape without disturbing or unsettling it.
There was a lot of (mostly leftist) political tourism over the last century, from extreme cases like Foucault on the Iranian Revolution to more forgivable ones like Chomsky on Chávez or Zizek on the Arab Spring. But the archetypal political tourist was probably Lord Byron, who joined the Greek struggle for independence in 1823.
Kevin Hartnett in Quanta (image Hannes Hummel for Quanta Magazine):
Voevodsky, 48, is a permanent faculty member at the Institute for Advanced Study (IAS) in Princeton, N.J. He was born in Moscow but speaks nearly flawless English, and he has the confident bearing of someone who has no need to prove himself to anyone. In 2002 he won the Fields Medal, which is often considered the most prestigious award in mathematics.
Now, as their train approached the city, Voevodsky pulled out his laptop and opened a program called Coq, a proof assistant that provides mathematicians with an environment in which to write mathematical arguments. Awodey, a mathematician and logician at Carnegie Mellon University in Pittsburgh, Pa., followed along as Voevodsky wrote a definition of a mathematical object using a new formalism he had created, called univalent foundations. It took Voevodsky 15 minutes to write the definition.
“I was trying to convince [Awodey] to do [his mathematics in Coq],” Voevodsky explained during a lecture this past fall. “I was trying to convince him that it’s easy to do.”
The idea of doing mathematics in a program like Coq has a long history. The appeal is simple: Rather than relying on fallible human beings to check proofs, you can turn the job over to computers, which can tell whether a proof is correct with complete certainty. Despite this advantage, computer proof assistants haven’t been widely adopted in mainstream mathematics. This is partly because translating everyday math into terms a computer can understand is cumbersome and, in the eyes of many mathematicians, not worth the effort.
For nearly a decade, Voevodsky has been advocating the virtues of computer proof assistants and developing univalent foundations in order to bring the languages of mathematics and computer programming closer together. As he sees it, the move to computer formalization is necessary because some branches of mathematics have become too abstract to be reliably checked by people.
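The "complete certainty" Voevodsky describes is easy to see in miniature. His demonstration used Coq; purely as an illustrative sketch, here is what a fully machine-checked proof of a toy fact – commutativity of addition on the natural numbers – looks like in Lean, a closely related proof assistant (the theorem name and lemma choices here are just one way of writing it):

```lean
-- A machine-checked proof that addition of natural numbers commutes.
-- Each tactic step is verified by Lean's kernel; if any step were
-- wrong, the file simply would not compile.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero =>
    -- Base case: m + 0 = 0 + m
    simp
  | succ n ih =>
    -- Inductive step: from ih : m + n = n + m,
    -- show m + (n + 1) = (n + 1) + m
    rw [Nat.add_succ, ih, Nat.succ_add]
```

Once this compiles, no human referee needs to re-check the steps – which is exactly the appeal of doing mathematics this way, and exactly the burden: every "obvious" step that a journal reviewer would wave through must be spelled out until the machine accepts it.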
Justin E. H. Smith over at his website:
In the online activity many young people in North America mistake for political engagement, 'white' has become a peculiar sort of insult: a flippant meme masquerading as a serious analytic category. We witness today a constant jockeying for prestige, almost entirely among white men, in which each one strives to publicly display that he is the first and only to have overcome the various pathologies, real and imagined, of white-man-hood. As the sharp critic Fredrik DeBoer has observed, this impoverishment of political debate now leaves us with the obscene and absurd phenomenon of the 'White Off':
A White Off is a peculiar 21st-century phenomenon where white progressives try to prove that the other white progressives they’re arguing with are The Real Whites. It’s a contest in shamelessness: who can be more brazen in reducing race to a pure argumentative cudgel? Who feels less guilt about using the fight against racism as a way to elevate oneself in a social hierarchy? Which white person will be the first to pull out “white” as a pejorative in a way that demonstrates the toothlessness of the concept? Within progressivism today, there is an absolute lack of shame or self-criticism about reducing racial discourse to a matter of straightforward personal branding and social signaling. It turns my stomach.
As for me, I live in Europe, I am not terribly invested in social-media battles of the sort DeBoer seems to enjoy, and so I have only a passing familiarity with the phenomena at issue. How then do I spend my time? Well, when not wondering what the hell is wrong with my fellow Americans, I often find myself thinking about Russia: What is it? What were the historical forces that made it possible for Muscovy to rise to become the principal counterhegemonic force throughout the Pax Americana of the 20th century, and to reappear, some years into the 21st, as a significant player on the world scene?
And in this connection, I have begun to wonder whether this 'white' thing is not perhaps a symptom of a distinctly 'Atlanticist' world view, and whether it might not have somewhat less purchase when one instead looks at the world from a 'Eurasianist' perspective. These are of course the sinister Aleksandr Dugin's terms, and when I invoke them I do not mean to endorse them as true, but rather to make some progress toward understanding why the Russians in particular and the citizens of the former Soviet bloc in general constitute such a peculiar tertium quid in relation to the schemes for carving up of the basic human subkinds that are general currency among American bloggers: they don't see themselves in our Atlantic-centered racial categories, and that exclusion, that irrelevance of our grids, only makes them more estranged and hostile, less NATO-oid. The war in Europe that appears to be taking shape at present is going to be between groups of people Aaron Bady, say, would call 'white', but it's pretty clear that that designation doesn't mean much to at least one of the sides, and that there's a long, deep continental history that's being overlooked when Eurasians, and notably Russians, are thought of in these Atlanticizing terms.
Victoria Schlesinger in Aeon (Marbled Salamander. Photo by Michel Gunther/Biosphoto/Corbis):
Before there is a species, there’s a muddled period of innumerable changes as a group of individuals diverges, gene by gene, from their ancestors into a new species. The point at which those tiny changes add up to a separate species has been debated since the days of Aristotle. Further complicating matters, our basic litmus test for delineating species – viable offspring – is shaky at best. We know that when grizzly bears and polar bears mate, or coyote and wolf for that matter, the two species produce hybrid young – a combination individual that reflects some of the traits of each parent. It’s no wonder that roughly 26 concepts compete for the definition of species. Species are not so much a set of fixed traits as a temporary collection of them along a fluid continuum. The field guides belie variety within a species because it is so copious and ever-changing that you couldn’t get it on paper if you wanted to.
Scientists have long recognised the incredible diversity within a species. But they thought it reflected evolutionary changes that unfolded imperceptibly, over millions of years. That divergence between populations within a species was enforced, according to Ernst Mayr, the great evolutionary biologist of the 1940s, when a population was separated from the rest of the species by a mountain range or a desert, preventing breeding across the divide over geologic scales of time. Without the separation, gene flow was relentless. But as the separation persisted, the isolated population grew apart and speciation occurred.
In the mid-1960s, the biologist Paul Ehrlich – author of The Population Bomb (1968) – and his Stanford University colleague Peter Raven challenged Mayr’s ideas about speciation. They had studied checkerspot butterflies living in the Jasper Ridge Biological Preserve in California, and it soon became clear that they were not examining a single population. Through years of capturing, marking and then recapturing the butterflies, they were able to prove that within the population, spread over just 50 acres of suitable checkerspot habitat, there were three groups that rarely interacted despite their very close proximity.
Among other ideas, Ehrlich and Raven argued in a now classic paper from 1969 that gene flow was not as predictable and ubiquitous as Mayr and his cohort maintained, and thus evolutionary divergence between neighbouring groups in a population was probably common.