What really went on in Wonderland

Anthony Lane at The New Yorker:

Legend has it that a book came out of a boat trip, but nothing is ever that simple. The mathematician, moonlighting as an alchemist, turned things both animate and inanimate into different substances. Dodgson became a dodo (a word that toys not just with extinction but with Dodgson’s own tendency to stammer), while Duckworth, who later became chaplain to Queen Victoria, shrank into a duck; both creatures splash about not in a sun-warmed river but in a pool of a child’s tears. Alice Liddell became “Alice,” with no surname to tether her. “Alice’s Adventures Underground” became what we call, for the sake of convenience, “Alice in Wonderland,” although there is no such book. “Alice’s Adventures in Wonderland” was published in 1865; the hundred-and-fiftieth anniversary has been widely celebrated this year. In 1871 came “Through the Looking-Glass, and What Alice Found There”—another title that we often elide or get wrong. In that fable, our heroine walks into a wood where objects lose their names. She puts her hand on a tree, and can’t summon the word for it. Even her own identity escapes her: “Then it really has happened, after all! And now, who am I?”

Douglas-Fairhurst is at home with transformation. His previous work, “Becoming Dickens” (2012), the best and the most fine-fingered of the many books published to coincide with the bicentenary of the novelist’s birth, touched upon the genesis of “The Pickwick Papers,” “Oliver Twist,” and other early successes. If Dickens scholarship is a crowded field, however, Carroll studies should have a sign nailed firmly above the door: “Standing Room Only.”

More here.

How textiles repeatedly revolutionized technology

Virginia Postrel in Aeon:

The story of technology is in fact the story of textiles. From the most ancient times to the present, so too is the story of economic development and global trade. The origins of chemistry lie in the colouring and finishing of cloth. The textile business funded the Italian Renaissance and the Mughal Empire; it left us double-entry bookkeeping and letters of credit, Michelangelo's David and the Taj Mahal. As much as spices or gold, the quest for fabrics and dyestuffs drew sailors across strange seas. In ways both subtle and obvious, textiles made our world.

Most conspicuously, the Industrial Revolution started with the spinning jenny, the water frame, and the thread-producing mills in northern England that installed them. Before railroads or automobiles or steel mills, fortunes were made in textile technology. The new mills altered where people lived and how they worked. And the inexpensive fabrics they produced changed the way ordinary people looked.

Then, a second conspicuous wave of textile innovation began with the purple shade that francophile marketers named mauve. The invention of aniline dyes in the mid-19th century made a full spectrum of colour – including newly intense blacks – universally available. The synthetic-dye business gave rise to the modern chemical industry, and yet more technology-based fortunes.

More here.

Arabic-language, M.I.A.-inspired Israeli girl band

Gaar Adams in Foreign Policy:

The music video for “Habib Galbi” (Love of My Heart), a sorrowful Yemeni folk song, opens with a simple shot across the desert. Inside a small hut, an exasperated woman pulls back the woven curtain of a Bedouin tent and croons in Arabic over a hollow, hypnotic drumbeat and ghostly minor key: “Love of my heart and eyes, it is a wonder who has turned you against me.”

From the shisha-smoking old lady with kohl-lined eyes, to the Yemeni dance sequences and classically Arabic mournful undertones, “Habib Galbi” looks like it could be straight out of southern Arabia. And in some ways, it is: The song is sung in authentic Yemeni dialect and is composed from the lyrics of ancient Yemeni folk songs. When a Yemeni friend recently played “Habib Galbi” for his elderly grandmother in Sanaa, their accents were so good she thought that the all-girl singing trio might be from the Haraz, a rugged mountainous region just west of the capital.

But the sandy landscape in the music video is far from the Haraz Mountains — it was shot over 1,500 miles away in the Arabah region near the Dead Sea. Though the Arabic may sound effortless, those singing it actually only know the language as a second tongue. And the band — called A-Wa, a stylized transliteration of Arabic slang for “yeah” — hasn’t even come close to stepping foot in Yemen. They’re Israeli.

More here.

Hacking the Brain: How we might make ourselves smarter in the future

Maria Konnikova in The Atlantic:

The perfectibility of the human mind is a theme that has captured our imagination for centuries—the notion that, with the right tools, the right approach, the right attitude, we might become better, smarter versions of ourselves. We cling to myths like “the 10 percent brain”—which holds that the vast majority of our thinking power remains untapped—in part because we hope the minds of the future will be stronger than those of today. It’s as much a personal hope as a hope for civilization: If we’re already running at full capacity, we’re stuck, but what if we’re using only a small fraction of our potential? Well, then the sky’s the limit. But this dream has a dark side: The possibility of a dystopia where an individual’s fate is determined wholly by his or her access to cognition-enhancing technology. Where some ultra-elites are allowed to push the limits of human intelligence, while the less fortunate lose any chance of upward mobility. Where some Big Brother–like figure could gain control of our minds and decide how well we function.

What’s possible now, and what may one day be? In a series of conversations with neuroscientists and futurists, I glimpsed a vision of a world where cognitive enhancement is the norm. Here’s what that might look like, and how we can begin thinking about the implications.

More here.

The Evidence Points to a Better Way to Fight Insomnia

Austin Frakt in The New York Times:

Insomnia is worth curing. Though causality is hard to assess, chronic insomnia is associated with greater risk of anxiety, depression, hypertension, diabetes, accidents and pain. Not surprisingly, and my own experience notwithstanding, it is also associated with lower productivity at work. Patients who are successfully treated experience improved mood, and they feel healthier, function better and have fewer symptoms of depression.

Which remedy would be best for me? A barrage of ads promises that Lunesta, Ambien, Restoril and other drugs will deliver sleep to minds that resist it. Before I reached for the pills, I looked at the data. Specifically, for evidence-based guidance, I turned to comparative effectiveness research. That’s the study of the effects of one therapy against another. This kind of head-to-head evaluation offers ideal data to help patients and clinicians make informed treatment decisions. As obvious as that seems, it’s not the norm. Most clinical drug trials, for instance, compare a drug with a placebo, because that’s all that’s required for F.D.A. approval. In recognition of this, in recent years more federal funding has become available for comparative effectiveness research.

When it comes to insomnia, comparative effectiveness studies reveal that sleep medications aren’t the best bet for a cure, despite what the commercials say. Several clinical trials have found that they’re outperformed by cognitive behavioral therapy. C.B.T. for insomnia (or C.B.T.-I.) goes beyond the “sleep hygiene” most people know, though many don’t employ — like avoiding alcohol or caffeine near bedtime and reserving one’s bed for sleep (not reading or watching TV, for example). C.B.T. adds — through therapy visits or via self-guided treatments — sticking to a consistent wake time (even on weekends), relaxation techniques and learning to rid oneself of negative attitudes and thoughts about sleep.

More here.

Tuesday Poem

Homan and Chicago Ave.

Cross the blood
that quilts your busted lip
with the tender tip
of   your tongue. That lip’s
blood is brackish and white
meat flares from the black
swell. You crossed your mama’s
mind so call her sometimes.
She dreams your dead daddy
still puts his hands on her
waist. She calls his name
then crosses herself, calls
the police then crosses
her fingers. Cross me
and get cut across your cheek,
its fat bag full of   bad words
and cheap liquor you hide
from your badass kids. Make
a wish for bad weather
when the hoodlums get to shooting
in a good summer’s heat.
Cross the territory between
two gangs and feel eyes stare
and cross in a blur of crosshairs.
When a shot man lands
in the garden of trash the block
flares up like an appetite
spurred on by the sight
of prey, by the slurred
prayer of a man so death-close
he sees buzzards burrow
their bladed beaks into
his entry wound. Tune
the trumpets. Make way through
dusk’s clutter. After death
the dead cross over into song,
their bones tuning-forked
into vibrancy. Cross your lips,
mutiny against all speech
when a corpse starts singing
despite its leaded larynx. Don’t
say miracle when butterflies
break from a death-gaped skull,
rout the sky, and scatter.

by Phillip B. Williams
from Poetry Magazine, 2013

Mind Your Own Business

Barbara Ehrenreich in The Baffler (image by Lisa Haney):

At about the beginning of this decade, mass-market mindfulness rolled out of the Bay Area like a brand new app. Very much like an app, in fact, or a whole swarm of apps. Previous self-improvement trends had been transmitted via books, inspirational speakers, and CDs; now, mindfulness could be carried around on a smartphone. There are hundreds of them, these mindfulness apps, bearing names like Smiling Mind and Buddhify. A typical example features timed stretches of meditation, as brief as one minute, accompanied by soothing voices, soporific music, and images of forests and waterfalls.

This is Buddhism sliced up and commodified, and, in case the connection to the tech industry is unclear, a Silicon Valley venture capitalist blurbed a seminal mindfulness manual by calling it “the instruction manual that should come with our iPhones and BlackBerries.” It’s enough to make you think that the actual Buddha devoted all his time under the Bodhi Tree to product testing. In the mindfulness lexicon, the word “enlightenment” doesn’t have a place.

In California, at least, mindfulness and other conveniently accessible derivatives of Buddhism flourished well before BlackBerries. I first heard the word in 1998 from a wealthy landlady in Berkeley, advising me to be “mindful” of the suffocating Martha Stewart-ish decor of the apartment I was renting from her, which of course I was doing everything possible to un-see. A possible connection between her “mindfulness” and Buddhism emerged only when I had to turn to a tenants’ rights group to collect my security deposit. She countered with a letter accusing people like me—leftists, I suppose, or renters—of oppressing Tibetans and disrespecting the Dalai Lama.

More here.

The Science of Scarcity

Cara Feinberg in Harvard Magazine (Photograph by Jim Harrison):

TOWARD THE END of World War II, while thousands of Europeans were dying of hunger, 36 men at the University of Minnesota volunteered for a study that would send them to the brink of starvation. Allied troops advancing into German-occupied territories with supplies and food were encountering droves of skeletal people they had no idea how to safely renourish, and researchers at the university had designed a study they hoped might reveal the best methods of doing so. But first, their volunteers had to agree to starve.

The physical toll on these men was alarming: their metabolism slowed by 40 percent; sitting on atrophied muscles became painful; though their limbs were skeletal, their fluid-filled bellies looked curiously stout. But researchers also observed disturbing mental effects they hadn’t expected: obsessions about cookbooks and recipes developed; men with no previous interest in food thought—and talked—about nothing else. Overwhelming, uncontrollable thoughts had taken over, and as one participant later recalled, “Food became the one central and only thing really in one’s life.” There was no room left for anything else.

Though these odd behaviors were just a footnote in the original Minnesota study, to professor of economics Sendhil Mullainathan, who works on contemporary issues of poverty, they were among the most intriguing findings. Nearly 70 years after publication, that “footnote” showed something remarkable: scarcity had stolen more than flesh and muscle. It had captured the starving men’s minds.

More here.

A Crisis at the Edge of Physics

Adam Frank and Marcelo Gleiser in the NYT (image by Gérard DuBois):

DO physicists need empirical evidence to confirm their theories?

You may think that the answer is an obvious yes, experimental confirmation being the very heart of science. But a growing controversy at the frontiers of physics and cosmology suggests that the situation is not so simple.

A few months ago in the journal Nature, two leading researchers, George Ellis and Joseph Silk, published a controversial piece called “Scientific Method: Defend the Integrity of Physics.” They criticized a newfound willingness among some scientists to explicitly set aside the need for experimental confirmation of today’s most ambitious cosmic theories — so long as those theories are “sufficiently elegant and explanatory.” Despite working at the cutting edge of knowledge, such scientists are, for Professors Ellis and Silk, “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical.”

Whether or not you agree with them, the professors have identified a mounting concern in fundamental physics: Today, our most ambitious science can seem at odds with the empirical methodology that has historically given the field its credibility.

How did we get to this impasse?

More here.

How elite soccer illustrates an ancient paradox and a current problem

by Emrys Westacott

The market is efficient. The market knows best. This belief underlies much contemporary theory and practice, especially in the realm of government policy. It has been used, for instance, to justify privatizing the railways and the post office in the UK, and it forms a central plank in the arguments of those who oppose a government-run national health care system in the US.

The basic idea is simple enough. People express their preferences through their spending habits; they vote with their wallets. If DVDs replace video tapes, or if Amazon puts Borders Books out of business, that is just efficiency in action, with the market performing the function that natural selection performs in the course of evolution. And just as evolutionary biologists do not criticize environmental conditions (although they may sometimes put on another hat and seek to protect threatened species or habitats), so economists, insofar as they are trying to be scientific, will not criticize consumer preferences. About expressed preferences there is no disputing.

But of course, as engaged, concerned, interested, moralizing, and occasionally sanctimonious human beings, most of us do make value judgements about people's preferences. We do this in one of two ways.

1) We normatively judge the preferences themselves. E.g. we criticize people (including ourselves) for drinking too much, eating unhealthy foods, watching stupid TV shows, spending too much time playing video games, or engaging in conspicuous consumption. And we applaud people for learning new skills, cultivating their talents, supporting a local enterprise, or giving to charity.

2) We evaluate how well people's preferences, as expressed through their actions, will help them realize their ultimate goals. E.g. Teachers tell students that if they want to be professionally successful they should study more and party less. Psychologists tell us all that if we want to make ourselves happier we should spend less on ourselves and more on others.

Often, the first sort of evaluation is really a version of the second, but that needn't concern us here. It's the second kind that interests me.

We all often act on specific short-term preferences in a way that produces long-term consequences that are contrary in some ways to what we really desire. The paradox that by pursuing what we think we want we fail to attain what we really want was first explored by Plato in the Gorgias and the Republic.[1] I believe top-flight soccer offers an interesting and instructive illustration of this paradox.

Read more »

An Interview with Jeffrey Renard Allen

by Randolyn Zinn

Read Jeffrey Renard Allen’s masterful novel Song of the Shank (published by Graywolf Press) and you'll meet Thomas Greene Wiggins, a 19th-century slave and musical genius who performed as Blind Tom. The book earned rave reviews, was named a New York Times Notable Book of 2014, and was a finalist for the PEN/Faulkner Award. This spring Allen received a Guggenheim Foundation award to write a new book, and this fall he will become a Professor of Creative Writing at the University of Virginia. He earned a PhD in English and Creative Writing from the University of Illinois at Chicago and has been a Professor of English at Queens College and an Instructor in the MFA Creative Writing Program at The New School.

We met at the Cornelia Street Café in NYC last month where our conversation began with a discussion about the style of Song of the Shank.

Randolyn Zinn: Your narrative stance reaches deeply into the heart of whatever you’re describing, be it place, period, landscape or a character’s interiority.

Jeffrey Renard Allen: That sounds about right. I was having a conversation about this with my editor. I said that I have a thick style. Meaning that in this book, in particular, there are a lot of voices. I am an expansive writer and this density happens at the level of the sentence. Or the paragraph. I’m interested in all the avenues of a character.

RZ: You don’t stand back at a distance describing characters; you write from the center of his or her experience and readers are pulled right in.

JA: Yeah, I’m very much about trying to write through the mind of the character, yet have enough liberty to be elastic to do interesting things with the language.

RZ: You don’t use quotation marks around dialogue.

JA: Maybe I have in some stories. But since I began to write seriously, going back to the 80s, I’ve tended to do away with that.

RZ: Why? Because it feels extraneous? Because it’s obvious, as readers, that we understand when a character is speaking?

JA: I think there are a couple of things. Some of it comes from studying writers I like. Joyce was the first person to do away with quotation marks around dialogue. Other writers don’t use them: John Edgar Wideman, a lot of Faulkner, Cormac McCarthy. When you do away with the quotation marks, it forces the reader to pay attention to what’s happening on the page. The writer makes the narration and the action blend in with the dialogue. It all becomes one voice in a way, even though you still have the distinct voices of the characters, their speech. I like that the language can work in such a way that it all blends together.

Read more »

How not to be afraid of death

by Charlie Huenemann

“I’m not afraid of death; I just don’t want to be there when it happens.” —Woody Allen

Set aside any belief in an afterlife, even the vaguely hopeful “I’ll return to the energy of the universe” sort of view. The realization that your run of life is finite is troubling. At first, when we begin to think about the full extent of our lives, we tend to think of that extent as a short stretch of time found within a very broad scope of time: I exist for several decades within – what? – billions and billions of years. It’s a tiny blip, hardly anything at all. And, automatically, we associate the very short episode called “our lives” with more ordinary episodes, like seeing a movie on a Sunday afternoon. In that case, we enjoy the movie, and after that, we drive home. But then a second realization hits: after this life, there will be no driving home. There will not be anything for us – no recalling of favorite moments, no do-overs, not even a moment of nostalgia. Nothing. That life we just had will be all we ever are, forever. The pit of existential despair opens before us, and boy howdy, does it ever stink.

Both Socrates and Seneca defined philosophy as preparing for death, and there’s no denying that if we haven’t come to terms with this fact – I will die – we have not yet found wisdom. Now I’m as foolish and troubled as anyone else, but I have come across a line of thought that, at least when I manage to remember it, makes that pit of existential despair disappear.

The line of thought comes from Wittgenstein’s Tractatus Logico-Philosophicus, though it can also be found in the writings of Epicurus. They diagnose our problem as arising from that first view we adopted, the one that sees life as an episode within a larger frame of time. Sometimes that perspective is perfectly accurate: namely, when we look at other people’s lives, and we note how there were things happening both before they lived and after they died. Their lives are rather like Sunday afternoon movies to us in this regard. But – according to the Epicurus/Wittgenstein line of thought – when it comes to our own lives, that same picture does not apply: for of course there will not be, in our lives, any events before we live or after we die. When I try to adopt a perspective that sees my life as a short expanse of time within a larger expanse of time, I am trying to adopt a nonsensical point of view. I am trying to view my life from a life that is both my life and not my life.

Read more »

A Few Thoughts about Pegasus

by Carl Pierer

Let us suppose Pegasus does not exist. This simple idea has proven to lead to plenty of philosophical trouble. Because what exactly is the thing that does not exist? Quine puts the “Riddle of Non-being” as: “Nonbeing must in some sense be, otherwise what is it that there is not?” The problematic coin has two sides. First, it seems that in supposing to talk about Pegasus at all, we are simultaneously asserting that something that answers to the name of Pegasus is – in some sense. This is the semantic side: If Pegasus is not, in any sense of the word, what would we be talking about?

In the same essay, Quine states the problem of ontology as: “What is there?” and answers immediately: “Everything”. This approach leads to the same problem, albeit from a different angle: if everything exists, how can we deny the existence of any particular thing, e.g. Pegasus? We may call this the logical side of the problem. For example, if A says Pegasus flies, then A is committed to the claim that something that flies exists. However, if A says Pegasus does not exist, how can the obvious contradiction of asserting that something exists that does not exist be avoided?

Quine proposes the following. The apparent contradiction in stating that something does not exist can be resolved thusly: a statement denying the existence of something, say Pegasus, can be analysed in terms of its logical structure. So, to say that Pegasus does not exist means simply ~∃x (x is Pegasus).

This by itself does not solve the problem, since a further existential generalisation recreates the very problem it was meant to solve: ~∃x (x is Pegasus) becomes ∃y ~∃x (x is y) – meaning, again, that there exists a thing such that it does not exist. To avoid this trouble, Quine suggests – following Russell – that the proper name “Pegasus” can be substituted by a description, e.g. “the winged horse that was captured by Bellerophon”. Hence, if Fx stands for “x is a winged horse that was captured by Bellerophon”, the sentence becomes ~∃x Fx, and, since no name remains, there is nothing on which to perform existential generalisation. The logical part of the problem is thus solved.
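To spell the analysis out a little further (a sketch only; Russell's full theory of descriptions also builds in a uniqueness clause, which Quine's quick gloss leaves implicit), an affirmative sentence such as “Pegasus flies” is rendered as ∃x (Fx ∧ x flies): there is something that is a winged horse captured by Bellerophon, and it flies. The corresponding denial of existence is simply ~∃x Fx. Because the name “Pegasus” has been paraphrased away in favour of a quantified description, denying that Pegasus exists no longer looks like asserting, of some thing, that it does not exist.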

Read more »

Translations from Urdu: Three Poems by Majeed Amjad

by Ali Minai

Majeed Amjad (1914 – 1974) is considered one of the most important modern poets in the Urdu language. He was born in Jhang, which is now in Pakistan, and spent most of his life in the small towns of Punjab, away from the great literary centers of Urdu. Perhaps this was one factor in giving his poetry a distinctive style and idiom that is impossible to place within any of the mainstream contemporary movements in Urdu poetry. Amjad's style is characterized by striking images, unexpected connections, and a very personal voice. He had a challenging life, with financial insecurity, domestic problems and literary frustrations. His philosophical and introspective nature drew upon these challenges to create a unique mixture of sweetness and bitterness that makes him one of Urdu's most original poets. Starting out with traditional forms, Amjad experimented extensively with new ones, and much of his later poetry is in free verse.

I have chosen to translate poems by Amjad because, despite the acknowledgment of his stature in literary circles, he is not as well known among general audiences as his great contemporaries, Faiz and Rashid. I chose these three poems based purely on personal preference, though they are also quite representative of his work. In particular, they capture his characteristically mysterious allusions, where he seems to refer to something particular without specifying exactly what it is, leaving the reader to infer multiple scenarios. Personally, I find this to be both aggravating and interesting – and a very modern aspect of his work, occasionally bordering on the surrealistic. The poems also have a lot of psychological nuance, which was another distinguishing feature of Amjad's poetry.

In the original, the first two poems are in metered verse and the third in free verse. While I have tried to follow the general structure of the poems, I have not attempted to translate strictly line by line, preferring to capture the thought rather than the form. In this sense, the translation is not literal, though it is quite close with minimal reinterpretation of metaphors, etc. As with all translations, it is impossible to capture all the nuances of the original. I just hope that the translated versions have sufficient interest in their own right and convey some of Amjad's uniquely mysterious, imagistic and elegiac style.

____________________________________________________________________________________

Poem 1: Superficially, this poem starts out as an elegy on the grave of some unknown poet, with the usual symbolism associated with such poems. But as one reads on, it becomes clear that this is not about any particular poet at all, nor is it an elegy. It is rather a fierce critique of that poetic tradition – long dominant in Urdu – that seeks to create art for art's sake, and has little time for the actual lives of individuals and societies. In this, Amjad is making the same point that many of his Progressive contemporaries – notably Faiz – made about the received poetic tradition in Urdu. But Amjad's allusive and imagistic style contrasts strongly with the explicit protests found in the work of the Progressives. The build-up through this poem culminates in lines that send chills down the spine.

Amjad has been called a poet of brutal realism. In some of his poems, this realism is explicit, but here it is couched in a more symbolic – perhaps more appealing – form.

Voice, Death of Voice (1960)

No ornate ceiling, nor canopy of silk;
no shawl of flowers; no shadow of vine;
just a mound of earth;
just a slope covered with rocky shards;
just a dark space with blind moths;
a dome of death!
No graven headstone, no marking brick –
Here lies buried the eloquent poet
whom the world implored a thousand times
to speak out,
but he, imprisoned by his fancy's walls,
far from Time's path,
oblivious to the lightning upon the reeds,
drowned himself in the breast of a silent flute:
a voice become the death of voice!

Read more »

Crime hurts, justice should heal

by Thomas R. Wells

Judicial punishment is the curious idea that individuals deserve to be punished by the state for breaking its laws. Intellectually this is rather counter-intuitive. If crime is so terrible because it is a social trauma then deliberately hurting more people would seem to amplify that trauma rather than treat it. There are intellectual arguments for retributive punishment of course, many of them rather ingenious, but they have the look of post hoc rationalisations for a brute social fact: we just like the idea of hurting bad people – even if these days their suffering is the mental torture of prison rather than the rack.

The modern criminal justice system – bloated and terroristic – is the product of government expansionism combined with this societal lust for vengeance.

II

In theory there are great advantages to having the state administer criminal justice – i.e. as a prosecutor and punisher rather than merely as a judge – such as ensuring some baseline of fair treatment for less powerful victims and defendants. However, these are not guaranteed. For example, it is a well-studied fact that young African-American men, a minority stereotyped as especially liable to criminality, are more likely to be stopped by government agents, arrested, charged with a higher crime, denied bail, found guilty, and sentenced to a harsher punishment.

This is not the only way that the state's takeover of criminal justice goes awry. By converting crime from a relationship between victim and perpetrator to a relationship between a criminal and the state it has justified a vast expansion of what is criminalised and of the severity of punishment. Crimes such as rape are conceived not primarily as harms to specific people that need to be redressed, but as transgressions of laws that represent the will of society. All crimes are now offenses against the dignity of Society, as represented by the government. The democratic requirement that justice must be seen to be done means that the moral indignation of society as a whole drives the government's punishment decisions, not the interests or wishes of actual victims of crimes.

Locking millions of people into squalid little boxes for years on end doesn't make much sense if you take away its real motivation: the naked desire to make society's enemies suffer. Besides being a very inefficient – socially expensive – means of hurting people (something I've discussed elsewhere), the mental suffering of prison does little to advance the supposed moral goals of criminal justice.

Read more »

Art in a Disenchanted World

by Mathangi Krishnamurthy

In the middle of a semester of endless world travel, and a series of screechy deadlines, I gifted myself a three-day weekend to go meander at the Kochi-Muziris Biennale of 2014/15. Our survival as, dare I say, members of a sensate world, depends on the idea of a full life, and into every full life some art must fall, I told myself as I made plans to visit. Gathering up a friend, and all my dwindling stamina, I boarded a plane and then a cab to reach the wonderfully lovely town of Fort Kochi, across the breadth of which were strewn the venues for this year's installations enunciating “Whorled Explorations”. Ninety-four artists from thirty countries held court for a hundred and eight days across thirty venues.

Even as I disembarked prepared to be impressed, the superbly humid Kochi weather seeped slowly into my skull, rendering inchoate my cultural ambitions. Kochi is by the sea, the month was February, and we were catching summer in all its ambitious force. Our charming inn-keeper had been pretty certain over the phone when confirming our booking that we would not need an air-conditioned room. It's a good thing he left the choice open. The air-conditioning was all that lay between us and a lifetime vow to never pursue art. Spoilt, I know.

Read more »

What is Innateness?

by Michael Lopresto

When it comes to explaining human cognition and human uniqueness, everyone seems to think that nature and nurture constitute a false dichotomy. Both nature and nurture work together harmoniously to contribute to the cognitive traits that make humans profoundly different to every other animal on the Earth. Unlike every other animal on Earth, humans are uniquely flexible; we have inhabited every kind of environment, engaged in intergenerational social learning, cooperated with those outside of our immediate group, accurately described things we'll never directly observe, and much more. Humans are cognitively flexible, behaviourally flexible, communicatively flexible and representationally flexible. Representational systems employed by humans are open-ended and unprecedented in the animal kingdom: natural languages like English and Chinese, artificial languages like predicate logic, formal languages like those in mathematics, pictures, diagrams and weather maps are but a few of the representational systems employed by humans (not to mention mental representations, which are likely to be analogues of the aforementioned systems).

One of the central questions of cognitive science is explaining how humans acquire cognitive traits, including ones that contribute to human uniqueness. Is the trait for language innate or learned? Is the trait for mental time travel (the ability to experience one's past or future) innate or learned? Is the trait for moral reasoning innate or learned? And so forth.

Nativists are those who say that lots of cognitive traits are innate, and empiricists are those who say that very few cognitive traits are innate. The nativism/empiricism distinction is not to be confused with the rationalism/empiricism debate of early modern philosophy. That debate was primarily over epistemology, while the contemporary debate is primarily over psychology. However, questions of epistemology and psychology were systematically conflated, as Kant and others pointed out, and we ought to be careful not to conflate the same questions now. Even so, there are fairly clear links between the two questions. The rationalists of early modern philosophy, like Descartes and Leibniz, argued that a great many cognitive traits were innate, and the empiricists of that era, like Locke and Hume, argued that very few cognitive traits were innate. (Although those philosophers spoke in terms of “innate knowledge” and “innate ideas”—phrases that certainly need careful interpretation).

However, the question “Is cognitive trait X innate or learned?” presupposes that the concepts INNATE and LEARNED are somewhat well defined.[*] (I take it for granted that the concept COGNITIVE TRAIT is uncontroversial, i.e. phenotypic traits relating to things like thinking, inference, perception, intelligence, and so forth.) Our question certainly doesn't presuppose that for any cognitive trait it's all or nothing; totally innate or totally learned, or even totally acquired through environmental interaction.

Read more »