Tuesday, April 15, 2014
There is thus a lively debate in Russia itself on the country’s orientation. The question is, where does the leadership stand in this debate? The answer is difficult, because not only has Russia become more autocratic under Putin, but the circle of real decision-makers has become ever smaller. According to some accounts, it may consist of no more than five people. But, reviewing the period since 2000, when Putin assumed power, it is plausible that his rule began with a continued commitment to democracy and a market economy, coupled with a growing resentment at the West’s lack of consideration for certain deep Russian concerns – NATO enlargement, treatment as a poor supplicant, disregard for what are seen as legitimate interests in the neighbourhood, and so on. Angela Stent cites a senior German official complaining of an “empathy deficit disorder” in Washington in dealing with Russia. The pathology that this caused became progressively more virulent in the intervening years, culminating in 2003 in the invasion of Iraq without any Security Council mandate – indeed, in open defiance of the UN. After this, The New York Times Magazine’s Ron Suskind reported on a visit to the Bush White House in 2004, in the course of which he recounts that “an aide” (commonly supposed to be Karl Rove) “said that guys like me were ‘in what we call the reality-based community’, which he defined as people who ‘believe that solutions emerge from your judicious study of discernible reality’… ‘That’s not the way the world really works any more’, he continued. ‘We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality – judiciously, as you will – we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors … and you, all of you, will be left to just study what we do.’”
For years I’ve been hearing it said that young artists think art began with Andy Warhol. It’s never been true. But now what I hear is art historians complaining that none of their students want to study anything but contemporary art. Among young art historians, it seems, to delve as far back as the 1960s is to be considered an antiquarian. “They only take my courses because they think they need some ‘background,’” one Renaissance specialist told me. “We have to accept almost anyone who applies saying that they want to study anything before the present, just to give our current faculty something to do.” What a time, when the art historians have less historical consciousness than the artists—and no wonder that the former, these days, show so little interest in what the latter actually do.
When I was a grad student (in a different field), the budding art historians I knew were studying medieval, they were studying mannerism, they were studying the Maya. No one thought of studying living artists. The most adventurous ones might be investigating Italian Futurism. Now the Futurists seem as distant as the Maya. But might this be their own fault?
We all distinguish between plants and animals. We understand that plants, in general, are immobile, rooted in the ground; they spread their green leaves to the heavens and feed on sunlight and soil. We understand that animals, in contrast, are mobile, moving from place to place, foraging or hunting for food; they have easily recognized behaviors of various sorts. Plants and animals have evolved along two profoundly different paths (fungi have yet another), and they are wholly different in their forms and modes of life.
And yet, Darwin insisted, they were closer than one might think. He wrote a series of botanical books, culminating in The Power of Movement in Plants (1880), just before his book on earthworms. He thought the powers of movement, and especially of detecting and catching prey, in the insectivorous plants so remarkable that, in a letter to the botanist Asa Gray, he referred to Drosera, the sundew, only half-jokingly as not only a wonderful plant but “a most sagacious animal.”
Darwin was reinforced in this notion by the demonstration that insect-eating plants made use of electrical currents to move, just as animals did—that there was “plant electricity” as well as “animal electricity.”
Tim Martin in The Guardian:
The years in which the young Samuel Beckett prepared and published his first collection of short stories were, as he later remarked, “bad in every way, financially, psychologically”. In late 1930 he had returned to Dublin from teaching at the École Normale Supérieure in Paris, reluctantly swapping the shabby dazzle of James Joyce’s circle and the fun of drunken nights on the town for a post lecturing at Trinity College that he soon came to hate. Painfully awkward and shy, Beckett was tortured by public speaking, and he dreaded what he called the “grotesque comedy of lecturing” that involved “teaching to others what he did not know himself”. To the horror of his parents, he resigned, bouncing disconsolately between Germany, Paris and London on a family stipend as he tried to get his first novel off the ground. Money became shorter and shorter. In the autumn of 1932, he was forced to “crawl home” to his parents in Dublin when the last £5 note his father sent him was stolen from his digs. He was 26.
At home, however, his problems were far from over. It soon became clear that Dream of Fair to Middling Women, the madcap, erudite, Joycean book he had written at speed in Paris earlier that year, was not going to be the success he imagined. During a miserable spell in London, feeling “depressed, the way a slug-ridden cabbage might expect to be”, he shopped the manuscript around to several publishers: Chatto & Windus, the Hogarth Press, Jonathan Cape and Grayson & Grayson. The letter he wrote later to a friend summarised the results of the trip. “Shatton and Windup thought it was wonderful but they simply could not. The Hogarth Private Lunatic Asylum rejected it the way Punch would. Cape was écoeuré [disgusted] in pipe and cardigan and his Aberdeen terrier agreed with him. Grayson has lost it or cleaned himself with it.” Back in Dublin, wearily recognising that Dream might be unpublishable (it appeared posthumously in 1992), Beckett devoted his remaining energy to compiling a volume of short stories. Like his novel, these covered episodes in the life of Belacqua Shuah, a Dublin student who shared the author’s obsession with Dante and Augustine as well as his hang-ups about sex.
Virginia Hughes in Nature:
Trauma is insidious. It not only increases a person’s risk for psychiatric disorders, but can also spill over into the next generation. People who were traumatized during the Khmer Rouge genocide in Cambodia tended to have children with depression and anxiety, for example, and children of Australian veterans of the Vietnam War have higher rates of suicide than the general population.
Trauma’s impact comes partly from social factors, such as its influence on how parents interact with their children. But stress also leaves ‘epigenetic marks’ — chemical changes that affect how DNA is expressed without altering its sequence. A study published this week in Nature Neuroscience finds that stress in early life alters the production of small RNAs, called microRNAs, in the sperm of mice (K. Gapp et al. Nature Neurosci. http://dx.doi.org/10.1038/nn.3695; 2014). The mice show depressive behaviours that persist in their progeny, which also show glitches in metabolism. The study is notable for showing that sperm responds to the environment, says Stephen Krawetz, a geneticist at Wayne State University School of Medicine in Detroit, Michigan, who studies microRNAs in human sperm. (He was not involved in the latest study.) “Dad is having a much larger role in the whole process, rather than just delivering his genome and being done with it,” he says. He adds that this is one of a growing number of studies to show that subtle changes in sperm microRNAs “set the stage for a huge plethora of other effects”.
I remember your square jaw
Strong and viselike
Of my hand father
That wouldn’t let go
I remember you at the bottom of the stairs
We had to go son
I remember the hat
The small brim
With its feather
You always wore
As if leaving without it
Was like being naked in the sun
I remember you standing
Behind the old glass counter
With its huge crack
weight upon your right foot
I remember that subtle smile
Showing only a portion
Of the false teeth
I remember you father asking me
With your worried look father
Why I liked that girl
With the dark skin
I never knew father
What you father
by Bill Schneberger
Monday, April 14, 2014
by Emrys Westacott
In 1930 the economist John Maynard Keynes predicted that increases in productivity due to technological progress would lead, within a century, to most people enjoying much more leisure. He believed that by 2030 the average working week would be around fifteen hours. Eighty-four years later, it doesn't look like this prediction will come true. Most full-time workers work two, three, or four times that; and many part-time workers would work more hours if they could, since they need the money.
So why haven't we come closer to realizing Keynes' expectations? In their recent book, How Much Is Enough? Money and the Good Life (Other Press, 2012), Robert and Edward Skidelsky offer an interesting answer. According to them, Keynes' mistake was his failure to realize that capitalism has unleashed forces that can't be brought under control. Specifically, it has greatly inflamed a natural human desire for recognition and status, turning it into an insatiable desire for ever more wealth—wealth being the number one determinant of status in our society. If we could just settle for a modest level of comfort, we could work far less. But the yearning for more wealth and more stuff now leads people to spend far more time working than they need to. The same insatiability characterizes our society as a whole. Every politician and most economists take for granted that we should be striving with all our might to achieve economic growth without limit. The wisdom of this relentless, endless pursuit of economic growth is rarely questioned.
The Skidelskys' explanation of why we still work much more than Keynes predicted isn't entirely wrong, but I don't think it's the whole story or even the most important part. It's no doubt true that some people are driven to work more than they need to by insatiable greed. But I suspect that far more people work the hours they do because of circumstances beyond their control. For instance, many people work long hours simply because their hourly wage is quite low, so they work overtime, or perhaps take a second job, just in order to have enough to live on. Some live in expensive metropolitan areas like Boston or San Francisco, so even though they make a good wage, they actually need a full-time job even to secure a fairly modest level of comfort, given the cost of housing. Many people keep working full time, even though they'd like to retire or go part time, because only a full-time job will provide indispensable benefits like health insurance and a pension. And lots of people would like to cut back the hours they work but can't for a simple reason: their boss won't let them.
But there's also another factor preventing us from achieving a more leisured and balanced lifestyle, and that is the intensely competitive social environment in which we live.
by Brooks Riley
As I hover over my life in cyberspace, I look down at the various trails emanating from me that find their way across the globe to multiple destinations, known and unknown, whether or not they were ever intended to travel that far. Interconnectivity has increased exponentially since 2009, when I bought the notebook whose recent demise forced me to confront a sea change. Up to now, I'd left a line of breadcrumbs – for Windows, for McAfee, for Google, for the NSA, for my e-mail contacts, for who knows who else. Now those breadcrumbs have become loaves and, like those in the parable, they have multiplied.
I loved my old notebook: Except for the odd update or security scan, it was just it and I, two symbiotic pals going about our business. Now I find myself constantly confronted with geek issues such as OS updates, software compatibility, multiple preference settings and cloud management. Is Microsoft my new best friend because it greets me (Hello from Seattle) and promises to guide me? Is Apple my new best friend because it promises chic design? Is Google my new best friend because it finds things, shows me where I live and offers to hardwire my nest? Is Amazon my new best friend because it delivers? None of the above. They fall into the category of useful acquaintances to whom I turn when I need them. My new best friend turns out to be my old best friend, Wikipedia, without which the world would be a poorer place for one who wants to know everything.
What does it mean to leave behind such spoors (to borrow language of the hunted), when most of the billions before us left only genetic traces in the form of offspring and descendants? An electronic version of each one of us will haunt the internet after we're gone, as immutable and indestructible as the risus rigidus of a Guy Fawkes mask on the trash heap after the party's over.
Facebook is beginning to deal with death, but only with issues of access, not with the fate of the pages themselves. Nearly 3 million Facebook users worldwide were predicted to die in 2012 alone, their pages achieving an immortality denied to their progenitors. Will famous last words be replaced by famous last entries? Will Stephen King write a ghoulish story about a Facebook user who updates his page from heaven? Will some start-up create a ‘dropped box' in cyberspace for the dearly departed? And what about all those other clouds? Your stuff is safe and backed up. You are not.
by Charlie Huenemann
In 1746, Hume returned to London after touring Europe as tutor and caretaker of the mad Marquess of Annandale. He was not sure what was next in his life. He was already 35 and somewhat ashamed of not having yet made a career for himself. He resolved to return to Scotland, but at the last minute he received an unexpected invitation to serve in a military expedition to Canada. The invitation came from Lieut.-General James St Clair, a distant relative of Hume's whom he had recently met. The opportunity hit Hume at just the right time, and he wondered if this was the beginning of a career in the military.
The plan for the expedition was to approach Quebec by way of the St. Lawrence River in August. Hume set his affairs in order and reported for duty. But what followed was not the exciting onset of an adventure at sea, sails rippling in the wind, but three months of fits and starts. When the wind was not favorable, they were stuck in one harbor or another; when the wind was favorable, the orders from the Navy changed and kept them from going anywhere.
By the end of August, the orders changed dramatically. Forget Canada; the new plan was to invade the French coast and cause a distraction from the campaign taking place then around Flanders. But winds were unfavorable once again, giving St Clair the opportunity to remind the Navy that for this new assignment he had no maps, no military intelligence, no horses, and no money.
The Navy sent along a major and some ship pilots to help plan for an invasion - though, as it turned out, none of them could provide any helpful information. Thus, as Hume put it, the company "lay under positive orders to sail with the first fair wind, to approach the unknown coast, march through the unknown country, and attack the unknown cities of the most potent nation of the universe".
On September 15th, they undertook to do just that, setting out for Lorient in Brittany with about 50 ships and 4,500 men, guided by a map bought in a shop in Plymouth. They arrived at the French coast on the evening of September 18th. But instead of invading right away, the commanding admiral waited to land until the following morning, and in the morning they encountered winds that prevented their landing for two more days. This of course gave the French plenty of time to see them, sound alarms, and prepare a defense of some 3,000 militia, plus cavalry. The wind finally relented and the invading British troops landed, diverting at the last moment to an unoccupied section of the coast. They chased some French soldiers into the hills and issued a general declaration to villagers in the area that they would not be harmed if they did not resist. Hume was apparently so excited that he simply co-signed this declaration "David," forgetting to supply his last name.
What followed then was the sort of comedy of errors one could easily see coming. The British troops began to poke around the unfamiliar territory, engaged in some minor skirmishes, sacked a village, and entered into a firefight in which they ended up shooting at each other. Rain kept pouring, morale was low, and many soldiers just wandered off into the French countryside.
“Did you like your father?” my friend asked.
The Tongues of His Black Boots Say
as my father sleeps the world goes on
his black boots are by the door
he left them there unlaced
the right run down at the heel
the left toe scuffed
his blue shirt hangs on a hook
wrinkled below the belt line
where every morning
its tails were tucked
there’s no forgiveness in pasts
just now and here, defeat
is the hardest epiphany
the tongues of his
black boots say
by Jim Culleny
by Gautam Pemmaraju
Suave locus voci resonat conclusus
(How sweetly the enclosed space responds to the voice)
—Horace, Satires I, iv, 76 (in Doyle, P, Echo and Reverb:
Fabricating Space in Popular Music Recording, 1900 – 1960; 2005)
The whispering gallery that runs along the inner periphery of the dome of Gol Gumbaz, the mausoleum of the medieval Bijapur sultan Muhammad Adil Shah (1626 – 56 CE), is an acoustic marvel. A single clap can produce as many as ten distinct echoes in the dome, and a reasonably soft whisper can be heard across a distance of a hundred and thirty feet. The tourists visiting the place are mostly prone to whoop, shout, and clap with great enthusiasm, overwhelming the dome with dense sonic information. At quiet times, though, one can savour its rich, amplified reverberance—the timbre, colour and tone of the spoken word assumes an elevated quality, as if imbued with the sheen of something beyond earthly artifice.
Such sonic modulations appear to us to be of a higher order, sanctified by primordial forces. And in our own mimetic appropriations, of sermons and speeches, chants and songs, drones and dirges, we seek to texturize our words with an otherworldly aura. The use of delay effects in sound recording allows us then to ritualistically edify our anxieties and inadequacies and transpose them into reverberant solemnity.
The prosaic use of delay effects in recorded sound—echo and reverberation—has its place in modern times, but the phenomenon has for long resided in the realm of mystical experience. The Greco-Roman mythical character Echo, a nymph condemned to repeat all that she hears, is a tragic figure by all accounts. Rebuffed by Narcissus, the heartbroken Oread hides herself in woods, caves and mountain cliffs. She withers away there in loneliness, her flesh wasting away and bones turning into stone till all that is left is her voice. In this reduced, etheric spectral state, all she can do is to reply to anyone who calls out to her.
by Mathangi Krishnamurthy
All my life, I've been called a Madrasi. This is false, funny, and ironic. For those who live north of the Vindhyas in India, all four of the southern states connote a ubiquitous "Madras", or in other words the land where people speak Madrasi (otherwise known as four distinct languages: Kannada, Telugu, Malayalam, and Tamil). But Madras, or to call it by its current, official, and always locally more kosher name, Chennai, was never home to me. I visited Madras, and I lived in Bombay. Madras was heat, provinciality, incoherence, and conservatism. For the longest time, it occupied the second position on a list of cities that I vowed never to inhabit. Number One is still held by New Delhi, and I hope it doesn't indulge in similarly stymieing my life plans. Hush, I tell myself, lest the Gods have sharp ears. Evidence indicates otherwise, but you never know.
Madras, I am told by the many books I peruse in the hopes of gaining intellectual familiarity, is where modern India began. This old colonial outpost, which had the likes of Robert Clive, Elihu Yale, and Arthur Wellesley pass through, dates back to the 1640 settlement of Madraspatnam. For those seeking a primer, I highly recommend Bishwanath Ghosh's Tamarind City and of course, S. Muthiah's Madras Discovered.
Seeking this selfsame city of sepia fame, I wander off one bright Madras morning, dragging a friend and reluctant early riser to Fort St. George, one of the arteries of the colonial enterprise. Disembarking from the train at Beach station sharp at seven am, bright and caffeinated, we walk past a still-sleeping old town through NSC Bose Road and the various Chetty streets, named after differently famed members of the Chettiar community. Each street differentiates itself by the goods it sells: electrical appliances in one, upholstery in another, plumbing equipment in yet another.
The art-deco buildings are magnificent, and often magnificently ratty. The politics of heritage preservation are apparently a nationwide phenomenon. I receive atmospheric consolation from this history that seems like so many other histories of so many other old towns. I do what any self-respecting debutante to urban studies might do: take many pictures. Fort St. George, the Armenian church with many buried Armenians and nary a community, Armenian Street, abandoned pushcarts, modernist architecture – all fodder for my newly obsessive need to know this city.
Ciprian Muresan. I'm Too Sad to Tell You. 2009
Perceptual experience is a distinctly privileged way of knowing about the world. Not only is perceptual experience ultimately the bridge between mind and world, but it also trumps other ways of knowing. When one of the other ways of knowing about the world – inference, introspection, memory, or testimony – disagrees with experience, experience will be kept and the other will be dropped, unless we have strong reason to believe that what we're experiencing is an illusion or a hallucination. So, if you were to tell me that my sister is in Melbourne, and later that day I saw her walking across the street from where I am here in Adelaide, I would immediately drop the belief, based on testimony, that she is in Melbourne. However, an alternative situation may be this: I know that my sister has a doppelgänger who lives here in Adelaide. So in this situation, if I happen to believe that you're a reliable source of information, I'd probably believe instead that I'm actually seeing my sister's doppelgänger.
The moral is this: experience, privileged as it is, is still judged against what we already happen to know – as are the other sources of knowledge. When a proposition arrived at by inference or testimony disagrees with something I already know, I'm going to subject that proposition to much closer scrutiny than I otherwise would. Furthermore, experience, privileged as it is, always involves interpretation. The first case, where I see my sister, and the second, where I see my sister's doppelgänger, provide me with identical data. Each of the perceptual experiences is indistinguishable from my first-person perspective. The fact that interpretation is involved in perceptual experience is what explains how it's possible to come to different conclusions from identical perceptual experiences.
So, how are we to understand "interpretation" in the context of perceptual experience? It's certainly not anything like conscious deliberation; otherwise its presence would be salient to us (and it is not), and experience would be much more plastic than it actually is, in that it would be affected by interpretation in a much more thoroughgoing way. So our background knowledge influences our perceptual experiences in a way that is automatic and unconscious. Should this consideration lead to scepticism about the reliability of our perceptual faculties to give us objective knowledge of the external world? I think the answer is clearly no.
by Mara Naselli
Minds cannot help but make meaning, even with only a suggestion of direction. When I taught manuscript editing, to put the mechanics of the work in perspective, I would write out a line of taspyograpgucal noasihfsnesnse theat qwe kcgan reasdsdo to illustrate the point. Your eye, reading the jumble above, found the letters to make the words. We make corrections and connections without thinking about them. We bend the contours of a line. We want order, not confusion, and will bring it into shape if we can.
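The effect that garbled line relies on can be replicated in a short script (a toy sketch, not part of the original essay; the function names and the sample sentence are mine). Shuffling only the interior letters of each word usually leaves a line readable, because readers rebuild words from their outer letters and from context:

```python
import random

def scramble_word(word, rng):
    # Keep the first and last letters fixed and shuffle the interior.
    # Words of three letters or fewer have no interior to shuffle.
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_line(line, seed=7):
    # A seeded generator makes the scramble reproducible.
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in line.split())

print(scramble_line("minds cannot help but make meaning"))
```

Each output word keeps exactly the letters of its original, which is why the eye can "correct" it without noticing the work involved.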
This hunger for order applies to memory as much as it applies to reading. We know memory is plastic—it can even be invented. What interests me are the choices that occur someplace between consciousness and unconsciousness—our grasping the letters that make sense and eliding the others, so that the coherence of our interpretations and blindnesses is preserved. But what would a more careful reading look like? How do we allow a memory or fact to break into our consciousness and disrupt our domestic intellectual and emotional order?
On April 18, 1939, Virginia Woolf’s sister Vanessa urged her to write her memoirs, before her memory might fail her. Woolf was ready for a diversion—she had been working on a biography of the painter Roger Fry, puzzling out the difficulties of writing about another human being outside of the events of his or her life. Who was I then? she asks, turning the question onto herself. For the next year and a half, she wrote her recollections, conjuring the dead and their vanished Victorian world. “A Sketch of the Past” was edited by Jeanne Schulkind and published posthumously in 1976.
Which is to say these writings are, for all intents and purposes, works in progress, and to read them is a bit like editing them, interpreting and weighting the content, discerning a shape that might give contours to the genius they contain. To read Woolf’s draft of a memoir is to sit with her at her writing desk, after she has gone for a walk, read Chaucer, made notes on Roger Fry, written instructions to the housekeeper, or heard the drone of German planes overhead. She settles in, and we watch her wade into the past. Woolf’s writing is not simply recollection; rather, her encounter makes the convergence of past and present an altogether new thing—waters not yet crossed.
by Thomas Rodham Wells
Parenthood is coming under increasing criticism as a selfish lifestyle choice. Parents' private choices to procreate impose expensive obligations on the rest of us to ensure that those children have a decent quality of life and grow up to be successful adults and citizens, and that means massive tax subsidies for their health, education, and so forth. We also pay to support parents' self-conception of parenthood, such as by providing lengthy paid parental leave to allow them to ‘bond' with their children.
In addition there are environmental costs relating to the consumption of the children themselves. The choice to become a parent massively increases one's environmental footprint because it adds consumers who otherwise wouldn't have existed, and who may then go on to have children of their own. The environmental impact of a population is a function of population size multiplied by consumption per capita. Therefore, adding consumers must either lead to a greater environmental impact, or else to a politically directed reduction in per capita consumption to avoid that impact. With regard to carbon emissions, for example, it has been estimated that an American woman who has a child increases the carbon emissions she would have been responsible for by a multiple of 5.7 (source). If lots of people have children, the planet will be in even greater danger of cooking, unless all of us make very severe cuts to our consumption practices to keep humanity within the bounds of sustainability.
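The arithmetic in that paragraph can be made explicit in a small sketch (a toy illustration: only the relation impact = population × per-capita consumption and the 5.7 multiplier come from the text above; all the other numbers are made up for the example):

```python
def total_impact(population, consumption_per_capita):
    # The relation stated in the text: a population's environmental
    # impact is its size multiplied by consumption per head.
    return population * consumption_per_capita

def carbon_legacy(lifetime_emissions, child_multiplier=5.7):
    # The cited estimate: having a child multiplies the emissions an
    # American woman is responsible for by a factor of 5.7.
    return lifetime_emissions * child_multiplier

# Hypothetical round numbers, purely for illustration:
baseline = 1000.0  # assumed lifetime emissions, in tonnes of CO2
print(carbon_legacy(baseline))  # roughly 5,700 tonnes

# Holding total impact constant while the population grows by 10%
# forces a matching cut in per-capita consumption:
impact = total_impact(100, 20.0)
required_per_capita = impact / 110
print(required_per_capita)  # about 18.2, i.e. a ~9% cut per head
```

The second calculation is the essay's dilemma in miniature: with a fixed environmental budget, every added consumer must be paid for by everyone else consuming less.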
One might argue that children don't only impose costs on the rest of us. For example, because they may be expected to become productive workers as well as consumers, they will repay their ‘debt' to us by supporting the economic sustainability of our pension system (and thus allow us to continue to afford our habits of affluent consumption). But even if that were true to some extent, it does not affect the core criticism of the selfishness of parenthood, which is that parents do not stop to consider how their procreative decisions may affect others, including other would-be parents. Since parents aren't motivated to have children by a commitment to supporting the social welfare system or otherwise contributing to society, they can claim no credit if that is how things happen to work out.
by Brooks Riley
by Carl Pierer
In the book "Existentialism – A Reconstruction" David E. Cooper devotes an entire chapter to inquiring the relation between philosophy and alienation. Cooper's interest is to make the point that "the issues of alienation are pivotal in existentialist thought" (, p.31). To do so, he includes a brief sketch of Hegel's and Marx' ideas concerning alienation. In line with these two thinkers, Cooper gives a rough outline for an argument that alienation is at the heart of the philosophical adventure. Since he is right in claiming that this take on philosophy amounts to a drastic shift in perspective for the (analytic) philosophy student, the idea deserves to be argued for.
The term "alienation" is strongly associated with the allegedly impenetrable, obscurantist writings of
Upon encountering the world, there is – at the very least – a perceived dichotomy. On the one hand, there is something that belongs to me; on the other, there is something external to me. There is an I and a not-I. To realise this is to experience alienation. To take up Bertrand Russell's example in his Problems of Philosophy: "(…) let us concentrate our attention on the table. To the eye it is oblong, brown and shiny, to the touch it is smooth and cool and hard; when I tap it, it gives a wooden sound." (p. 11) For the sake of argument, let us grant Russell the concept of a table, of sense data, etc. Even if we strip ourselves of scepticism about these, there is an object presupposed – a something we can concentrate our attention on. This subject-object distinction is inherent in the grammatical structure of transitive verbs: to see means to see something. Without leaving the surface level, it appears that – intuitively, in our way of living – we distinguish between two entities: the I and the not-I. Alienation, then, denotes the partition of our existence into these two subsets.
by Hari Balasubramanian
In January this year, I visited Hospitalito Atitlán, a health care center for the Tz'utujil Maya in the town of Santiago Atitlán, Guatemala. There were two reasons for this visit. First, much of my healthcare work has been limited to the US system; I wanted to get a sense of what was going on in other places. Second, for many years I've been trying to learn about the indigenous cultures of the Americas; this had led me in the past to Mexico, Peru and Bolivia. I welcomed, now, this chance to spend a few days in a Mayan town in Guatemala.
Santiago Atitlán, a town of 30-40,000, is a 3-hour drive from Guatemala City, at the southwestern edge of Lake Atitlán. The lake fills a caldera formed in an eruption 84,000 years ago, and is surrounded by lush green volcanoes rising to over 8,000 feet. The majority of the people who live in the surrounding towns belong to one of two Mayan groups: the Tz'utujil and the Kaqchikel. Santiago Atitlán is almost entirely Tz'utujil, while San Lucas Toliman is mostly Kaqchikel. Tz'utujil and Kaqchikel also refer to two of the twenty-odd Mayan languages in Guatemala (there are a few others in Mexico). All of them are still spoken, in sharp contrast to the fate of indigenous languages elsewhere in the Americas.
I arrived in Santiago Atitlán on a Sunday morning. The town is set along a slope that eases into the lake; Volcan San Pedro rises dramatically across a narrow section of the water, dominating the view. For a small town, the streets were a maze, and I lost my way each time. Many of the homes were makeshift; the farther I ascended away from the town center, the poorer the homes were. Almost all the Tz'utujil women wore brightly colored, yarn-based textiles with intricate patterns. On the main road along the lake's circumference, Toyota pick-up trucks – a common mode of shared local transportation – carried passengers who stood in the open rear. Then there were the brightly colored tuk-tuks, exactly like the three-wheeler autos I knew in India – every one of them that I saw in Atitlán was made by Bajaj.
The pick-up trucks, the tuk-tuks, and even many of the paved roads were all new, I was told – part of the economic growth here after decades of conflict. In the last half of the 20th century, Guatemala, like other nations of Central America – El Salvador, Nicaragua, Honduras – went through a violent upheaval. The Guatemalan Civil War lasted from 1960 to 1996. A brutal right-wing government fought against insurgents in the largely indigenous countryside. The Lake Atitlán region did not go unscathed; hundreds of people from Santiago were killed or disappeared; "everyone you talk to lost someone in his or her family" [link]. Since 1996, there's been a return to normalcy. While the region remains relatively poor, its coffee plantations have done well, and the beautiful lake setting draws plenty of tourists.
by Sue Hubbard
It takes a certain chutzpah for an artist to dig out his early student work and put it on display for the world to assess, especially in a rarefied Mayfair gallery hidden away in a gracious Georgian house just yards from Claridge's Hotel. In the case of Peter Doig, such confidence may well be underwritten by the fact that his White Canoe - a dreamy painting of a boat reflected in a lake like some post-modern version of Charon's craft - fetched the staggering sum of £5.7m in 2007 when put up for auction by Charles Saatchi.
Doig is something of an outsider. Born in Edinburgh in 1959, the son of a peripatetic shipping accountant, he lived in Trinidad from the age of two to seven, then moved to Canada until he was nineteen, where he took up such northern rituals as skiing and ice hockey. After leaving for London to study painting at St. Martin's, followed by an MA at the Chelsea College of Art, he supported himself as a dresser at the English National Opera and became absorbed in the emerging club scene frequented by the likes of performance artist Leigh Bowery and experimental filmmakers such as Isaac Julien. Chelsea College was a very different proposition, then, to Goldsmiths, the conceptual kindergarten that spawned Damien Hirst, Sarah Lucas and Angus Fairhurst under the éminence grise Michael Craig-Martin. It was full of painters still interested in the possibilities of what paint could do, despite the popular mantra that painting was a dead form. Doig was never allied to the conceptualist YBAs, or included in Saatchi's watershed show Sensation at the Royal Academy in 1997. And, unlike many of the YBAs, he continues to work alone, without a studio full of assistants. It doesn't appeal to him to be surrounded by people he has to keep busy; to become a production line. He likes the "simplicity" of paint; "the directness, the dabbling quality"; and still believes in the possibilities of being able to surprise and innovate in this most ancient of media. People are always asking him when he's going to make a film. But he's not interested. His outsider status has meant that, like many émigrés, he responds best to places he knows when he is not actually there. Canada was painted whilst in London, the Caribbean from the vantage point of his Tribeca studio.
Sunday, April 13, 2014
Steven Malanga in City Journal:
Anthropologist Napoleon Chagnon’s heart was pounding in late November 1964 when he entered a remote Venezuelan village. He planned to spend more than a year studying the indigenous Yanomamo people, one of the last large groups in the world untouched by civilization. Based on his university training, the 26-year-old Chagnon expected to be greeted by 125 or so peaceful villagers, patiently waiting to be interviewed about their culture. Instead, he stumbled onto a scene where a dozen “burly, naked, sweaty, hideous men” confronted him and his guide with arrows drawn.
Chagnon later learned that the men were edgy because raiders from a neighboring settlement had abducted seven of their women the day before. The next morning, the villagers counterattacked and recovered five of the women in a brutal club fight. As Chagnon recounts in Noble Savages: My Life Among Two Dangerous Tribes—The Yanomamo and the Anthropologists (originally published in 2013 and now appearing in paperback), he spent weeks puzzling over what he had seen. His anthropology education had taught him that kinsmen—the raiders were related to those they’d attacked—were generally nice to one another. Further, he had learned in classrooms that primitive peoples rarely fought one another, because they lived a subsistence lifestyle in which there was no surplus wealth to squabble about. What other reason could humans have for being at one another’s throats?
Over at Existential Comics:
David Meir Grossman in Tablet (Steve Reich, 2005; photo treatment Tablet Magazine, original photo Jeffrey Herman):
1. “You’re floating 10 feet off the Earth. Try to put your feet on the ground and ask the next question.” It’s Wednesday, March 26, and Steve Reich is haranguing me for my sucky interviewing skills. We’re talking over the phone because it’s two days before the Big Ears Festival, in Knoxville, Tenn., which Reich is headlining. That he’s less than pleased with my interviewing ability is in fact only making me more nervous, because Reich is a legitimate genius who has changed the shape of his chosen field. The New York Times called him “our greatest living composer,” and The New Yorker has said he’s “the most original musical thinker of our time.” So, if he says I’m blowing this, he’s probably right.
Reich is impatient, a quality that surely comes from having a mind that works 10 times faster than everyone else’s, most definitely including mine. At one point in our conversation I try to suggest that “WTC 9/11,” his disturbing 15-minute meditation on Sept. 11 that came out in 2011, reminds me of the Internet. The piece, written for the Kronos Quartet, uses one of Reich’s several trademark techniques, that of vocal sampling. Unlike other Sept. 11-related pieces, “WTC” does not offer redemption. Reich bumps the pre-recorded voices—friends, air-traffic controllers, first responders, cantors—shoulder-to-shoulder and cuts off the words mid-sentence, only to complete them later. It’s a tension-building technique, and it can call to mind the way conversations take place over the Internet. Reich sees where I’m going with this and pointedly cuts me off. “I don’t follow chats, I don’t find it very interesting to do that. What I was doing on ‘WTC’ had nothing to do with the Internet whatsoever, OK?”
Corey Robin in Crooked Timber (image from Wikimedia Commons):
The first night of Passover is on Monday, and I’ve been thinking about and preparing for the Seder. I had a mini-victory this morning, when I was shopping for fish in Crown Heights. The guy at the fish store told me that thanks to the Polar Vortex, 90% of Lake Huron is frozen. Which means no whitefish. Which means no gefilte fish. So I put on my best impression of Charlotte in Sex and the City —”I said lean!”—and managed, through a combination of moxie and charm, to get him to give me the last three pounds of whitefish and pike in Crown Heights. Plus a pound of carp. Which means…gefilte fish!
Food is the easy part of the seder. The hard part is making it all mean something. When I was a union organizer, I used to go to freedom seders. Being part of the labor movement, I found it was easy to see points of connection between what I was doing and this ancient story of bondage, struggle, and emancipation (a story, however, that we never seem to really tell at Passover).
Then, as my feelings about Zionism became more critical, I found a new point of connection to Passover: using the Seder, and the Exodus story, as a moment to reflect upon the relationship between the Jews, the land of Israel, and possession of that land, to ask why we think of emancipation in terms of possession. For a while there, we’d hold seders with readings from Michael Walzer’s Exodus and Revolution and Edward Said’s brilliant critique of Walzer in Granta: “Michael Walzer’s Exodus and Revolution: A Canaanite Reading.”
But nowadays, the Seder is harder for me. I’m more puzzled by the meaning of slavery and emancipation; I find it more difficult to make the connections I used to make. The Haggadah seems stranger, more remote, than ever.
Keith Robinson and Angel Harris in the NYT (image from Wikimedia Commons):
Over the past few years, we conducted an extensive study of whether the depth of parental engagement in children’s academic lives improved their test scores and grades. We pursued this question because we noticed that while policy makers were convinced that parental involvement positively affected children’s schooling outcomes, academic studies were much more inconclusive.
Despite this, increasing parental involvement has been one of the focal points of both President George W. Bush’s No Child Left Behind Act and President Obama’s Race to the Top. Both programs promote parental engagement as one remedy for persistent socioeconomic and racial achievement gaps.
We analyzed longitudinal surveys of American families that spanned three decades (from the 1980s to the 2000s) and obtained demographic information on race and ethnicity, socioeconomic status, the academic outcomes of children in elementary, middle and high school, as well as information about the level of parental engagement in 63 different forms.
What did we find? One group of parents, including blacks and Hispanics, as well as some Asians (like Cambodians, Vietnamese and Pacific Islanders), appeared quite similar to a second group, made up of white parents and other Asians (like Chinese, Koreans and Indians) in the frequency of their involvement. A common reason given for why the children of the first group performed worse academically on average was that their parents did not value education to the same extent. But our research shows that these parents tried to help their children in school just as much as the parents in the second group.
Even the notion that kids do better in school when their parents are involved does not stack up.