The G20’s Misguided Globalism


Dani Rodrik in Project Syndicate:

The G20 has its origins in two ideas, one relevant and important, the other false and distracting. The relevant and important idea is that developing and emerging market economies such as Brazil, India, Indonesia, South Africa, and China have become too significant to be excluded from discussions about global governance. While the G7 has not been replaced – its last summit was held in May in Sicily – G20 meetings are an occasion to expand and broaden the dialogue.

The G20 was created in 1999, in the wake of the Asian financial crisis. Developed countries initially treated it as an outreach forum, where they would help developing economies raise financial and monetary management to the developed world’s standards. Over time, developing countries found their own voice and have played a larger role in crafting the group’s agenda. In any case, the 2008 global financial crisis emanating from the United States, and the subsequent eurozone debacle, made a mockery of the idea that developed countries had much useful knowledge to impart on these matters.

The second, less useful idea underpinning the G20 is that solving the pressing problems of the world economy requires ever more intense cooperation and coordination at the global level. The analogy frequently invoked is that the world economy is a “global commons”: either all countries do their share to contribute to its upkeep, or they will all suffer the consequences.

More here.

Monopoly was invented to demonstrate the evils of capitalism


Kate Raworth in Aeon:

‘Buy land – they aren’t making it any more,’ quipped Mark Twain. It’s a maxim that would certainly serve you well in a game of Monopoly, the bestselling board game that has taught generations of children to buy up property, stack it with hotels, and charge fellow players sky-high rents for the privilege of accidentally landing there.

The game’s little-known inventor, Elizabeth Magie, would no doubt have made herself go directly to jail if she’d lived to know just how influential today’s twisted version of her game has turned out to be. Why? Because it encourages its players to celebrate exactly the opposite values to those she intended to champion.

Born in 1866, Magie was an outspoken rebel against the norms and politics of her times. She was unmarried into her 40s, independent and proud of it, and made her point with a publicity stunt. Taking out a newspaper advertisement, she offered herself as a ‘young woman American slave’ for sale to the highest bidder. Her aim, she told shocked readers, was to highlight the subordinate position of women in society. ‘We are not machines,’ she said. ‘Girls have minds, desires, hopes and ambition.’

In addition to confronting gender politics, Magie decided to take on the capitalist system of property ownership – this time not through a publicity stunt but in the form of a board game. The inspiration began with a book that her father, the anti-monopolist politician James Magie, had handed to her. In the pages of Henry George’s classic, Progress and Poverty (1879), she encountered his conviction that ‘the equal right of all men to use the land is as clear as their equal right to breathe the air – it is a right proclaimed by the fact of their existence’.

More here.

Global Extreme Poverty

Max Roser and Esteban Ortiz-Ospina over at Our World in Data:

The most important conclusion from the evidence presented in this entry is that extreme poverty, as measured by consumption, has been going down around the world in the last two centuries. But why should we care? Is it not the case that poor people might have less consumption but enjoy their lives just as much—or even more—than people with much higher consumption levels?

One way to find out is to simply ask. Subjective views are an important way of measuring welfare.

This is what the Gallup Organization did. The Gallup World Poll asked people around the world what they thought about their standard of living—not only about their income. The following chart compares the answers of people in different countries with the average income in those countries. It shows that, broadly speaking, people living in poorer countries tend to be less satisfied with their living standards.

Dissatisfaction with standard of living vs GDP per capita

This suggests that economic prosperity is not a vain, unimportant goal but rather a means for a better life. The correlation between rising incomes and higher self-reported life satisfaction is shown in our entry on happiness.

This is more than a technical point about how to measure welfare. It is an assertion that matters for how we understand and interpret development.

First, the smooth relationship between income and subjective well-being highlights the difficulties that arise from using a fixed threshold above which people are abruptly considered to be non-poor. In reality, subjective well-being does not suddenly improve above any given poverty line. This makes using a fixed poverty line to define destitution as a binary ‘yes/no’ problematic. Therefore, while the International Poverty Line is useful for understanding the changes in living conditions of the very poorest of the world, we must also take into account higher poverty lines reflecting the fact that living conditions at higher thresholds can still be destitute.

And second, the fact that people with very low incomes tend to be dissatisfied with their living standards shows that it would be incorrect to take a romantic view on what ‘life in poverty’ is like. As the data shows, there is just no empirical evidence that would suggest that living with very low consumption levels is romantic.

More here.

Chocolate Can Protect Our Brains

Sheherzad Preisler in Olive Oil Times:

A research team based at Italy’s University of L’Aquila has published a new study reporting that cocoa beans contain high concentrations of flavanols, naturally occurring compounds that can protect our brains. The team, whose findings were published in Frontiers in Nutrition, reviewed the current scientific literature in the hopes of finding out whether the sustained concentrations of cocoa flavanols found in regular chocolate-eaters had any effect on the brain. What the team found was a breadth of trials in which participants who regularly consumed chocolate processed visual information better and had improved “working memories.” Furthermore, women who consumed cocoa after a sleepless night saw a reversal of the negative side effects that come from sleep deprivation, such as compromised task performance. This could be great for those who work particularly stressful jobs that compromise their sleep, as well as those with recurring sleep issues.

Diets such as the Mediterranean diet encourage the consumption of chocolate in moderation, and this study further supports such suggestions. However, the results should be taken with a grain of salt: the positive effects of cocoa flavanols differed depending on the kind of mental test used. Young adults in good health needed a very demanding cognitive test to expose cocoa’s immediate benefits. Most research on this subject to date involves elderly populations who have consumed cocoa flavanols for anywhere between five days and three months. For this population, daily consumption of cocoa flavanols had the most profound positive effect on cognition, improving verbal fluency, processing speed, and attention span. The benefits were most noticeable in subjects whose cognitive abilities were mildly impaired or whose memories had already begun to decline.

More here.

Two people drive drunk at night: one kills a pedestrian, one doesn’t. Does the unlucky killer deserve more blame or not?

Robert J Hartman in Aeon:

There is a contradiction in our ordinary ideas about moral responsibility. Let’s explore it by considering two examples. Killer, our first character, is at a party and drives home drunk. At a certain point in her journey, she swerves, hits the curb, and kills a pedestrian who was on the curb. Merely Reckless, our second character, is in every way exactly like Killer but, when she swerves and hits a curb, she kills no one. There wasn’t a pedestrian on the curb for her to kill. The difference between Killer and Merely Reckless is a matter of luck.

Does Killer deserve more blame – that is, resentment and indignation – than Merely Reckless? Or, do Killer and Merely Reckless deserve the same degree of blame? We feel a pull to answer ‘yes’ to both questions. Let’s consider why.

On the one hand, we believe that Killer deserves more blame than Merely Reckless, because it’s only Killer who causes the death of a pedestrian. Plausibly, a person can deserve extra blame for a bad result of her action that she reasonably could have been expected to foresee, and causing the death of a pedestrian by driving drunk is that kind of bad consequence. So, even though they deserve an equal degree of blame for their callous and reckless driving, Killer deserves more blame overall, because only Killer’s foreseeable moral risk turns out badly.

On the other hand, we believe that Killer and Merely Reckless must deserve the same degree of blame, because luck is the only difference between them, and luck, most of us think, cannot affect the praise and blame a person deserves. It would be unfair for Killer to deserve more blame due merely to what happened to her, because moral judgment is about a person and not what happens to her. So, they must deserve the same degree of blame.

In summary, our commonsense ideas about moral responsibility imply the contradiction that Killer and Merely Reckless do and do not deserve the same amount of resentment and indignation. More generally, our commonsense ideas about moral responsibility have the paradoxical implication that luck in results can and cannot affect how much praise and blame a person deserves.

Nevertheless, the vexation runs deeper. Luck clearly affects the results of actions but, less obviously, as I’ll demonstrate, luck can also affect actions themselves.

More here.

10,000 Hours With Claude Shannon: How A Genius Thinks, Works, and Lives

Rob Goodman and Jimmy Soni in The Mission:

For the last five years, we lived with one of the most brilliant people on the planet.

Sort of.

See, we just published the biography of Dr. Claude Shannon. He’s the most important genius you’ve never heard of, a man whose intellect was on par with Albert Einstein and Isaac Newton.

We spent five years with him. It’s not an exaggeration to say that, during that period, we spent more time with the deceased Claude Shannon than we have with many of our living friends. He became something like the roommate in the spare bedroom of our minds, the guy who was always hanging around and occupying our head space.

Yes, we were the ones telling his story, but in telling it, he affected us, too. Geniuses have a unique way of engaging with the world, and if you spend enough time examining their habits, you discover the behaviors behind their brilliance. Whether or not we intended it to, understanding Claude Shannon’s life gave us lessons on how to better live our own.

That’s what follows in this essay. It’s the good stuff our roommate left behind.

More here.

‘Make It So’: ‘Star Trek’ and Its Debt to Revolutionary Socialism


A.M. Gittlitz in the NYT:

Gorky was a fan of the Cosmism of Nikolai Fyodorov and Konstantin Tsiolkovsky, a scientific and mystical philosophy proposing space exploration and human immortality. When Lenin died four years after meeting with Wells, the futurist poet Vladimir Mayakovsky’s line “Lenin Lived, Lenin Lives, Lenin Will Live Forever!” became not only a state slogan, but also a scientific goal. These Biocosmist-Immortalists, as they were known, believed that socialist scientists, freed from the constraints of the capitalist profit motive, would discover how to abolish death and bring back their comrades. Lenin’s corpse remains preserved for the occasion.

Bogdanov died in the course of his blood-sharing experiments, and other futurist dreams were sidelined by the industrial and militarist priorities that led up to World War II. In the postwar period, however, scientists inspired by Cosmism launched Sputnik. The satellite’s faint blinking in the night sky signaled an era of immense human potential to escape all limitations natural and political, with the equal probability of destroying everything in a matter of hours.

Feeding on this tension, science fiction and futurism entered their “golden age” by the 1950s and ’60s, both predicting the bright future that would replace the Cold War. Technological advances would automate society; the necessity of work would fade away. Industrial wealth would be distributed as a universal basic income, and an age of leisure and vitality would follow. Humans would continue to voyage into space, creating off-Earth colonies and perhaps making new, extraterrestrial friends in the process. In a rare 1966 collaboration across the Iron Curtain, the astronomer Carl Sagan co-wrote “Intelligent Life in the Universe” with Iosif Shklovsky. This work of astrobiological optimism proposed that humans attempt to contact their galactic neighbors.

More here.

criticizing rorty’s critics

María Pía Lara at the LARB:

Why then was Rorty ever considered a relativist? Here is one answer: Throughout his career, Rorty was against prescriptions, against thinking that he could provide us with universal foundations or discoveries. Instead, he sought to recover the successes of labor unions and other leftist organizations. This included younger leftists, who engaged in civil disobedience after seeing anticommunism used as an excuse to destroy innocent people in Southeast Asia. Rorty maintained that the killing of civilians and soldiers in Vietnam was morally indefensible and that the war had ended up degrading the morals of the United States. Moreover, he claimed that the political effectiveness of the antiwar movements would give hope to future generations.

Rorty often cited the contributions of pragmatists like John Dewey or William James, whose essays he compared to Walt Whitman’s poetry, because they were aware that it is in the making of something — a movement, a concept, a turn of a phrase to describe our world — rather than in finding “truths,” that we articulate social and political changes for the better. He observed that both writers believed that “democracy” and “the project of America” was “a political construction” and could be taken as “convertible,” that is, “equivalent” terms.

more here.

What does Jane Austen mean to you?

Geoff Dyer and many others at the TLS:

We did Emma for A Level, so it was one of the first serious novels I ever read. In a sense, then, Jane Austen is literature to me. She was not just one of the first novelists I read but also the oldest, i.e. earliest. You can start further back, of course, but romping through Tom Jones feels like a bit of a waste of olde time in the way that Persuasion never does. I associate reading Austen with a consciousness of the gap between my limited life experience – swilling beer, basically – and the expanded grasp of the psychological subtleties and nuances of situations and relationships that her books gradually revealed. But I’m conscious also of a different kind of gap: that between the riches afforded by the novels and the tedium of the criticism served up alongside them. Macmillan Casebooks – anthologies of critical essays – were the default educational tools even though most of the pieces in the one on Emma are complete dross. The process whereby “doing English” morphed into “doing criticism” began with Austen and continued all the way through university. Was this a purposeful deterrent? George Steiner is right: the best critical essay on Jane Austen is Middlemarch.

Whereas my head is full of Shakespeare, only a few lines from Austen have stayed with me – the very ones, predictably, that had us smirking at school: “Anne had always found such a style of intercourse highly imprudent” (Persuasion), or Mr Elton “making violent love” to Emma in a carriage.

more here.

was Billy Budd black?

Philip Hoare at The New Statesman:

Was Billy Budd, the Handsome Sailor at the heart of the book, black? Scholars such as John Bryant believe that there is internal evidence in the manuscript of the book – found in a bread tin after Melville’s death in 1891 and not published until 1924 – that the author had played with the idea of making his hero a man of African heritage. Billy is loved by all the crew and is described as blond and blue-eyed later in the story. Yet the sensuous descriptions of the Liverpool sailor and the Greenwich veteran elide to create a counterfactual version in which Billy becomes a black star at the centre of his constellation of shipmates.

Indeed, some critics – most notably, Cassandra Pybus at the University of Sydney – have suggested that another 19th-century anti-hero was a person of colour. In Wuthering Heights, published in 1847, two years before Melville’s visit, Heathcliff is described as a “regular black”, an orphan found in the Liverpool docks – an intriguing notion explored in Andrea Arnold’s brilliant 2011 film adaptation.

Melville witnessed great changes in the fortunes of black Americans. Moby-Dick is an allegory of the struggle against slavery in the run-up to the American Civil War; the Melville scholar Robert K Wallace believes that the writer heard the fugitive slave-turned-emancipationist Frederick Douglass speak in the 1840s and that they may have even met.

more here.

Tuesday Poem


o aluminium roll,
o silver scroll

confined in
this cupboard,

bound in cardboard,
restrained behind

a jagged blade that tears
lengths away to mute

the bowls and
jars of the fridge —

o small,
spare life.

I would free it, the tinfoil. I’d lift it from its cabinet and make
a river of it, a smooth, grey sheen released through the house.

At the summit of the stairs, the source would spurt up
from Gougane Barra, setting a mountain stream to gush, and I’d lift it

and give it a push, I’d let the bright waters of the Lee flow down
the slope, to run a silver ribbon through the hall. Under the bridge

of a couch, I’d watch shadows of salmon and brown trout swim in
and out of riverweed. On a moonlit night, a man might stand there

with his son, the light of their torches poaching the waters. If the child
whispered “Oh look, the river’s smooth as tin foil!” his father

would hush him quickly, finger to lip, and turn to choose a hook.
The waters would surge onwards, swirling under doors to the city

-kitchen. Where gulls screech and shriek high, I would thrust swifter
currents that’d make islands of table legs and riverbanks of walls.

I’d give the river a voice to hum through the culverts that run under
cupboards, making of itself a lilting city song, its waters speckled

with gloom-shadows of mullet. I would put a single seal there,
lost, and make a red-haired girl the only person who’d see him.

I would tug that river back, then, the weight of all its stories
dragging after it, and haul it in loud armfuls all the way back to me.


torn, I’d
fold it, then,

and close
it back in

its press

by Doireann Ní Ghríofa
from Oighear
publisher: Coiscéim, Dublin, 2017

Translation: 2017, Doireann Ní Ghríofa
First published on Poetry International, 2017

The New Science of Daydreaming

Michael Harris in Discover:

“I’m sorry, Julie, but it’s just a fact — people are terrified of being in their heads,” I say. “I read this study where subjects chose to give themselves electric shocks rather than be alone with their own thoughts.” It’s the summer of 2015 and the University of British Columbia’s half-vacated grounds droop with bloom. Julie — an old friend I’ve run into on campus — gives me a skeptical side-eye and says she’s perfectly capable of being alone with her thoughts. Proving her point, she wanders out of the rose garden in search of caffeine. I glower at the plants. The study was a real one. It was published in 2014 in Science and was authored by University of Virginia professor Timothy D. Wilson and his team. Their research revealed that, left in our own company, most of us start to lose it after six to 15 minutes. The shocks are preferable, despite the pain, because anything — anything — is better than what the human brain starts getting up to when left to its own devices.

Or so we assume.

What the brain in fact gets up to in the absence of antagonizing external stimuli (buzzing phones, chirping people) is daydreaming. I am purposefully making it sound benign. Daydreaming is such a soft term. And yet it refers to a state of mind that most of us — myself included — have learned to suppress like a dirty thought. Perhaps we suppress it out of fear that daydreaming is related to the sin of idle hands. From at least medieval times onward, there’s been a steady campaign against idleness, that instigator of evil. Today, in the spaces where I used to daydream, those interstitial moments on a bus, in the shower, or out on a walk, I’m hounded by a guilt and quiet desperation — a panicked need to block my mind from wandering too long on its own. The mind must be put to use.

More here.

England’s Mental Health Experiment: It makes economic sense

Benedict Carey in The New York Times:

England is in the midst of a unique national experiment, the world’s most ambitious effort to treat depression, anxiety and other common mental illnesses.

The rapidly growing initiative, which has gotten little publicity outside the country, offers virtually open-ended talk therapy free of charge at clinics throughout the country: in remote farming villages, industrial suburbs, isolated immigrant communities and high-end enclaves. The goal is to eventually create a system of primary care for mental health not just for England but for all of Britain.

At a time when many nations are debating large-scale reforms to mental health care, researchers and policy makers are looking hard at England’s experience, sizing up both its popularity and its limitations. Mental health care systems vary widely across the Western world, but none have gone nearly so far to provide open-ended access to talk therapies backed by hard evidence. Experts say the English program is the first broad real-world test of treatments that have been studied mostly in carefully controlled lab conditions.

The demand in the first several years has been so strong it has strained the program’s resources. According to the latest figures, the program now screens nearly a million people a year, and the number of adults in England who have recently received some mental health treatment has jumped to one in three from one in four and is expected to continue to grow. Mental health professionals also say the program has gone a long way to shrink the stigma of psychotherapy in a nation culturally steeped in stoicism. “You now actually hear young people say, ‘I might go and get some therapy for this,’” said Dr. Tim Kendall, the clinical director for mental health for the National Health Service. “You’d never, ever hear people in this country say that out in public before.”

The enormous amount of data collected through the program has shown the importance of a quick response after a person’s initial call and of a triage-like screening system in deciding a course of treatment. It will potentially help researchers and policy makers around the world to determine which reforms can work — and which most likely will not. “It’s not just that they’re enhancing access to care, but that they’re being accountable for the care that’s delivered,” said Karen Cohen, chief executive of the Canadian Psychological Association, which has been advocating a similar system in Canada. “That is what makes the effort so innovative and extraordinary.”

More here.