Monday, November 30, 2015
Heather Ackroyd and Dan Harvey. Mother and Child, 2001.
" ... By projecting a negative image onto a patch of grass under controlled light, Ackroyd & Harvey use grass's photosynthetic abilities to create living images. ..."
Feeling the Love
by Maniza Naqvi
This is about the biography of an American woman who was the author of key militant interpretations and texts that are followed by extremists today. She left New York in the 1960s, went to Egypt, and from there came to live in Lahore with the founder of the Jamaat e Islami, Maulana Maududi. She lived in Lahore until she died in 2012 at the age of seventy-eight. This is also about an Austro-Hungarian man who became the spiritual advisor of the House of Saud. It is about Maryam Jameelah, an American woman whose given name was Margaret Marcus, and an Austro-Hungarian man, Muhammad Asad, whose given name was Leopold Weiss. It is about them then, and it is about us now.
The first time I saw razor wire was along the checkpoint at Erez in Gaza, a dozen years and change ago. More recently I'd seen it coiled atop the compound walls of the offices and homes of donor agencies in Addis Ababa, a city changing fast, with shanty towns being mowed down to make way for overpasses and underpasses, malls and residential paradises for foreigners with purchasing power and for the visiting and returning diaspora. And I'd seen it at the peripheries of a holiday lakeside resort in Malawi built on land grabbed from fishermen, presumably to keep the animals out. Now I stared at the knife's edge of gobs of razor wire at ground level in Lahore. And the last time I'd seen such an array of foreign corporate journalists passing through the barricades to speak to the local citizens as the notable and primary experts on that country, on just about every aspect of it, was, well, in Bosnia. Now here they were, 'conflict' experts, some the same, doing the same thing in Lahore.
Large coils of razor wire snaked around the circumference of the venue for the Lahore Literary Festival as a protective barrier against possible attackers. A barricade of police vans and policemen provided further deterrence and reinforcement. Snipers sat on the rooftop. None of this gave me any comfort. Security guards don't translate into security, or into secure progressive thinking. Was the space for art and literature being protected or imprisoned?
Two days before the start of the literature festival, a bomb had exploded near a police station and a shrine not far from the venue. The Punjab government had refused to provide security for the festival, citing too great a security risk, until 11.30 p.m. on the eve of the festival, when the resolute organizers prevailed, demanding that the Government not shirk its responsibility and that it provide full security for the event.
Only a day earlier another imambargah had been bombed, this time outside Islamabad, a few hundred miles north. A month earlier a bomb had exploded in an imambargah in Shikarpur, hundreds of miles to the south. A month before that a school in Peshawar had been attacked, killing one hundred and forty children. Hundreds of lives had been taken in just these attacks alone. With dull, grim regularity, every few weeks a terrible attack occurred on innocent civilians who were worshipping, studying, rejoicing or grieving.
In Pakistan, targeted attacks were on the increase in February 2015. Thousands of lives, at least a hundred thousand Pakistani lives, have been taken by such attacks in the last fifteen years. No one colors their Facebook pages in our colors, no arms are linked together in parades of the coalition of the willing in our streets, no showering of sympathy. Just drones, bombs and bullets.
Daily threats and fear stalk everyone, and people are clearly frightened. Invisible blasphemy accusers seem to be prowling the streets, stalking their targets. A bullet or a bomb could strike anywhere at any time. And the Government seems to provide more security to those who spew hate and encourage such heinous crimes than it does to the victims.
Yet in February of this year, in the face of such fear and terror, the Lahore Literary Festival and the Karachi Literature Festival steeled themselves and stoically went forward. A small group, and yes, a class of people, arguably with less than three degrees of separation from each other by marriage or by birth, held the fort: human rights activists, politicians, bureaucrats, ex-military officers, writers, poets, actors, musicians and artists, or all of the above in one. We filled the green room. And the audience seats. And session after session for three days we manned the figurative barricades, while the literal ones, the razor wire barricades, were left to the less socially connected and more heavily armed policemen and policewomen of a much lesser pay grade. We felt as if we were comrades in pens, determined to resist attempts to color the narrative of Pakistan as one terrorized by sectarian violence and threats, refusing to give in to such repressive thinking. Inside the barricaded and sniper-secured festival there was a camaraderie akin to that of inmates in a large bunker. A sense of euphoric defiance mixed with counting the moments till the final session and a safe and uneventful close. In the green room, the delegates mingled with each other over cups of tea and coffee and a sumptuous lunch buffet of many a Lahori delicacy. And there were gifts for the guests: exquisite designer earrings for the women and even more exquisite cufflinks for the men.
It was in this atmosphere of high tension and stress, and of 'it's all good till it isn't', that I was asked to be in conversation with an American author about her book, which was to be launched at the Festival. I was on two panels already, and without thinking I jumped at it: one more session. More exposure! Great. I said yes and asked questions later. Later that day I googled the author, Deborah Baker, and her book.
I found to my consternation that the book was called The Convert: A Tale of Exile and Extremism. The book is about a young American woman who converted to Islam, came to live in Lahore in 1962, never left, and died there in 2012 at the age of seventy-eight.
The book is the biography of an American, Maryam Jameelah, or Margaret Marcus, told through her own letters to her parents, somewhat re-edited by Deborah Baker. Praise for The Convert includes a blurb on the cover: "The Convert is the most brilliant and moving book written about Islam and the West since 9/11." —Ahmed Rashid.
I had never heard of Maryam Jameelah before. Nevertheless, now here I was, about to participate in a conversation about her with her biographer, Deborah Baker. I was about to be educated. Most people I knew whom I subsequently asked if they knew anything about Maryam Jameelah had never heard of her either. And when I told them what I had learned, it only confirmed their long-held assertion that throughout Pakistan's history the religious parties, and this one in particular, had been supported by the Americans.
Worried, I had asked the organizers about the appropriateness of someone like me conducting this conversation, given the context of extremism and its threatening presence in the country. I tried to get out of the session during a phone conversation with one of the wonderful organizers. But instead I agreed to do it, and in a way I trapped myself. I felt that I was letting everyone down by voicing such a misgiving of fear. I felt that I would appear a coward, not a team player, if I didn't go ahead with it, and that by being such a coward I would show myself unworthy of being invited again.
Ahmed Rashid, one of the core organizers of the Lahore Literary Festival, tried to allay my fears and kindly reassured me when I expressed my concern. It would be fine, he said; this was a good book to be in conversation about. I would enjoy it, he said cheerfully. 'It's an important book. Most interesting. Stay away from talking about religion. Keep the conversation about the two different social and cultural worlds that she bridged and experienced. Talk about bridging cultures! Just don't talk about religion. You'll be alright.'
In preparation for the session, in addition to of course reading the book, I had found two articles: one written by Deborah Baker for the Paris Review (here), and the other a review of her book in the New York Times (here). The New York Times said: 'Deborah Baker is a serious biographer who specializes in fairly crazy writers.' And later in the review, this: 'Baker sidesteps one of the book's most crucial questions: "Was Maryam Jameelah a schizophrenic? I couldn't say." Yet the letters led me to believe she was. Baker mentions that Jameelah was medicated with Compazine, but blurs the implications when she omits that it's prescribed for schizophrenia. She also leaves out instances when Jameelah unambiguously acknowledges why she takes the anti-schizophrenic medication Thorazine. In a letter of Sept. 15, 1981, for example, Jameelah wrote: "I have to take Thorazine every night. I know if I stop taking it, I will soon relapse into the same condition I was before I went to the hospital both in New York and Lahore."'
I had promised my family in New York that I wasn't going to do anything foolish or foolhardy while I was in Lahore; attending the Festival in such dangerous circumstances was already cause enough for worry. Every single relative and friend in Karachi and Lahore had expressed their own high level of stress over the sectarian danger stalking the country. Now I was going to be in conversation with Deborah Baker about her biography of Maryam Jameelah, aka Margaret Marcus, the daughter of, as Baker puts it, 'Secular Zionist Jews', who converted to Islam. In 1962, after converting to Islam and after years of correspondence with Maulana Maududi, Margaret Marcus, now Maryam Jameelah, came to Lahore and never returned to the United States. She had written extremist texts and was the adopted daughter of Maulana Maududi, the founder of the Jamaat e Islami. She died a few years ago in Lahore. Deborah Baker had accomplished the astonishing feat of bringing her back to life in her biography. It took Deborah Baker to enlighten me about Maryam Jameelah and the Jamaat e Islami.
I was scared to get into a conversation about this book, and that too on a stage in Pakistan in February 2015, when bombs and bullets were killing people of my ilk. And even though friends and my own good sense were telling me to politely back out, I went ahead. Why? Because I was far too terrified of being left out of the Festival, shunned by the literati. Can you imagine?
Deborah and I exchanged several emails before we met over tea at one of the Lahore Gymkhana's cafés overlooking the lush green golf course, on a chilly afternoon before the festival began. She signed off as Deb. She was from New York and lives in Delhi with her husband, also an author, Amitav Ghosh. She said she half suspected she had been invited because the organizers really wanted her husband to come too. I think she was being too self-effacing and underestimated her own talent. I didn't tell her that I half suspected I had been asked to do this session because no one else wanted to, given the terror threat level in the country. I expressed my concern about discussing such a book in so tense a context, but I was unable to convey my sense of foreboding and fear. She said I might be over-reading the threatening environment.
Deb, pretty in her colorful eyewear, her features to me a charming cross between Elizabeth Warren and Mary Tyler Moore, was attired much as I was, dressed for the part: a shawl and long kurta over tights. We had an hour to talk, and then Deb was off to an art opening in town. Admirably, Deb seemed to have more access to Lahore's art and intellectual glitterati than I did, though she had been in Pakistan only a few hours and had visited only once before: for two weeks a few years earlier, when she came to Lahore to interview Maryam Jameelah.
I got the opportunity to discuss Maryam Jameelah with the very erudite Khalid Ahmed, who knew a lot about her and in fact said he was distantly related to her through the man she married. He told me that Imran Khan, the cricketer turned Islamist politician, was also related to the man Maryam Jameelah ended up marrying. I had a chance to talk to Ahmed sahib about this and other things before another session, called Writing about Cityscapes, on which I was a panelist and which he was moderating.
While in Lahore I came to know about the translation of the Koran by an Austro-Hungarian journalist, Leopold Weiss, who in the 1920s traveled to Saudi Arabia, probably writing travelogues, and who became close to Ibn Saud, the founder of Saudi Arabia and the patriarch of the House of Saud. Leopold Weiss changed his name to Muhammad Asad. His translation of the Koran is au courant in Lahore's elite circles, now busy studying the Koran. There is something to be said about how the colonized elite are able to respect, read or speak only the languages of the colonizer; they respect only what is presented to them in translation, and can read their key cultural and religious texts only when these are translated into English or French by the colonizers. Their own heritage is presented to them through the optic of an outsider, garbled, and consumed by them whole. But that may be a different discussion.
But anyway, last February, in Lahore, a day before our session, Deb and I chatted briefly about the dinner party held for the Festival participants the night before. She told me she had had a harrowing encounter. An older gentleman, dressed to the nines, had started to choke, fainted, and fallen right into the laps of a seated Roger Cohen of the New York Times and Deborah Baker. 'Oh no!' I exclaimed, worried for the man. 'I know, right?' she said. 'I thought for a moment we were being attacked. It was all so sudden.' Deb had gotten hold of herself instantly and tried to help the ailing man, loosening his tie to make him comfortable before medical aid arrived. She said Roger felt he should take the man's shoes off too, to make him more comfortable. But when the gentleman came to, he was most disturbed to be shoeless and kept asking for his shoes. He was so embarrassed to be without them.

We made our way to our session hall and waited for the room to fill up. Once it was full we began right on time. I introduced Deborah using the New York Times description of her. Deb began by explaining why she wrote this book and chose this subject.
At the session, in conversation with Deb, I quoted from her essay in the Paris Review in which she had said that she had been searching for answers to 9/11. I referred to the pre-eminent historian Romila Thapar's keynote address at the opening session of the festival, in which she had cautioned us about the dangers of 'contextualizing the past with the present.'
Deb told the audience how she found her subject: while flipping through the name cards in the New York Public Library archives one day, a few years after September 2001, she found the unusual name Maryam Jameelah amongst the otherwise Anglo-Saxon and European names. The boxes of Maryam Jameelah's letters to her parents from Pakistan, archived at the Library, were a biographer's treasure trove.
I listened anxiously. I was apprehensive that any word out of place by either of us could be misconstrued by someone in the audience who might take umbrage at anything that was said. I thought of the snipers on the rooftop of the Alhamra, the razor wire all around it, and the metal detectors at the entrance to every hall. I thought of the invisible snipers on the prowl, targeting and killing people with my background. Lahore had a track record of extremist guards killing the guarded. And here I was, in conversation on The Convert.
I kept surveying the room. Gauging the audience and individual faces. Some faces in the front row were impossible to scrutinize, covered as they were by niqabs. I let Deb talk on about her motivation for writing the book and about its content.
My interventions were limited to a few points: to refer to Romila Thapar's remark about the danger of contextualizing the past with the present; to note that, according to the Times review, Maryam Jameelah was clinically unstable, most probably schizophrenic, and on heavy-duty medication, and to ask why Baker had left this out of her book; and to ask how it was possible to look for explanations for 9/11 in an individual of this nature. I intervened once more during the question session, when a member of the audience wanted to know why Americans were so negative about Islam. I said we cannot ask Deborah Baker, who is just one individual, to explain 350 million people, or expect that all 350 million people think alike.
I asked ‘Does a biographer end up really only writing an autobiography?'
As soon as the session was over, Deb was surrounded by a crowd of people. I left the hall with several friends of mine. In the evening at about six thirty, just as the festival was ending, I stood on the lawn and sighed with relief that the three-day event had gone peacefully and safely despite the insecurity and the sense of dread, the stalking danger. I thought about making my way out of the venue and moving as far away as possible from the vicinity of the barbed wire.
Deb caught up with me just then on the lawn. A couple of security policemen with automatic weapons stood nearby; there were snipers overhead on the rooftop. She looked so relaxed and content as she took both my hands in hers. Deb said how happy she was with our session and how well I had conducted it. I said I was glad it had gone well, and again, with a shudder, I expressed the sense of anxiety and fear I had felt throughout the session. She laughed and said I was overstating it.
Pressing my hands in hers she said warmly, "Couldn't you just feel the love in the room?"
Monday, November 23, 2015
Wally Gilbert. Difference #2. 2015.
Monday, November 16, 2015
Are We Witnessing a Major Shift in America's Two-Party System?
by Akim Reinhardt
In the 150 years since the end of the U.S. Civil War, the Republicans and Democrats have maintained a relentless stranglehold on every level of American politics, nearly everywhere, at all times. While a handful of upstart third parties and independent candidates have periodically made waves, none has ever come close to capturing the White House, or earned more than a brief smattering of Congressional seats. Likewise, nearly every state and local government has remained under the duopoly's exclusive domain.
Why a duopoly? Probably because of the way the U.S. electoral system is structured. Duverger's Law tells us that a two-party duopoly is the very likely outcome when each voter gets one vote and can cast it for just one candidate to determine a single legislative seat.
However, in order to maintain absolute control of American politics and fend off challenges from pesky third parties, the Democrats and Republicans needed to remain somewhat agile. The times change, and in the endless quest to crest 50%, the parties must change with them.
Since the Civil War, both parties have shown themselves flexible enough to roll with the changes. The Civil War, the Great Depression, and the Civil Rights era each upended the political landscape, leading political constituencies to shift, and forcing the Democrats and Republicans to substantially and permanently reorient themselves.
Now, several decades removed from the last major reshuffling of the two major parties, we may be witnessing yet another major transformation of the duopoly as the elephant and the donkey struggle to remain relevant amid important social changes. The convulsions of such a shift are reflected in the tumultuous spectacle of the parties' presidential nomination processes.
The Republicans are in a state of disarray, with inexperienced outsiders currently leading the pack while career politicians struggle to find their way. Meanwhile, the presumptive Democratic nominee, Hillary Clinton, also faces a serious threat from an outsider, independent socialist Bernie Sanders.
Personally I very much doubt that an outsider such as Sanders, Donald Trump, or Ben Carson will emerge to claim the nomination of either party. Party nominating procedures and the oceans of money flowing to mainstream candidates make it rather unlikely. However, these outsiders' surprising successes thus far may be an indication of something greater than their own charisma. It may very well signal the fourth major shift in America's two-party system since the Civil War.
How and why is such a shift occurring? And what might the two parties look like after the dust settles? To answer those questions, we should begin with a brief history of the duopoly itself.
The 1st Iteration: 1854-1932
The first iteration of the Democratic-Republican duopoly emerged from the tumult of the Civil War era and lasted until the Great Depression. For nearly 80 years, the duopoly was framed by sectional divide as the nation lingered in the long shadow of the Civil War. The Democrats were the party of the South and the Republicans were the party of the North.
As Southern states re-entered the union during the Reconstruction Era (1865-77), white supremacists "redeemed" one Southern state after another; they used intimidation and violence to suppress the new African American franchise, to scare their white sympathizers out of the G.O.P., and to establish the South as a one-party region.
Later, white Southerners would employ legalistic tactics to codify the widespread disenfranchisement of black voters, and white Democrats would rule the South virtually unchallenged for a century. Along the way, they built a brutal system of Jim Crow apartheid based on the economic exploitation, social segregation, cultural denigration, and political oppression of African Americans.
Meanwhile, Republican interest in African American affairs had largely faded by the end of Reconstruction. A strain of moralism remained in the party, taking various forms, but business concerns came to dominate the G.O.P. at the national level. During the ensuing decades, Republicans supported business interests of the industrial North by promoting various protectionist tariffs and championing a tight currency to benefit creditors. The growing ranks of Northern urban industrial wage workers could often be brought to heel on this issue because they feared the rising prices that would result from an inflationary monetary policy.
All the while, Northerners continued to stew in their resentment over the war, and the Republican Party deftly used this to its advantage in dominating politics above the Mason-Dixon line. In fact, rallying a Republican electoral campaign around fiery rhetoric about the Civil War became so commonplace during the back half of the 19th century that the tactic earned a nickname: Waving the Bloody Shirt.
As the Democrats maintained their monopoly on the South, they also made modest but important inroads in the North. Urban Democrats appealed to urban immigrants who resented the patronizing bigotry of Victorian Republicans. By the end of the 19th century, cities like New York and Boston were dominated by immigrant-backed Democrats.
However, the emergence of a nascent Northern Democratic wing wasn't enough to prevent Republican dominance in national politics. With the exception of New York City, cities weren't big enough to swing states. Nearly every Northern state was reliably Republican; Ohio and New York were the only important swing states. And since the North had a far greater populace than the South, the results were predictable.
From 1860 to 1908, the Democrats managed only one successful presidential candidate: former New York Governor Grover Cleveland, who won non-consecutive elections in 1884 and 1892.
The Democrats broke through again in 1912 with Woodrow Wilson, but this was only possible because former Republican president Theodore Roosevelt broke away from the G.O.P. and ran a third party ticket against the Republican incumbent, William Taft, thereby splitting the Republican vote. When Wilson finished up his two terms in 1921, Republican dominance of the White House resumed at an unprecedented level with three successive and resounding victories by Warren Harding, Calvin Coolidge, and Herbert Hoover.
After Hoover's 1928 drubbing of New York Democrat Al Smith, some pundits wondered if the Democrats were still a relevant national party. After all, in a span of 68 years, they had fielded just two presidents, and their defeats of the 1920s were staggering, with Republicans repeatedly setting records for margin of electoral victory. And after severe immigration restriction was enacted in 1924, Democrats' primary opportunity to expand in the North seemed cauterized.
Such concerns evaporated with the outbreak of the Great Depression in late 1929, and the popular tidal wave that brought Democrat Franklin Roosevelt to the White House in 1932. He would go on to win four consecutive elections, and his rise to power marks the second iteration of the Democratic-Republican duopoly.
The 2nd Iteration: 1932-1980
If the catastrophe of civil war had established the first iteration of the Republican-Democratic duopoly, then the calamity of the Great Depression created an opportunity for the master politician Franklin Roosevelt to usher in the second iteration. For while his initial victory in 1932 was largely a result of Herbert Hoover's deep unpopularity, his 1936 campaign reshaped the Democratic Party for decades to come, and by default, greatly impacted the Republicans too.
In 1932, Roosevelt had run as a centrist against Hoover's dogmatic laissez-faire conservatism. FDR was touted as the man who saved capitalism from itself. However, by 1936, he had learned the lesson that on some level seemed to elude Barack Obama: it is nearly impossible to broker reasonable compromises with extremists.
So instead of continuing to extend the olive branch to conservatives, only to see them continually snap it and sneer, FDR comfortably settled into his progressive base and forged what came to be known as the Roosevelt Coalition. The Solid South, as the Democratic South was known, would be complemented by four major constituencies in the North:
-African Americans: FDR hadn't excluded them from the New Deal, and they were grateful.
-Urbanites: FDR expanded the old Democratic immigrant vote with a progressive agenda. For example, he championed the repeal of prohibition.
-Labor: FDR promoted legislation that expanded union rolls, and union members repaid him with votes.
-Farmers: Various New Deal programs helped rural America. This would prove to be the weakest link in the chain.
From among these core Northern Democratic constituencies, black voters offer the most dramatic example of the shift that had taken place. In 1932, Hoover claimed only 39% of the total popular vote, but 75% of the African American vote, as blacks remained loyal to the party of Lincoln. Just four years later, Roosevelt the Democrat captured 75% of the black vote.
The second iteration of the Republican-Democratic duopoly was now set. In addition to maintaining the Solid South, Democrats were the party of America's urban working class: white ethnics descended from European immigrants who had arrived between 1880 and 1920; African Americans who fled the rural, Jim Crow South for Northern industrial jobs; and a rising tide of organized labor among the working classes.
The Republicans, meanwhile, catered to the established white population outside the South. They remained the party of business interests and moralism. Their primary demographic was middle-class white Protestants. In other words, the Republican Party appealed to the majority of Americans who lived outside the South and the industrial cities of the Northeast and Midwest.
This competitive balance defined the Democratic-Republican duopoly for several decades after World War II. Both parties were conservative in foreign affairs during the Cold War. On domestic issues, the Democrats followed FDR's lead, pushing a progressive agenda that reached its climax with Lyndon Johnson's Great Society (1963-69), while Republicans held a center-right position. Scarred by the Great Depression, both parties accepted the basic tenets of Keynesian economics, believing the government had a role to play in moderating the boom/bust economic cycle. There were real differences between the two parties, but the distance between them was not vast.
Of the eight presidential elections following Franklin Roosevelt's reign, each party won four.
However, by the late 1960s, this second iteration of the duopoly was beginning to crumble under the weight of major demographic, social, and economic shifts. The person who best took advantage of those changes was Ronald Reagan.
The 3rd Iteration: 1980-?
By the late 1960s, three main factors had begun to wreak havoc on the Democrats' old Roosevelt coalition: deindustrialization, suburbanization, and civil rights.
The decline of America's manufacturing economy depleted the ranks of union workers specifically and blue collar workers more generally. It also greatly damaged the cities of the industrial Northeast and Midwest.
Just as cities began to deteriorate, a massive housing boom on their rural edges was subsidized by federal agencies such as the Veterans Administration (via the G.I. Bill) and the Federal Housing Administration. Farms morphed into suburbs, which then siphoned off populations from nearby cities. And these new suburban voters were up for grabs.
As cities lost population, capital, and tax revenues, crime skyrocketed and city services crumbled. White flight escalated in the 1970s and 1980s, and the white middle class largely vacated urban America for the new suburbs.
The other trend that radically reshaped American demography was a massive westward migration that had begun during World War II and continued unabated in the decades that followed. Millions of Americans left the East for the growing industrial sector along the Pacific coast, and to a lesser extent the Southwest.
However, post-WWII growth in Western cities like Los Angeles, Phoenix, Portland, and Seattle was fundamentally different from the growth of 19th-century Eastern cities. Amid cheap Western real estate and the rise of car culture, Western urban growth was actually suburban growth.
Thus, post-war white-flight suburbanization in the East was complemented by migratory suburbanization in the West, both of which shifted power away from traditional Democratic cities. But perhaps the most problematic development for the Democrats, or certainly the most complicated, was the rise of civil rights.
At first glance it seemed that African Americans regaining their franchise in the South was a win for Democrats. But of course it wasn't that simple. Millions of blacks had left the South since the early 20th century, mostly moving to Northern cities. Meanwhile, the Southern white backlash against civil rights drove many white Southerners out of the Democratic Party, and there were not enough black Southern voters to counter them.
The Southern backlash first appeared in presidential politics. Democrats continued to dominate Southern local and state elections for decades, but almost immediately began losing white voters in the quadrennial cycle. As early as 1948, when Democratic President Harry Truman made civil rights part of his platform, white Southerners voted for a third-party segregationist: the Democratic schismatic, "Dixiecrat" Strom Thurmond of South Carolina.
The third party segregationist trend continued in the 1960s with the rise of Alabama's George Wallace. But eventually it was the Republicans who adapted and began scooping up disaffected Southern whites. Richard Nixon employed his notorious Southern Strategy in 1972, courting white Southerners with barely coded racist appeals. Republicans would continue that tactic for two decades, proving that either major party was capable of profiting from racism.
But the long term electoral impact of Civil Rights stretched well beyond the South. The Civil Rights movement had helped usher in an era of protests that manifested itself in various forms ranging from the anti-Vietnam War movement, to women's rights, to Black, Red, and Chicano Power movements. And if white Southerners were alienated by black civil rights, then many white Northerners were alienated by the protest movements that followed.
Once again, Republicans pounced, offering a new law and order/tough on crime/silent majority platform that many white ethnics found attractive. With the Solid South slowly disintegrating, the biggest Northern Democratic pillar also began tilting Republican, pushed by their resentment over the protest culture, their fear of crime, and their disenchantment with cities that were increasingly dark and poor.
In 1980, Ronald Reagan's overwhelming victory against incumbent Jimmy Carter was made possible by this convergence of forces. As a former governor of California, Reagan represented the new suburban West; furthering Nixon's old Southern Strategy, he ate into the Democratic South, with Carter holding onto only his native Georgia; and the Gipper also swept most of the Northeast by appealing to voters the press dubbed Reagan Democrats: disaffected white urban ethnics, many of whom were now actually new suburbanites. Their patron saint was Archie Bunker.
The Reagan Revolution spawned the third iteration of the Democrat-Republican duopoly. Democrats still held the Northern urban vote, which was increasingly poor and minority. They also held most of the South in local and state elections. Meanwhile Republicans, still the party of business, began gobbling up much of suburbia by attracting several other conservative demographics: cultural (Christian) conservatives who prioritized issues like abortion, school prayer, and an opposition to feminism; social conservatives who wanted government to get tough on crime; hawkish political conservatives who rejected Nixon's detente Cold War strategies; and economic conservatives who opposed the New Deal reforms, loathed LBJ's Great Society reforms, and rejected the Keynesian economic doctrines that had dominated mid-century American politics.
Suburbia, now firmly established as the home to America's white middle class, was the new electoral battleground of the duopoly's third iteration. This posed a major problem for the Democrats. Generally speaking, suburbia was more conservative than the industrial cities of the 1st and 2nd iterations had been. Even the liberalization of immigration laws in 1965 didn't help the Democrats as much as they had hoped. Instead of replenishing cities, new immigrants were more likely to settle in suburbs, and became less reliably Democratic, although they still leaned that way.
During the 1990s, the Democrats adjusted to this new reality by moving to the right. The shift was embodied most dramatically by Bill Clinton, whose infamous politics of "triangulation" saw him repeatedly undercut Republicans by beating them to the conservative punch.
On issues like free trade, welfare reform, and mandatory sentencing, Clinton outmaneuvered the G.O.P. and dragged his party further from its progressive moorings. Meanwhile, Democrats placated their base by remaining liberal on social and cultural issues such as gun control and abortion rights. Split the difference and you might consider the Democrats the party of the Expansive Center, or the Straddled Center. They have remained there ever since.
The 4th Iteration?
The 1st iteration of the Democratic-Republican duopoly, based on geography in the aftermath of the Civil War, lasted three-quarters of a century. The 2nd iteration, based on social and economic class in the aftermath of the Great Depression, lasted half a century. The 3rd iteration, based on a tripod of suburbanization, deindustrialization, and Civil Rights, may be waning as we speak.
If so, then we are perhaps on the verge of a 4th iteration of the Republican-Democrat duopoly, one based on ideology.
Today both parties are struggling to define themselves in ways that appeal to voters. Even with structural forces largely insulating the Democrats and Republicans from third party challenges, they are stumbling badly as they try to maintain their relevance. Consider the evidence.
According to a Gallup poll conducted last January, 43% of America's registered voters are Independent. Only 30% are Democrats. Barely a quarter are Republicans. This is a staggering development, a forceful rejection of both parties by a plurality of the nation's voters. These numbers are even more impressive when one considers that more than half the states run closed primaries that require party registration to vote in party primaries.
Yet because of a political system that punishes third parties at every turn, the duopoly remains. And so, even as more than four-tenths of registered voters have abandoned the two major parties, third parties such as the Greens or Libertarians remain on life support. Instead of their expansion, we are seeing a reshaping of the Republicans and Democrats.
As tens of millions have fled the major parties, what often remains behind is an energized, extremist base. That is not to say most Independents are necessarily moderates. Rather, as disaffected party members leave, the remaining membership becomes more homogeneous and its ideology crystallizes. And so the Republicans move further into their conservative corner while Democrats are ever more bogged down in the straddled center.
Compounding this development is the increasing social and economic segregation of Americans, which has now reached unprecedented levels. The divides between rich and poor, white and black (or brown), and religious and secular, are thoroughly mirrored by geography. For example, as wealth inequality grows, wealthy families continue to segregate themselves in exclusive enclaves, while the poor are left behind in abandoned rural and urban pockets. Meanwhile, the various shades of middle class sort themselves out in suburbia.
Likewise, racial segregation in America is now worse than it has been at any time in the nation's history, including during the eras of slavery and Jim Crow. This development, which was utterly unfathomable after the Civil Rights movement toppled Jim Crow segregation in the 1960s, results from a complicated brew of economic and social factors. The net result is that even America's ultimate multi-cultural city, New York, is made up of little more than highly segregated ethnic and economic pockets.
Patterns of ethnic, social, and economic segregation dominate modern America. And that has made it all the easier for politicians to gerrymander local, state, and Congressional electoral districts; dividing up Americans isn't quite so hard when Americans have already divided themselves.
Consequently, fewer and fewer general elections at any level are competitive. Instead, Democrats and Republicans have created a patchwork of fiefdoms for themselves across the electoral landscape. In essence, many political districts, whether federal, state, or local, have descended into effective one-party rule. That in turn means more and more politicians face no real threat in the general election. Instead, they must expend almost all their energy and resources securing the party's nomination, which effectively translates into electoral victory.
As politicians increasingly run in what amount to single-party districts, they must cater to their party's base during primary elections. And with party ranks depleting, the parties are producing more extreme platforms. Successful primary politicians must cater to extreme ideologies; as time goes by and politicians emerge from such cultures, they come to actually believe those extreme ideologies.
Amid these developments, Republicans and Democrats are likely transforming into their fourth duopolistic iteration.
For the most part, they are no longer primarily the parties of North and South, of class, or even of race and place. Echoes of all these prior iterations remain, of course. However, increasingly the elephant and the donkey are the parties of Liberal and Conservative ideology.
The Republicans are the Conservative party and the Democrats are the Liberal party, each becoming ever more focused on its own ideological agenda.
For the Republicans, this trend first became a national issue with the rise of the Tea Party earlier in this century. Since then, far-right wingers have increasingly asserted themselves at every level of the G.O.P. For Democrats, the move to an ideological platform above all else has not been as dramatic or as fast. In part this is because the Occupy Movement (the Left's version of the Tea Party in some respects) was far less institutionally organized than the Tea Party, did not infiltrate a party to the same degree, and no longer even exists as such. Furthermore, since the Democrats have established themselves along the straddled center, their base now has to drag them back towards the center on economic issues, not away from it, and such a movement is inherently less dramatic than rambling away from the center, which is what the Republicans are doing.
Regardless, the reshaping of both parties into ideological institutions is becoming readily apparent as they muddle their way through the current presidential nomination process.
Each party's establishment is struggling to push a conventional political candidate to the fore. This is most evident with the Republicans, as conservative politicians like Marco Rubio, Ted Cruz, and Jeb Bush are confounded by amateurs who lap them in the polls, one an ultra-conservative (Ben Carson), the other a brash populist (Donald Trump).
The situation in the Democratic Party is not quite as extreme, yet the ongoing success of Bernie Sanders mirrors that of Carson and Trump on the Republican side, at least to some degree. Sanders is a lifetime politician, unlike Carson and Trump, but he is an outsider in other ways. Not only was Sanders never a member of the Democratic Party until his current run for its presidential nomination, but he is an avowed Socialist, far to the left of any presidential nominee the Democratic Party has ever produced.
The likeliest scenario, due to a variety of factors ranging from campaign funding to party-influenced voting procedures, is that Hillary Clinton will win the Democratic nomination and some established Republican politician like Marco Rubio will capture the G.O.P. nomination. Despite this, however, we may nevertheless be witnessing a substantial shift in the political duopoly. More and more, both major parties are using modern Conservative and Liberal ideology as the main filter for attracting voters, as older filters like geography and class begin to fail.
The Republican Party has stationed itself on the far right of virtually every conceivable issue. Meanwhile the Democratic establishment is struggling to maintain its center-right economic platform as the party's base demands a shift to the left.
As the parties continue their process of purging and ideological purification, we find that there are no more conservative Democrats, nor are there any liberal Republicans. Even moderate Republicans are an increasingly rare breed, and moderate Democrats may not be far behind.
A politician of the duopoly's 3rd iteration, such as Jeb Bush or Hillary Clinton, may very well be the next president. Nevertheless, ideologically extreme politicians are playing ever more important roles in local and state governments, as well as in Congress.
It seems that perhaps the 4th major iteration of the Democratic-Republican duopoly is upon us and the guiding factor is ideology.
Akim Reinhardt's website is ThePublicProfessor.com
perceptions: protest poaching
Nick Brandt. Elephant, 2008.
Why Miyazaki’s The Wind Rises is Not Morally Repugnant
by Bill Benzon
No, I don't think it is morally repugnant; quite the contrary. But it IS controversial and problematic, and that's what I want to deal with in this post. But I don't want to come at it directly. I want to ease into it.
As some of you may have gathered, I have been trained as an academic literary critic, and academic literary criticism forswore value judgments in the mid-1950s, though it surreptitiously reneged on the deal in the 1980s. In consequence, overt ethical criticism is a bit strange to me. I'm not sure how to do it. This post is thus something of a trial run.
I take my remit as an ethical critic from "Literature as Equipment for Living" by the literary critic Kenneth Burke. Using words and phrases from several definitions of the term "strategy" (in quotes in the following passage), he asserts that (p. 298):
... surely, the most highly alembicated and sophisticated work of art, arising in complex civilizations, could be considered as designed to organize and command the army of one’s thoughts and images, and to so organize them that one “imposes upon the enemy the time and place and conditions for fighting preferred by oneself.” One seeks to “direct the larger movements and operations” in one's campaign of living. One “maneuvers,” and the maneuvering is an “art.”
Given the subject matter of The Wind Rises, Burke's military metaphors are oddly apt, but also incidental. The question he would have us put to Miyazaki's film, then, might go something like this: for someone who is trying to make sense of the world, not as a mere object of thought, but as an arena in which they must act, what "equipment" does The Wind Rises afford them?
I note that it is one thing for the critic to answer the question for himself or herself. The more important question, however, is what equipment the film affords to others. But how can any one critic answer that? I take it, then, that ethical criticism must necessarily be an open-ended conversation with others. In this case, I will be "conversing" with Miyazaki himself and with Inkoo Kang, a widely published film critic.
What About the Pyramids?
The Wind Rises, as you may know, is a highly fictionalized account of the early life of Jiro Horikoshi, an aeronautical engineer best known for designing the Mitsubishi A6M Zero. At the beginning of World War II the Zero was one of the finest military aircraft in the world. The film is episodic, presenting disconnected incidents in Horikoshi's life from his childhood up through the end of World War II.
About halfway through the film, when Horikoshi is a young man employed by Mitsubishi, he and several other engineers are sent to Germany to learn about their aviation technology. While some are summoned back to Japan, and Horikoshi’s friend, Honjo, is to remain in Germany, Horikoshi is sent west, “to see the world”. If I knew something about the history of steam locomotives I might be able to identify the engine in this frame grab and thereby know where Horikoshi was in the following scene:
Regardless of his geographical locale, Miyazaki places us inside the train and Horikoshi is sitting in a compartment when he is joined by Gianni Caproni:
But Caproni isn’t really there. Just where and how he is, that’s not clear – this is an aspect of Miyazaki’s metaphysical legerdemain, which we’ll have to leave unexamined.
Caproni is an Italian aeronautical engineer who is something of a dreamtime mentor to Horikoshi. He has appeared twice before in the film, once when Horikoshi was a boy trying to figure out what he wanted to do when he grew up, and then later when, as a college student, Horikoshi was helping to put out fires caused by the 1923 Kanto Earthquake. This third time Caproni takes Horikoshi into the air, as he'd done in that first dream, and tours him around a bomber he is about to deliver to the government.
Horikoshi is amazed at the plane and remarks: "Japan could never build anything as grand and as beautiful as this. The country is too poor and backward." This sentiment is something of a motif in the film; we've heard it before and it recurs again. Given the devastation that Japan managed to wreak throughout the first half of the 20th century, not just in the mid-century war in the Pacific with America, this sentiment might seem self-indulgent. And it's certainly not true of 21st century Japan.
But the film is not about 21st century Japan. It's about Japan in the first half of the 20th century. In an interview with the Asahi Shimbun, Miyazaki remarks:
Including myself, a generation of Japanese men who grew up during a certain period have very complex feelings about World War II, and the Zero symbolizes our collective psyche. Japan went to war out of foolish arrogance, caused trouble throughout the entire East Asia, and ultimately brought destruction upon itself. […] But for all this humiliating history, the Zero represented one of the few things that we Japanese could be proud of. There were 322 Zero fighters at the start of the war. They were a truly formidable presence, and so were the pilots who flew them. […]
The majority of fanatical Zero fans in Japan today have a serious inferiority complex, which drives them to overcompensate for their lack of self-esteem by latching on to something they can be proud of. The last thing I want is for such people to zero in on Horikoshi's extraordinary genius and achievement as an outlet for their patriotism and inferiority complex. In making this film, I hope to have snatched Horikoshi back from those people.

It's not entirely clear to me just how Miyazaki thinks this film would "snatch" Horikoshi from the patriots, but let's bracket that question; we've got enough to do. I simply want to register his pride in the plane itself, the pilots, and what that represented.
Moreover, we must note that Miyazaki is living in a world in which a person's identity is bound up with the nation-state in which one lives. The world in which we, the readers of 3 Quarks Daily, live is like that as well. What does it mean to live in a poor and backward nation, one trying desperately to play technological catch-up, as Japan was in the late 19th century and during much of the 20th century? How does it feel to be a citizen of a second- or third-rate nation? What does it do to your sense of importance?
Correlatively, what does it mean to live in a wealthy and technologically sophisticated world hegemon? Yes, I know, The Wind Rises isn’t about the United States, but that’s where I’m from. If I’m going to use this film in making sense of my world, then that’s a question the film puts to me. And it’s not the only such question, not by a long shot.
Let’s return to the film itself. Caproni is touring Horikoshi around his plane and takes him outside where they walk along the top wing. Miyazaki poses them against a glorious sky:
Caproni: “Which would you choose, a world with pyramids or a world without?”
Horikoshi: “What do you mean?”
Caproni: “Humanity has always dreamt of flying, but the dream is cursed. My aircraft are destined to become tools for slaughter and destruction.”
Horikoshi: “I know.”
Caproni: “But still, I choose a world with pyramids in it. Which world will you choose?”
Horikoshi: “I just want to create beautiful airplanes.”
At this point two things were on my mind the first time I watched the film. In the first place, Horikoshi seems to be evading the question. Yes, he wants to make airplanes, but he doesn’t quite respond to Caproni’s question about a world with pyramids.
The second thing on my mind: What’s with the pyramids? Yes, a great accomplishment, a wonder of the ancient world. So?
But in subsequent reading about the film, I found this remark in an article by Inkoo Kang:
Jiro, as he’s referred to in the film, finds such beauty in airplanes and flight that he feverishly pursues the next level of killing machines for Mitsubishi, justifying his work by comparing his planes to the pyramids. The reference to the pharaohs might allude to the fact that Mitsubishi used Chinese and Korean slave labor to build Jiro’s Zero planes. But the character never considers whether the slaves who died making those pyramids might not believe the results were worth their lives.
At this point my thoughts run in three directions: 1) It's not Horikoshi who brings up the pyramids, as Kang says, but Caproni. 2) Nonetheless, she's right in associating the pyramids with slave labor, even if there is recent scholarly opinion to the contrary, and through that association making the connection with the slave labor used in building the Zeros. 3) Nonetheless, the pyramids ARE generally regarded as a remarkable human achievement.
Are we mistaken to believe that? I think not. But what are we to make of the human cost of their construction? How do we weigh that cost against the accomplishment? I’m not sure that we can. What we’ve got in the pyramids is an aweful conjunction, where I mean “awe” in the fullest sense of the word – and even if the pyramids weren’t built by slave labor, well, what about those who died constructing the Great Wall of China?
THAT’s the kind of world we live in. How is it possible to live in such a world?
Who Are They Going to Bomb?
Let’s consider another scene. It’s late in the film, near the end. Horikoshi and Honjo, his friend and colleague, are walking from the assembly building back to the office:
They’re talking about the bomber Honjo is in charge of. It needs a redesign so that it can be made lighter and the fuel tanks can be shielded from gunfire. But Honjo wasn’t allowed to undertake the redesign.
Honjo: “So without a redesign, Japan’s first advanced bomber only needs two or three hits and she’ll burn like a torch.”
Horikoshi: “And who are they going to bomb with it?”
Honjo: “China, Russia, Britain, the Netherlands, America.”
As they’re conversing we see those bombers in the air over a land that is bleeding smoke from the bombs that have landed:
Presumably they’ve been bombing China, otherwise, why would they be attacked by Chinese fighters (notice the wing markings below). And yes, these bombers do burn quickly.
Horikoshi: “Japan’ll blow up.”
Honjo: “We’re not arms merchants, we just wanna build good aircraft.”
Horikoshi: "That's right."
They acknowledge that their country is an aggressor nation and they fear that, in the end, Japan will collapse. But they just want to build good aircraft.
They’re compartmentalizing. They’re in denial. What choices did they have? I don’t know. I wasn’t there.
Now, at this point in the story Horikoshi has been in hiding from the secret police. Earlier he had been at a resort where he’d been friendly with an anti-Nazi German. Shortly after he returned to work the secret police showed up at the plant looking for him. Horikoshi's bosses lied and made arrangements to protect him until things blew over. One of them mentioned that several of his friends had been taken without any reasons being given.
If Horikoshi had refused to work on warplanes, what would have happened to him? There is an abstract moral sense in which he was a free agent and could have refused. Practically, though, he was living in a militaristic authoritarian state that was quite willing to coerce people to the national will, or to murder them. Some people have refused the state in such circumstances. And some haven’t.
I don't believe that anyone who hasn't lived that situation can really know what they would do in those circumstances. Miyazaki understands that:
“I think that both Jiro and Tatsuo Hori are greater men than I, so I can’t put myself beside them,” he says. “I’ve been very blessed to make animation for 50 years in peaceful times, while they lived in very volatile, violent times. But I think the peaceful time that we are living in is coming to an end.”
Miyazaki hasn't had to make the kinds of decisions those men had to make. But, alas, we're moving into a world where others have to make those kinds of decisions. In another interview Miyazaki remarks:
I am against the use of nuclear power. But when I saw the press conference with the engineers working on the [Fukushima] power plants, answering questions, I saw the same type of purity of their soul that I portrayed in Jiro Horikoshi in the film. The problems of our civilization are so difficult that we can’t only put an “X” in a circle and say “Yes” or “No.”
And those nuclear engineers are hardly the only technologists facing the kind of decision that Horikoshi faced. Such decisions are legion in the world, and engineers aren’t the only ones who have to make them.
Miyazaki’s last assertion should be familiar enough to post-structuralists: no (false) binaries. This is not a world for purists. No matter what you do, you’re going to get dirty. How, then, do you live in the world?
Horikoshi’s World in Mine
Back in the 1960s I faced the military draft. I was opposed to the war in Vietnam, but when the lottery was introduced in 1969, I drew number 12. I was certain to be drafted if I didn’t volunteer first. At the time I was a senior at The Johns Hopkins University and that made me very desirable to the military. I was solicited by the Marines and, I’m sure, by at least one other branch of the military. By volunteering I would have some choice in my duty assignment. But if I were drafted, I’d lose all choice.
As a practical matter, it was unlikely that the Army would assign me to combat duty, not with those four years of elite education. I might not even have gone to Vietnam. But I would have been serving in the military at a time it was fighting a war I believed to be immoral.
What was I to do? I’ve heard stories of guys who did drugs the day before their physical in hopes that they’d fail the physical without being found out. I don’t know whether that actually worked for anyone, though it might have. I knew of psychiatrists who would write a letter, for a fee, that had a good chance of failing me out. Neither of those alternatives appealed to me. I could flee to Canada; I knew a graduate student who did that. But I wasn’t going to.
I declared myself to be a conscientious objector. That meant I was a pacifist with a religious objection to killing anyone for any reason. I thought long and hard about that, but my parents supported me, and I had the backing of the Chaplain at Johns Hopkins. So I filled out the forms and sent them in to my draft board. If they turned me down, well then I’d have to decide whether to enter the service or to begin a legal process that I might not win.
Fortunately my draft board granted me C.O. status. I didn’t have to enter the military. But I did have to serve two years of civilian service of a kind that was in some way comparable to non-combatant military duty (such as medical corps). My draft board gave me a bit of trouble over that, but the Chaplain was able to get some Congressmen to write on my behalf and I ended up serving two years as an assistant to the Chaplain of Johns Hopkins University.
I had to put my life on hold for two years. I regard that as a relatively low cost I had to pay in order to honor my conscience. Back in World War II, conscientious objectors, mostly from conservative Christian denominations such as the Mennonites, were at greater risk of prison than I was. Would I have gone to prison as the cost of serving my conscience? I don’t know. I didn’t have to face that choice.
Had I been in Horikoshi’s situation – and, though I’m not an engineer, I do understand his dedication to engineering – what would I have done? I don’t know.
Beyond Inkoo Kang’s Objection
Here’s Inkoo Kang’s basic objection to The Wind Rises:
The Wind Rises is custom-made for postwar Japan, a nation that has yet to acknowledge, let alone apologize for, the brutality of its imperial past. Nearly 70 years after Emperor Hirohito’s surrender, the Japanese military and medical institutions’ greatest evils, like the orchestration of mass rape, the use of slave labor, and experimentation on live and conscious human beings, remain absent from school textbooks.
I think she overplays her hand (e.g. "custom-made"), but sure, if someone wants to read the film as ignoring Japan’s imperial past, they can do so.
Films are complex objects and it is easy to pick and choose incidents that are convenient for whatever case you want to make. But the right-wing nationalists Kang worries about have to ignore some things that are in the film in order to read it the way she claims the film can too easily be read. They have to ignore or misread the scene I discussed immediately above, and they have to ignore the fact that Horikoshi – in the film, I don’t know about real life – was under suspicion by the secret police. They have to ignore the fact that the film clearly shows Horikoshi, Honjo, and others going to Germany to acquire German technology.
Kang admits that some of these nationalists seem to have been unable to cut the film to their needs: “Indeed, some of his fellow citizens have already accused Miyazaki of being a ‘traitor’ and ‘anti-Japanese.’” That is, despite the fact that The Wind Rises is “custom-made” to facilitate their denial, it seems to have failed in that purpose. And it has failed despite the fact that it doesn’t come anywhere near to a full catalogue of Japanese atrocities during the war with America or in its broader imperial wars throughout the first half of the 20th century.
It seems to me that Kang is, in effect, reading the film from a transcendental point of view in which she has perfect knowledge of the world Miyazaki depicts but is also isolated from the decisions she is implicitly making about how those people should have lived their lives. Moreover she ignores much that is in the film, including Horikoshi’s marriage. To be sure, she remarks that it is a sexless one but she presents no evidence of that nor, I believe, is there any evidence to present.
On their wedding night, and in deference to her illness, Horikoshi is perfectly willing to sleep on his own futon, but that’s not what she wants. After asserting that “It feels like the room is spinning,” Naoko invites him into her bed. Is there any doubt about what was on her mind or about what happened when the lights went out? That is only one incident in their relationship. What role does that relationship play more generally in Horikoshi’s life?
What do we see when we take the WHOLE movie into account? I don’t know, but I’m working on it. In doing so we need to reflect, not only on the incidents Miyazaki offers us, but on the way that he offers them. He gives us ‘reality’ straight on. But he gives us dreams, reveries, and thoughts as well, all seamlessly arrayed before us. He even throws in a bit of film-making:
What’s that about? I don’t yet have a serious opinion. Maybe I’ll come up with one, maybe I won’t.
But it does seem to me that, not only is Miyazaki showing us a man making his way in a complex and messy world, a world which forces ugly choices on him, but he is also meditating on how it is that we order such a world in our minds. The film thus has a metaphysical character, and the nature of that metaphysics is by no means obvious. Above all the film asks us to see ourselves in Horikoshi’s world, and his world in ours. It even asks us to contemplate the role that art plays in bringing us to terms with the bottomless chaos of life.
 From “Literature as Equipment for Living”. The Philosophy of Literary Form. University of California Press: 1973, pp. 293-304. FWIW, this essay was originally published in the 30s.
 Hiroyuki Ota, “Hayao Miyazaki: Newest Ghibli film humanizes designer of fabled Zero”, Asahi Shimbun, August 4, 2013: http://ajw.asahi.com/article/cool_japan/movies/AJ201308040009
 Inkoo Kang. “The Trouble with The Wind Rises”. The Village Voice. December 11, 2013, URL: http://www.villagevoice.com/film/the-trouble-with-the-wind-rises-6440390
 That the pyramids were built by slaves is a widespread notion. But recent research suggests it might not be true: Jonathan Shaw. “Who Built the Pyramids?” Harvard Magazine. July-August 2003, URL: http://harvardmagazine.com/2003/07/who-built-the-pyramids-html
I have no expertise in this matter at all, and so cannot have a serious opinion about whether or not Egypt’s pyramids were built by slaves. Nor am I sure that it matters in thinking about Miyazaki’s film. What matters is what people believe to be the case. If they believe the pyramids were built by slaves – which is what I believed until I began working on the film – then that’s what will govern their thoughts about the film.
 Robbie Collin. “Hayao Miyazaki Interview: ‘I think the peaceful time that we are living in is coming to an end’”. The Telegraph. May 9, 2014, URL: http://www.telegraph.co.uk/culture/film/10816014/Hayao-Miyazaki-interview-I-think-the-peaceful-time-that-we-are-living-in-is-coming-to-an-end.html
 Dan Sarto. “The Hayao Miyazaki Interview”. Animation World. February 14, 2014, URL: http://www.awn.com/animationworld/the-hayao-miyazaki-interview
 I’ve written a number of posts about the film and will be writing some more. You can find them at this URL: http://new-savanna.blogspot.com/search/label/Wind%20Rises
Monday, November 09, 2015
Stop Reading Philosophy!
Conference season is drawing near for many academics. In our discipline, Philosophy, already the regional conferences are in full swing, and the American Philosophical Association will have its large Eastern Division meeting in early January. This has got us thinking about these conferences and the many papers that will be presented at them. The trouble, as we see it, is that the paper sessions are so often disappointing, and so frequently less fruitful than they otherwise might be.
It's not that the papers chosen for presentation are poorly written or intellectually inept. To the contrary, the content and even the style of the writing of the papers tends to be of very high quality. What makes conference sessions in Philosophy so frequently disappointing is that, for reasons we cannot fully grasp, the disciplinary norm still heavily favors reading one's paper to one's audience. That's right: At professional Philosophy conferences, it is most common for speakers to read to their audiences. Conference presentations tend to last 20-30 minutes; then there is often a second speaker who offers a critical comment on the first presenter's paper, and the commentary often runs for another 10-15 minutes. And sometimes there is yet a third recitation — the first presenter is given the opportunity to respond briefly to the commentator's critical remarks, and this, too, is often read from a prepared text. Then, with what time is left, the floor is open for questions from the audience. And even when a speaker elects to present her work using presentation technology, still the dominant tendency is to simply read from the projected slides.
Many Philosophy conferences run for two to three days. Imagine three full days of being read to in this way. Even under the best circumstances — with dynamic readers and exciting content — it's simply exhausting.
That philosophers should be in the habit of reading their papers out loud to each other at professional meetings strikes us as bizarre. Notice how the disciplinary norm differs when it comes to pedagogy. These days, it's almost unheard of for a professor of Philosophy to read her lectures to her students. It is far more common to speak extemporaneously from notes, which forces the instructor to devise fresh formulations and to think on her feet. After all, we are educators, and in our classes we often present to our students highly detailed and challenging ideas. And when teaching material in our own research areas, we commonly take ourselves to have no need for a prefabricated script. Moreover, as almost everyone in the profession will readily admit, the really exciting exchanges at Philosophy conferences occur in the informal setting of the conference reception, or, even more frequently, the hotel bar. Why, then, should we persist in reading to each other in the official conference sessions? Why not adopt a new practice of talking to the audience?
One clear reason presents itself immediately: Most professional Philosophy conferences are highly selective. There are many more paper submissions than program slots, and so conference organizers must choose on the basis of written papers submitted for blind review. Once a paper is selected for presentation, it makes sense to expect that the author will read it verbatim at the conference; one might even say that since the paper was selected for the conference, the paper should be presented. A related consideration follows fast on the heels of the first. As we noted above, conference presentations are frequently followed by a critical response, and, again, it makes sense that the main speaker should stick to her text in order to ensure that the respondent's remarks are apt. And this calls to mind a third reason why reading might be preferred: Conference schedules are tight, and the norm of reading from a text is generally thought to be a way to keep speakers within their allotted time.
But these logistical considerations are easily countered. Speakers whose work is accepted for presentation are of course required to present the very paper that had been submitted for consideration and was selected for the program. One can present a paper without reading it. And this means that a respondent can also present, but not read, her critical comments. Additionally, it is not uncommon for speakers who read to go over their allotted presentation time. This is precisely why most conference sessions feature a session chair whose main duty is to keep time! All that is required is a good-faith effort to stay precisely on topic without actually reading from a script, while attending carefully to the clock.
Our suspicion, however, is that reading is heavily favored for a different kind of reason than the merely logistical. Philosophy is rightly a discipline that calls for a high degree of verbal precision. In many contexts, even a minute semantic lapse – a misplaced "not" or "only," for example – can yield a momentous philosophical error. Reading one's paper verbatim is thought to be a guard against imprecision, and thus a way of protecting oneself from criticism and misunderstanding.
But the need for precision hardly carries the day for the reading norm. Presenters can talk through their research with a visual aid, such as a handout, and thereby avoid the problem of misstating a critical sentence. Similarly, those whose work relies heavily on quotations from others' texts can also use handouts to ensure that the quotations are accurately rendered. It seems, then, that the requisite degree of precision can be achieved without reading.
This leads us to another consideration that may drive the reading norm: Anxiety. There is a lot of pressure to perform well at professional conferences, and this pressure is naturally most acute for younger academics and those new to the profession. And so reading from a text is preferred to talking from notes as a way of avoiding the nightmare scenarios, such as freezing up, getting lost, forgetting one's point, and so on. Good talks are much better than even well-read papers, but disorganized and meandering talks are far worse than poorly-read papers.
We understand and sympathize with this point. So here is our proposal. Presenters, as a rule, should talk through their papers rather than read their texts. To do this effectively, they should practice their presentations prior to the meeting to ensure good time keeping; if necessary they may work from a handout that includes the technical bits and key quotations. But we also insist on a corresponding commitment on the part of the audience: Any presenter who talks rather than reads should be afforded a degree of charity fitting for oral communication. As a profession, we should allow one another some slack, even when talking amongst ourselves, when it comes to the verbal placement of "nots" and "onlys." And – who knows? – maybe this slight change in the way we talk to each other in professional contexts about our central research may also help us to be better at communicating complex philosophical ideas to those outside of our profession.
Robert Rauschenberg and Susan Weil. Female Figure, 1950.
Monoprint on blueprint paper.
This original print can be seen in the current show at Boston's ICA: Leap Before You Look: Black Mountain College 1933-1957
"People for them were just sand, the fertilizer of history."
~ Chernobyl interviewee VM Ivanov
For a few years, if you were on Twitter and you used the word "inconceivable" in a tweet, you would almost immediately receive an odd, unsolicited response. Hailing from the account of someone named @iaminigomontoya, it would announce "You keep using that word. I do not think it means what you think it means." Whether you were just musing to the world in general, or engaging in the vague dissatisfaction of what passes for conversation on Twitter, this Inigo Montoya fellow would be summoned, like some digital djinn, merely by invoking this one word.
Now, those of us who possessed the correct slice of pop culture knowledge immediately recognized Inigo Montoya as one of the characters of the film "The Princess Bride". Splendidly played by Mandy Patinkin, Montoya was a swashbuckling Spaniard, an expert swordsman and a drunk. Allied to the criminal mastermind Vizzini, played by Wallace Shawn, Montoya had to listen to Vizzini mumble "inconceivable" every time events in the film turned against him. Montoya was eventually exasperated enough to respond with the above phrase. Like many other quotes from the 1987 film, it is a bit of a staple, and has since been promoted to the hallowed status of meme for the Internet age.
Of course, it's fairly obvious that no human being could be so vigilant (let alone interested) in monitoring Twitter for every instance of "inconceivable" as it arises. What we have here is a bot: a few lines of code that sifts through some subset of Twitter messages, on the lookout for some pattern or other. Once the word is picked up, @iaminigomontoya does its thing. Now, and through absolutely no fault of their own, there will always be a substantial number of people not in on the joke. These unfortunates, assuming that they have just been trolled by some unreasonable fellow human being, will engage further, such as the guy who responded "Do you always begin conversations this way?"
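For the curious, here is roughly what such a bot amounts to. This is a minimal sketch only, assuming the third-party tweepy library and the streaming interface Twitter offered at the time (since retired); the credentials, the matching rule, and the exact wording are placeholders of mine, not the real bot's code.

# A minimal sketch of an @iaminigomontoya-style reply bot,
# assuming the tweepy library and Twitter's old streaming API.
import tweepy

REPLY = ("You keep using that word. "
         "I do not think it means what you think it means.")

class InconceivableListener(tweepy.StreamListener):
    def on_status(self, status):
        # Skip retweets and the bot's own tweets, to avoid reply loops.
        if hasattr(status, "retweeted_status"):
            return
        if status.user.screen_name == "iaminigomontoya":
            return
        if "inconceivable" in status.text.lower():
            api.update_status(
                status="@%s %s" % (status.user.screen_name, REPLY),
                in_reply_to_status_id=status.id)

# Placeholder credentials; real keys come from Twitter's developer console.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

stream = tweepy.Stream(auth=api.auth, listener=InconceivableListener())
stream.filter(track=["inconceivable"])  # watch the public stream for the word

That, more or less, is the whole digital djinn: a pattern match and a canned string, left running on someone's server.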
So here we have an interesting example of contemporary digital life. In the (fairly) transparent world of Twitter, we can witness people talking to software in the belief that it is in fact other people, while the more informed among us already understand that this is not the case. Ironically, it is only thanks to the lumpy and arbitrary distribution of pop culture knowledge that we may at all have a chance to tell the difference, at least without finding ourselves involuntarily engaged in a somewhat embarrassing mini-Turing Test. But these days, we pick up our street smarts where we can.
Except we rarely pay attention to the lumpy, arbitrary nature of technology, and nowhere less so than in its latest, apotheotic form: social media. This idea of technology as the great leveler is perhaps the principal myth that we are relentlessly fed, as if we were geese on a foie gras farm. And like those geese, we never seem to get tired of the feeding. Nor is there any shortage of those queueing up to do the feeding. Just this weekend I attended a fairly abysmal conference sponsored by the Guggenheim Museum, and had to listen to what I thought were otherwise discerning minds discuss how, for example, the ability of people to participate in a real-time discussion on Twitter about the Ferguson riots made true the claim that it was no longer possible to be ‘outside' of events – or rather, that the only people who were on the ‘outside' were those who were on the receiving end of the obsolete ‘broadcast media', i.e. television and radio.
This idea – that people who are passive receivers of information constitute a lesser class of citizenry than those who seek to ‘actively participate' in media – is not just problematic. In fact, let's just call it out for what it is: a barely disguised elitism. Consider the hurdles that you have to overcome to access this allegedly level landscape. You have to know what the Internet is and be able to access it; you have to know what Twitter is and be willing to use it, which is itself no mean feat; and you have to care enough about all of these things, as well as the specific phenomenon of the Ferguson riots, in order to ‘participate' in it. Only at that point are you ready to suffer the slings and arrows of your fellow discussants. Thus the resulting population that jumps through all these hoops is a deeply self-selected one. Not only are the necessary cultural and technological proficiencies required to even get to this conversation substantial, but they are inevitably accompanied by – if not simply borne out of – all the attendant structural inequalities that constitute the context of society in the first place. How many people who are subject to discriminatory policing are not online, simply because they are poor, or uneducated, or most likely, just unconnected? In order to reach a putative place of ‘no outside', one must have all the tacit and consequential social, financial and cultural resources to be able to navigate quite a lot of layers of ‘inside'.
On the other hand, those belonging to the latter group of ‘passive consumers' may be more varied than one suspects. To stay with the example of Ferguson, if I watched the riots on cable news, but did so with friends and family, or with strangers in a bar or an airport lounge, and then had a meaningful discussion, well, it's almost as if this didn't happen, since my participation can't be measured in terms of tweets or likes or what-have-yous. It's just conversation, or private contemplation, as has been the case for quite some time. But if it can't be data-mined then of what use is it? At the same time, it bears mentioning that the ‘conversation' that happens on Twitter or anywhere else in social media is by no means guaranteed to be meaningful, simply because that's where it happens. The technorati merely encourage this sort of magical thinking in order to nudge us into a form of participation that occurs much more on their platforms' terms than we might think. When was the last time you went on line seeking to have your opinion changed by someone, whether it was a friend or family member – let alone a complete stranger?
Why is this the case? There is the old (at least by Internet standards) chestnut that, in real life, no one is as happy as they pretend to be on Facebook, nor as angry as they pretend to be on Twitter. So when self-selecting populations opt into participating on a specific platform, the subtle but influential effects on the participants' behavior result in a discourse that is deeply mediated. This occurs not only as a result of the platform itself (ie, the way graphic and textual elements are constructed and arranged on screen, and how users are allowed and incentivized to participate), but also thanks to how people expect their performance to be received by others, and who those others are.
We attempt to shape our online presences to be reflections of who we think we are in the first place. To think that this will suddenly give rise to some unprecedented sort of diversity – that we will step outside of ourselves to embrace new and uncomfortable truths – is naïve. I am not talking about pleasure-seeking or hedonistic pursuits (although, given the ongoing way GamerGate has problematized the seemingly innocuous pastime of video gaming, it's increasingly difficult to say that social media is capable of treating anything as a mere hobby). Rather, I mean to counter the Pollyanna-ish stance held by many techno-pundits that somehow the arc of social media bends towards justice. It may, or it may not. Perhaps the safest thing that can be said is that it will only make us more of who we are already, for better and for worse.
This is what I mean when I claim that the qualities and consequences of technology are lumpy and arbitrary. In reality, the idea that the world is flat has only ever held true for those people with the financial and social resources to make it so. Theirs is a frictionless world. The rest of us must make do with a pale imitation of this: the world seems flat to us only because we successfully ignore vast swathes of it, and social media is an excellent tool for creating the illusion that we are not ignoring anything really important, and that in fact we are paying more attention than ever before. Who can point fingers and say you're not concerned about social injustice when you've clearly been expressing your outrage by liking, sharing and hashtagging all over the damn place? Which is to say, to your friends and friends of friends and perhaps a few other random passers-by who, by definition, must be on the same platform as you. It is this lumpiness and arbitrariness that is really worth our attention.
On the face of it, an innocuous Twitter bot like @iaminigomontoya doesn't seem to have anything in common with the grand hypothesis that social media, as it is currently constituted, may not be doing us any great favors. But it will indeed take us to the next stage of the argument. I claimed above that social media is the apotheotic form of technology. Aside from being awfully pretentious, this claim is almost certainly already false, in the sense that social media is being augmented and perhaps gradually supplanted by the emergence of artificial intelligence; agents of varying autonomy, veracity and interactivity; and robots of many stripes. But since every stage of technological evolution builds upon already existing infrastructure, social media is where much of this change is manifesting itself.
More importantly, this is happening not just because all this stuff is new and clever, but because we want to talk to anything we possibly can, and we fervently desire for those things to talk back to us. This has already been amply proven by our proclivities to talk to dogs, cats and houseplants. But talking to technology is going to bring matters to a completely different level, because what is unique to technology is its ability to create massive, long-lived feedback loops that are initiated and sustained by our talk.
Here are a few examples of the things that we are building that are designed to talk to us. In addition to @iaminigomontoya, there are many such bots on Twitter, which, due to its restrictive 140-character format, is fertile ground for such experimentation. There are bots that, like our friend, will blithely reply to tweets or insert themselves into conversations, but in order to correct your grammatical and homophonic misdemeanors ("your" vs "you're"; "sneak peek" vs "sneak peak"). There are more aspirational creations as well. One of my favorites is @pentametron, which appropriates tweets that, usually quite unintentionally, happen to have been written in perfect iambic pentameter. @pentametron goes the extra mile, though, and re-assembles the tweets into Shakespearean sonnet form, the results of which can be savored here.
Of course, it's reasonable to argue that these bots are really no different than a wind-up toy. Even if you don't know precisely how it works, you know how to set it in motion, and once you've done so you get your hit of childlike wonder and then you put it down and go on with the rest of your day. But however simple, charming and/or irritating they may be on their own, when taken as a phenomenon, these bots point to a shift that has already been under way for some time. People are, to one degree or another, not just content to interact with machines in a purposive way, but they are expecting to do so, and their expectations are increasingly open-ended. Sometimes they know the terms of the conversation – that is, that they are conversing with a constructed or artificial subject. And sometimes they do not. The truth is, software doesn't even have to pretend to be human for people to seek out human-like interactions with it. It turns out that willing suspension of disbelief is not just a literary device. As Coleridge defined it, "human interest and a semblance of truth" are all that is required to bring it about.
So what happens when we take our credulous nature and jam it into the lumpy and arbitrary distribution and consequences of technology in general, and social media in particular? In next month's post, I will propose that thinking about the intersection of these two tendencies can give us the opportunity to better envision scenarios of likely technological and social futures. It helps us to avoid the sensationalistic fallacy of a Terminator- or Matrix-style dystopia, where strong AIs destroy our way of life, if not the entire planet. Rather, it is about coming to terms with what is already among us, and of how we are already deeply entangled with it. It may even suggest how we might best adapt ourselves to a world that is perhaps already aswarm with artificial subjects that are inscrutable if not nearly invisible, so accustomed have we become to their presence.
"Inconceivable!" I hear you protest. Of course, Inigo Montoya is all too happy to ask if you know what that word really means.
Frost Falls (霜降)
by Leanne Ogasawara
The history of the Japanese calendar stretches very far back into Japanese history- so far back, indeed, that we find ourselves in ancient China.
As was true of many facets of the ancient Chinese civilization--from its writing system to ceramics and medicine-- the Chinese calendar was remarkably advanced and far superior to anything held by its neighbors of the time.
An ancient lunar-solar calendar, it had months based on the phases of the moon (each month began with the new moon), and the seasons were kept track of by observing the movement of the sun against 24 solar points, called “the twenty-four sekki” (24節気). Using these 24 sekki as a meteorological guide, important seasonal marking points-- such as the solstices and equinoxes-- could be accurately understood so that additional months could thereby be inserted when necessary.
It was high technology in the ancient world and the calendar was adopted throughout East Asia--from Japan and Korea to tropical Vietnam, the same calendar was utilized so that the time of "Frost Falling" or "Big Snow" was observed, whether the people in those lands had ever seen snow or not! In Japan, in particular, the calendar has infused the seasons with poetry and shared meaning. I don't know if it's because of the poetry inherent in the names themselves or the pageantry of images and festivals that are embedded in the calendar but it is a way of looking at the world that is deeply affecting.
Sure, anyone can step outside and appreciate the great splendor of nature; anyone with eyes to see and a heart to feel can be moved by “scattering flowers and fallen leaves” (飛花落葉) and yet.... as with all festivals, somehow the most moving aspect of things is in the shared details. I always loved the way we lived according to the calendar out in the country in Japan. It is something I miss terribly.
This was really brought to mind for me yesterday when a friend, Patrick Donnelly, posted the video below on Facebook of a deer stepping up toward the altar of a Catholic church in France. Other friends said it was a church in Canada, which seems more likely, but the video that he linked to was labelled "France, the Church of St. Eustace."
My friend quickly pointed out what Wikipedia has to say about Saint Eustace:
"According to legend, prior to his conversion to Christianity, Eustace was a Roman general named Placidus, who served the emperor Trajan. While hunting a stag in Tivoli near Rome, Placidus saw a vision of a crucifix lodged between the stag's antlers. He was immediately converted, had himself and his family baptized, and changed his name to Eustace (Greek: Ευστάθιος (Eustáthios), 'well stable,' or Ευστάχιος (Eustáchios), 'fruitful/rich grain').
It is a wonderful image, isn't it? Not unlike Emperor Constantine's dream that would change the world, the story of Saint Eustace was part of the Golden Legend, and its scenes, especially of Eustace kneeling before the stag, would become a popular subject of medieval religious art. Somehow it seems so wonderful to have this other-worldly stag wandering into the cathedral which derives its name from a saint who himself knelt before a stag.
Patrick's friend says it best:
I love when life does this. It feels like magic, or being inside a poem. Myths arranged by circumstance, to fall together in a meaningful way.
Myths can be arranged by circumstance like a winning hand of cards--coming together in meaningful ways--and for me, this all was especially enchanting since it happened to have occurred during the time of Frost Falls (霜降), the time traditionally associated with deer according to the Japanese calendar.
The time when Frost Begins to Fall (approximately October 24th to November 8th): Late October to early November is the time of year known for dewdrops turning to frost, and thereby announcing the approach of winter:
Awake at dawn
for this autumn parting
-dewdrops on my sleeves-
Soon it will be frost falls
and then winter will be here
The above poem is taken from the classic anthology the Shinkokin-shu and was written on the first day of winter. In the poem, we find a poetess melancholy at the departure of autumn, as well as that of her lover whom she is seeing off on his way home. Anticipating the coming cold of winter- and of nights alone without him- she finds dewdrops (sad tears) on her sleeves.
The imagery of the dewdrops of mid-Autumn, which, along with the moon, basically dominates the prior season, is continued with the frost through the Beginning of Winter. Frost was seen as another pleasing metaphor for mutability or evanescence, often used interchangeably with dewdrops; and the frost of early winter not only gives its name to the period from approximately the 24th of October until November 8th, but in fact one of the traditional names in Japan for the Eleventh Month has been the Month of Frost (霜月).
The Month of Frost is known for the splendor of the changing leaves. This time of year is so beautiful in Japan. The ancients wondered what could have caused such moving beauty. The autumn leaves are known as momiji, and the etymology of the word can be traced back to the belief that the autumn colors were somehow “rubbed out” (momidasu 揉み出す) of the leaves at that time of year. Perhaps, they wondered, it was the gem-like dewdrops or the icy frost that effected this transformation? Or maybe it was the long autumn rains that gradually seeped into the trees at night, staining the leaves all those vivid colors...
Another explanation-- my personal preference-- was that it wasn’t the dew at all which was to blame, but rather the tears of passing geese:
Might it not be that
the dewdrops forming on Autumn nights
are only just that- dewdrops
And that it’s the tears of passing geese
which stain the fields red
- Mibu no Tadamine
In addition to frost and dew, the autumn leaves are also associated with deer, which are often pictured playfully in paintings and on ceramics, posed among the autumn trees. This association comes from the following very famous Kokinshu poem:
fallen leaves under foot
and the sound, the sound
of a calling deer-
How sad is Autumn
Deer, as well as boar and antelope, are native game animals of Japan, and long ago the meat of all three creatures was referred to collectively as “shishi.” As an important source of meat and therefore life, as well as for their destructive power over unguarded fields and crops, deer were venerated in a way analogous to the pig in ancient China as a respected creature which could be very useful to humans. Indeed, not only was the meat consumed, but its bones, coat, ears and antlers were all utilized by ancient people. It was written in the Kojiki (712) that deer served as attendants of certain gods, transporting them across the ocean much as Garuda is thought to function as the mount of the God Vishnu in Hindu mythology. The bones of their shoulder blades were used in divination whereby the cracks or marks left on the bones when burnt were “read,” allowing the ancients to know their future good or bad fortune. Also, influenced by Chinese thought, along with the crane and turtle for example, the deer has long been associated with longevity and good luck, and their ground antler was an important- albeit highly expensive- medicine used in the pharmacopoeias of East Asia in the treatment of all sorts of ailments.
From its sacred or totem-like beginnings, by the early Heian Period deer had become one of the principal symbols of Autumn:
In a mountain village
as Autumn deepens
so too does my loneliness-
and opening my eyes,
is the sad sound of a calling deer
From early Autumn well into late October is the deer mating season, and so their constant calling could be heard by the Heian Period poets echoing in the hills around the ancient capital evoking a feeling of cold loneliness already associated with this time of year- perhaps even more heart rending than the crying insects of mid-autumn.
And like the crying of the insects, the calling of deer was thought of as an apt metaphor for love-less, lonely Autumn nights: an audible sigh of loneliness coming from the hills.....
Top image: Pisanello's Vision of St. Eustace
Monday, November 02, 2015
The Oldest Evidence of Life on Earth
by Paul Braterman
It looks as if life on Earth just got older, and probably easier. Tiny scraps of carbon have been found inside 4.1 billion year old zircons, and examination shows that this carbon is most probably the result of biological activity. This beats the previous age record by 300 million years, and brings the known age of life on Earth that much closer to the age of Earth itself. The implication is that life can originate fairly quickly (on the geological timescale) when the conditions are right, increasing the probability that it will have originated many times at different places in our Universe.
The Solar System, it is now thought, formed when the shockwave from a nearby supernova explosion triggered a local increase in density in the interstellar gas cloud. This cloud was roughly three quarters hydrogen and one quarter helium, all left over from the Big Bang some 9 billion years earlier. It had already been seeded with heavier elements produced by red giant stars, to which was now added debris from the supernova, including both long-lived and short-lived radioactive elements. Once the cloud had achieved a high enough local density, it was bound to fall inwards under its own gravity, heating up as it did so. The central region of the cloud would eventually become hot enough and dense enough to allow the fusion of hydrogen to helium. A star was born.
The heavy elements (and in this context “heavy” means anything heavier than hydrogen and helium) in the dust cloud surrounding the nascent Sun gave rise to the rocky cores hidden within the outer giants Jupiter, Saturn, Neptune and Uranus in the outer reaches of the Solar System, and to the rocky inner planets, Mercury, Venus and Mars, and, of course, to Earth and everything upon it. We are stardust.
The asteroids are made out of material that was never able to come together to form a planet, because of the competing gravitational pull of Jupiter. Asteroids are continually bumping into each other, scattering fragments, and some of these fragments fall to Earth as meteorites. The Hubble Telescope has given us images of star and planet formation in progress. Such is our modern creation myth, magnificent in scale, and rooted in reality.
The oldest solid objects in the Solar System are calcium-aluminium rich grains, the most refractory of all the materials to condense out of the gas cloud. These are now known from a refined form of uranium-lead dating to have formed as much as 4,568.2 million years ago, give or take a very few hundred thousand years either way, and that is now the accepted best estimate for the Solar System’s age. A remarkable feat, to fix this to within around 1% of 1%. As time went by, and the outer regions of the gas cloud radiated away their energy, more materials condensed out, and the grains grew and stuck together by contact and eventually by their own gravity. Thus we went from grains to pebbles to larger objects to planetesimals and eventually to the planets as we know them. The final stages were marked by increasingly violent collisions, culminating in the collision between the proto-Earth and a Mars-size object that gave rise to the present Earth-Moon system, and rounded off by what has been called the Late Heavy Bombardment.
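That precision claim is easy to check. Taking the quoted "very few hundred thousand years" as roughly 0.5 million years, purely for illustration:

\[ \frac{0.5 \ \text{Myr}}{4{,}568 \ \text{Myr}} \approx 1.1 \times 10^{-4} \approx 0.01\%, \]

which is indeed about one percent of one percent.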
The energy of the collisions will have caused melting, even before the formation of full-scale planetesimals, and the separation of the molten bodies into a metal-rich (mainly iron) core, and a less dense, oxygen-rich outer mantle. It is Earth’s iron core that is responsible for its magnetic field, and this field in turn shields us from the constant bombardment of charged particles emanating from the Sun, which would otherwise have stripped away our atmosphere. Elements like platinum and gold (so-called siderophiles, or iron-lovers) concentrated in the core, which is one reason why they are so rare at the surface, while elements such as oxygen, calcium, magnesium, aluminium and silicon are lithophiles, or rock-lovers, and concentrate in the mantle. Fortunately, the highest melting point rocks, which are thus the first to solidify, are less dense than average, which is why Earth has a solid crust floating on the surface of the mantle. The precious metals are all much stronger siderophiles than iron itself, which forms a strong bond with oxygen and is one of the most common elements in the crust and mantle, as well as being the main constituent of the core. The Late Heavy Bombardment explains the craters on Mercury, the Moon, and Mars. No such craters survive on Earth, but that is because weathering and plate tectonics have completely reworked the surface.
We can learn a lot about the history of these processes from the distribution of the different elements, and even of individual isotopes, especially radioactive isotopes and their decay products. For example, hafnium-182 is radioactive, with a half life of slightly under 9 million years, decaying to tungsten-182. Hafnium is a lithophile, and tungsten a siderophile. So if core formation is slow on this timescale, most of the hafnium-182 from the supernova debris will have had time to decay to tungsten-182, which will vanish into the core. But if core formation is relatively fast, the hafnium-182 will remain in the rocky phase, where the tungsten-182 derived from it will end up stranded.
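To put rough numbers on "fast" and "slow" (the figures here are mine, for illustration, using only the 9-million-year half-life quoted above), the fraction of hafnium-182 surviving after a time t is

\[ \frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}}, \qquad t_{1/2} \approx 9 \ \text{million years}, \]

so after about 45 million years (five half-lives) only some 3% remains, and after 90 million years essentially none. A core that forms within the first few tens of millions of years therefore locks the eventual tungsten-182 out of itself, leaving a measurable excess stranded in the rocky mantle; a core that forms much later does not.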
We can also sometimes learn about how a material was formed by looking at the ratio of different non-radioactive isotopes. Almost all elements occur as more than one isotope, with the same number of protons and electrons, but different numbers of neutrons. You may well have been told at school that isotopes, despite having different masses, have identical chemistry, but this is not quite true. Generally speaking, because of quantum mechanical effects, different isotopes have very slightly different chemistries, and small deviations in their relative abundance provide clues to a sample’s history.
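Note 3 below expands on the reason. In the simplest harmonic-oscillator picture, which is all that is assumed here, a vibrating bond retains an irreducible energy

\[ E_0 = \tfrac{1}{2} h \nu, \qquad \nu = \frac{1}{2\pi}\sqrt{\frac{k}{\mu}}, \]

where k is the stiffness of the bond and \( \mu \) the reduced mass of the vibrating atoms. A lighter isotope means a smaller \( \mu \), hence a higher frequency and a higher zero-point energy: the bond sits slightly closer to the top of its energy well and is very slightly easier to break, which is why the lighter isotope tends to be marginally more reactive.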
Using many detailed arguments of this kind, we come up with the following sequence:
- Beginning of solar system, 4,568 million years ago (see above)
- Collisions between planetary embryos, and partial melting of resulting meteorites, within a very few million years of that beginning
- Accretion of Earth under way within 10 million years of beginning
- Earth-Moon system formed, between 30 and 100 million years from the beginning. Formation of the Earth’s liquid core would be complete at this stage, although the formation of the solid inner core is remarkably recent by comparison (around 1,000 to 1,500 million years ago)
- Oldest rocks on the Moon, 4,460 million years old (dating the Moon’s oldest crust to within a very few tens of millions of years of its formation)
- Oldest rocks on Earth, 3,960 million years old, with evidence for an older (4,000 to 4,200 million year old) component
- Late Heavy Bombardment, around 3,900 million years ago, as estimated by dating craters on the Moon.
It was at one time assumed that the Late Heavy Bombardment would have heated the Earth’s surface sufficiently to destroy any life forms in existence at that time. But careful estimates of the total heating effect show that this is not the case, even at the surface, while bacteria obtaining their energy from reactions involving minerals have been found 2.8 kilometers below the surface.
The Jack Hills of Western Australia are of enormous interest to geologists. The rocks that they are made of are thought to have been originally laid down some 3,600 million years ago, as deposits from river deltas, although they have undergone many episodes of transformation since then. They are of special interest because the delta deposits contained zircons that were already, at that time, hundreds of millions of years old; tough grains of impure zirconium silicate from the already ancient mountains, eroded out by the streams that fed the deltas, and transported and buried there unchanged. These have inspired a truly heroic effort from geologists; one paper, in its title, refers to “The first 100,000 grains.” Two separate research groups have reported that the oldest zircons found there, dating back to 4,400 million and 4,300 million years ago, show evidence for the presence on the planet of liquid water, which is generally regarded as a necessary condition for the emergence of life.
Necessary, but not sufficient.
We turn now to the oldest evidence for life on earth.
Hard fossils of complex organisms first appear in abundance around 545 million years ago, although we can stretch this back to 600 million years if we include fossilised traces, such as burrows (here, Ch. 7). If we want to go back much further, we will be relying on evidence from single-celled organisms, which is always less clear-cut and more open to alternative explanations. However, such organisms can form mats, with a characteristic texture that develops from horizontal layers of dead organisms, with trapped soil particles between them. This leads to the development of what are known as stromatolites, domed multi-layered structures that persist to the present day. Modern stromatolites, at least, are quite complex communities of cyanobacteria, single celled organisms capable of photosynthesis, with different kinds of bacteria, using different wavelengths of light, found at successive levels. Stromatolites are found throughout the fossil record; they were at their most abundant some 1,500 million years ago, but are now found mainly in highly saline lagoons, where grazing creatures, which disturb their formation, cannot survive. The oldest fossil stromatolites are found embedded in 3,430 million year old chert (silica rock), and if we make the reasonable assumption that continuity of form represents continuity of kind of organism, it follows that diverse communities of photosynthesising bacteria were already in existence at that time.
There are claims of microfossils of chains of bacteria, going back to 3,600 million years ago, but these are little more than dark smudges embedded in chert, and their interpretation remains controversial. Moreover, rocks of this age or older have all undergone considerable change, having been subjected at various times to great pressure and high temperatures. To go back further, we have to resort to more indirect kinds of evidence.
Carbon occurs on Earth as a mixture of two main isotopes, carbon-12 (99%) and carbon-13 (1%). There are also traces of carbon-14, used in radiocarbon dating, but this has a half life of only some 5700 years and apart from contamination is effectively absent from materials over a million years old. It has been known since 1939 that the isotopic composition of carbon in plants is different from that found in the carbon dioxide from which it is derived; plant carbon, and materials derived from it, are “light”, meaning that they have a measurably smaller proportion of carbon-13. This is as expected from quantum mechanics, which predicts that carbon dioxide containing carbon-12 will be slightly more chemically reactive than that containing carbon-13. The excess of carbon-12 is, of course, inherited by all materials derived from plants, such as animals (which eat them), and fossil fuels. Indeed, one of the many ways in which we know that the recent unprecedented rapid increase in atmospheric carbon dioxide is the result of our burning fossil fuels, is the increasing proportion of carbon-12 in atmospheric carbon dioxide over time.
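Geochemists express this "lightness" in the standard delta notation (the typical values quoted here are common textbook figures, not results from the work discussed below):

\[ \delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}} - 1 \right) \times 1000, \]

expressed in parts per thousand. Carbon fixed by photosynthesis typically comes out somewhere around -20 to -30 on this scale, while carbon from inorganic carbonate sits near zero, and it is a gap of roughly that size that the measurements described below are looking for.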
In 1995, I had the privilege of visiting the laboratories of Gustaf Arrhenius at the Scripps Institution of Oceanography, La Jolla. There I met a Ph.D. student, Steve Mojzsis, who has gone on to pursue a distinguished career in isotope geochemistry. Steve is now Professor at the University of Colorado at Boulder, and his research group was responsible for several of the findings described above. As his Ph.D. problem, Steve was examining 3,800 million year old sediments from Greenland, which were known to contain carbon slightly, but perhaps not conclusively, lighter than expected. Within these rocks, he found grains of hydroxyapatite, which is a very tough form of calcium phosphate, essentially the same as the material your teeth are made of. And within these grains were tiny granules of carbon.
What happened next was made possible by advances in scientific instrumentation, and specifically in the development of what is known as ion microprobe mass spectrometry (more fully, Sensitive High Resolution Ion Microprobe or SHRIMP). This is just what the name implies. A beam of charged particles (ions) is accelerated and focused, and used to drill away at an area of the sample a hair’s breadth across. The fragments blasted out by this process are then fed into a mass spectrometer, which sorts out the different isotopes. When the carbon granules were examined in this way, they were found to be within the range expected for organic material arising by photosynthesis. So these granules were biological in origin, and the earlier inconclusive results were the result of averaging out organic and inorganic material.
Science does not provide proofs, at least not in the sense that mathematics provides proofs, and there are alternative non-biological routes to light carbon. But these involve reactive metals that would not have been present in the crust after core formation, and in any case such processes would not account for the segregation of the light carbon within granules. And so, while scientific conclusions are always in principle subject to being overturned by new evidence, my own view is that it would be unreasonable to deny this evidence for life 3,800 million years before the present.
Steve’s record stood for 20 years, but has just been spectacularly broken, as a result of the zircon screening that I mentioned earlier. Some of the oldest zircons contain flecks of carbon, visible under the microscope. One of these was selected for special examination, cut open, and the carbon examined. Radiometric dating of the freshly cut zircon surface gave a date of 4,100 million years old, while the carbon itself turned out to be light, in the range expected for what had once been living material, with the carbon having been derived from carbon dioxide by photosynthesis. Thus we can now say, with a surprising degree of confidence, that there was life on Earth, and indeed life capable of carrying out the complicated sequence of reactions necessary for photosynthesis, 4,100 million years ago.
So what does this tell us? Are we all descended from the life forms in existence at that time? Almost certainly yes. The alternative would be a far more complicated story, with life having arisen more than once. It follows that the life from which we are all descended was present on Earth within 350 million years of the formation of the Earth-Moon system, and within an even shorter time after Earth had developed a solid crust, cool enough for liquid water (a prerequisite of our form of life).
In 1981, Francis Crick wrote that “we can only say that we cannot decide whether the origin of life on earth was an extremely unlikely event or almost a certainty - or any possibility in between these two extremes.” Now, at last, we can go beyond this. If the origin of life was unlikely, then life originating so early would be even more unlikely. So while it may be putting it too strongly to say that its emergence was “almost a certainty”, we can say that it was certainly a reasonable possibility. And if it was a reasonable possibility here on Earth, then it must equally be a reasonable possibility on all the Earth-like planets we have discovered, whose number grows almost daily.
To quote Steve’s comment on these discoveries, “This is what transformative science is all about. If life is responsible for these signatures, it arrives fast and early.”
1] Technically speaking, lead-lead dating. This depends on the ratio of lead-206 (formed by decay of uranium-238) to lead-207 (formed from uranium-235), with non-radiogenic lead-204 as a measure of lead from other sources. The calculation depends on the known difference in half life between the parent uranium isotopes. We know that these half lives must have been constant, since they are not free variables but consequences of the more fundamental constants of nature, and had these been different then the meteorites would not have formed as they did in the first place.
2] One problem with this scenario is the extreme similarity in composition between Earth and Moon rocks, difficult to explain if they are derived from two separate parent bodies. See, however, here.
3] As a consequence of the uncertainty principle, all materials store an unremovable amount of what is called “zero point vibrational energy”, and the amount of this energy is proportional to vibrational frequency. Lighter isotopes are therefore associated with higher zero point energies, leading in general to slightly higher chemical reactivity.
4] The amount of the minor isotope oxygen-18 present in these samples is different from the bulk of the mantle from which they crystallised, and indicative of mantle formed from the remelting of crust that had exchanged oxygen-18 with liquid water.
5] There is a steady trickle of claims from Young Earth creationists to have detected carbon-14 in dinosaur bones, diamonds, and coal. The first two of these are explained by contamination, while the more interesting case of coal is associated with nuclear reactions involving other radioactive atoms trapped within the material.
General references hyperlinked in the usual way. Selected more technical references (some behind paywall but all with open abstracts): Potentially biogenic carbon preserved in a 4.1 Byo Zircon here; Solar system age here; Earth’s accretion here, here, and here; Moon formation here; late origin of earth’s inner core here; zircon mass screening here, Earth’s oldest surviving crust here, here and here; 4.4 Byo zircon here, and the existence of water on earth when oldest zircons formed, here and here; Habitability of Hadean Earth here; 3.43 Byo stromatolites here; Biological carbon isotope effect here; previous oldest evidence for life here. Image of zircon with granules via ibtimes. Stromatolite image from Britannica.
Death by Elephant
It is one of my life regrets that when in Delhi, I did not take the time to go down and see the Taj Mahal. This is not even the worst travel regret I have either. But it is the second worst. There was so much to see and do in Delhi back then. And I guess I tend toward a pathological dislike for the popular and fashionable. So, I missed seeing the building with my own eyes.
Filled with regret, I sat down at LA's Geffen Playhouse last week to watch Rajiv Joseph's Guards at the Taj.
The play opens as two friends are standing guard in front of the almost completed Taj Mahal. Childhood friends, they cannot keep to the strict rule of silence that their job demands. Surreptitiously, they talk of the stars and their dream of "moving up" to become guards in the emperor's harem... the ultimate job, they decide. Birds are singing. The beauty of their friendship and funny dialog, however, belies the extreme violence that follows in Act 2.
It is an old legend that after having the Taj built as a monument to his beloved dead wife, the emperor Jahan decreed that the architect and all the workers who had built the building would all have their hands cut off. When I was in India, I had actually heard that it was only the architect who was put to death. In any case, it is just a dark legend. Anyway, as the two friends stand guard happily dreaming of the emperor's harem, one tells the other about a rumor that is going around. The emperor, it seems, in his desire to ensure that nothing more beautiful than his glorious Taj ever be built again, will amputate all 20,000 workers' hands.
One friend says, "What a horrible job that would be to cut off the hands of 20,000 men."
"Yeah" says the other, "that's 40,000 hands."
In that moment, it then dawns on them that of course this is a job that will fall to themselves--as the lowliest grunts in the army.
And sure enough in Act 2, the stage is awash in blood and severed hands. (My friend Guita called it an early Halloween).
Despite the contemporary language and jokes, there is an element of classical Greek tragedy to the story--for neither character evolves or transcends anything. Rather, they both do what the situation leads them to do with their "characters becoming their destiny."
Two friends, Babur and Humayun. They do what the emperor orders, because if they don't, they will be trampled by elephants. And so they remove 40,000 hands that day.
The dreadful act complete, they each unravel. Humayun takes refuge in the fact that he was "doing his duty." He wants a better life, and if he had not done it, he would have been put to death. For everyone knows that in order for those in power to live lives of luxury, the common people must suffer in poverty and chains. This is just the way the cookie crumbles. Babur, though, cannot get past the idea that the emperor is such a megalomaniac that he seeks to kill Beauty itself. Babur becomes almost obsessed by what it means to kill beauty, and, seeing that for the monstrosity it is, he decides to kill the emperor, which, of course, would save beauty.
To kill beauty. Is it even possible?
Well, we know that the more power is held in fewer hands, the more despotic things become, even to the seemingly impossible notion of controlling beauty. Think of the lengths Hitler and Stalin and Mao went to in order to control aesthetic sensibilities and to steal and destroy art. Think of the Looty wallahs. Or think of how the consumer market today is turning art into a product, and how "art became irrelevant." My friend Hakha said that Ivan the Terrible was alleged to have done the same thing to the artisans who built St Basil's Cathedral.
One of my favorite novels, Rushdie's Enchantress of Florence, sees a similar exploration of beauty versus power in the court of the emperor Akbar (from Babur to Humayun; from Humayun to Akbar... then on to Shah Jahan, did it not go from bad to worse?)
But then again, is there (or has there ever been) a system that does not chew people up and spit them out?
The subjugated subject is not even aware of its subjugation
German philosopher Byung-Chul Han, in an absolutely brilliant short essay, Why Revolution is No Longer Possible, might suggest we are no better off today along these lines than we were in the Mughal courts of Hindustan. Only that today, rather than working through overt violence or even the disciplinary state that Foucault described, domination proceeds by way of market seduction. He says:
'The neoliberal system of domination has a wholly different structure. Now, system-preserving power no longer works through repression, but through seduction — that is, it leads us astray. It is no longer visible, as was the case under the regime of discipline. Now, there is no longer a concrete opponent, no enemy suppressing freedom that one might resist.
Neoliberalism turns the oppressed worker into a free contractor, an entrepreneur of the self. Today, everyone is a self-exploiting worker in their own enterprise. Every individual is master and slave in one. This also means that class struggle has become an internal struggle with oneself. Today, anyone who fails to succeed blames themselves and feels ashamed. People see themselves, not society, as the problem.'
As Heidegger predicted, everything is being commoditized-- even beauty and art. Even one's own self.
Han is correct, I think, that--as in South Korea--a vast consensus prevails in the US (and this is accompanied by epidemic anxiety and depression). When I try to imagine what places I have been to that somehow resist (even if just a little) the globalized neoliberal system of domination Han describes, I think I have found it a bit in both Japan and France. Yes, even in Japan Inc. I think it is still possible in both these countries not to have to "get with the program." For example, in both places agriculture is still not corporatized, and one doesn't have to join the system for health care or education. Efficiency and short-term quarterly financial performance are not the highest goods there either. As another friend points out, "it ain't all bad; there's Etsy and Kickstarter."
To be sure, but I guess the bottom line is to ask whether Han is correct or not: is it true that revolution is no longer possible? That is, is it death by elephant for us all?
I adore Salman Rushdie. In my opinion, no other living author is more deserving of a Nobel Prize for literature than the great Rushdie. Right now, I am in the middle of his latest, Two Years Eight Months and Twenty-Eight Nights. It is absolutely wonderful--and one of the themes in the book is the philosophical battle between the medieval philosophers Ghazali and Ibn Rushd. In fact, such was Rushdie's father's great admiration for the latter that he took a surname which paid homage to Ibn Rushd.
In this fantastic interview here, Rushdie describes his interest in the philosopher:
I’ve always been fascinated by him (as I was by Machiavelli, who became a character in my novel “The Enchantress of Florence”), both because of my father’s interest in him, which led him to derive our family name from his—my father wanted a modern, permanent surname, unlike the name-and-patronymic format, which was traditional—and because of Ibn Rushd’s rationalism. My father admired Ibn Rushd for his attempt to reconcile reason with religion, though he himself was not religious; and he bequeathed that admiration to me.
Known in the West as Averroes, Ibn Rushd was the great medieval defender of Aristotelian rationality and argued for its synthesis with religion. When I was young, by the way, I was similarly intrigued by the tension between Averroes and Avicenna (as presented by Henry Corbin), and I think this is a far better juxtaposition to examine. But either way, this urge to unify and synthesize opposites was one of the great hallmarks of the Middle Ages--both East and West. And as Rushdie sees it, this struggle continues on one level in our modern battles between reason, logic and science on the one hand (Averroes' greatest goods) and fundamentalism and spirituality on the other (Ghazali and Avicenna).
Sadly, the will to harmonize and unify ideas (this truly medieval enterprise) has been replaced by a modern tendency toward polarization and dichotomy.
Rushdie is interesting; for instead of being decidedly on one side or the other (and obviously, if he were made to choose, he would side with reason), his story rather illuminates the need for both sides. I loved, in his book Enchantress of Florence, when he had Shah Jahan imagining paradise as a place where people were allowed to argue without consequences. Worship as debate, he said elsewhere. Or "Paradise is a place where religion and argument mean the same thing." In the above interview, he explains this thus:
The story’s other battle is between the world of imagination (dreams, fantasy) and that of reason and science, and that the two should fall in love seemed, well, beautiful. I was also thinking of Goya’s marvellous etching “The Sleep of Reason Brings Forth Monsters,” to which he attached the caption “Fantasy abandoned by reason produces impossible monsters: united with her, she is the mother of the arts and the origin of their marvels.”
I love that. For in the same way that fantasy and the marvels turn into monsters without reason, so too does pure reason become a monster of sorts as well. In the play, both Humayun and Babur appreciate beauty; both lose themselves in dreams. But while Babur overreaches for the beautiful stars and loses everything, Humayun cannot imagine a new hierarchy that defies reason and loses everything as well. The two men cannot reconcile. The logic of their fate is only as tragic as our own....
Sughra Raza. Self Portrait at Whispering Bayou, Houston, 2015.
"The Contemporary Arts Museum Houston presents Whispering Bayou, an immersive multi-media installation that consists of a video triptych and a multi-channel soundscape composed of the sounds, voices, and images of Houstonians and their city. The project is a collaboration between Houston-based filmmaker, interactive multimedia producer, and community activist Carroll Parrott Blue; French composer and multimedia artist Jean-Baptiste Barrière; and New York-based composer and computer interactive artist George Lewis."
The Thrill is Gone: Six Months with an Apple Watch
by Carol A Westbrook
I love my Apple Watch...or I used to love it. Now, I'm not so sure.
I was thrilled when I got a new Apple Watch shortly after their initial release by Apple in April of this year, a birthday gift from my husband. I enjoyed the Watch so much that I bought him one, too, and have recommended them to all my friends. I loved my new Apple Watch.
But now, I find myself looking at other watches. Sometimes I even wear one of my favorite "traditional" watches. Yes, at times I miss my beautiful, elegant, reliable old timepieces.
Don't get me wrong. I love the way I can use my Watch to check the headlines, get the current temp or weather forecast, check emails and messages, and see if I made my daily activity goal, all with a quick glance and a touch. I can ask Siri a question, find a restaurant with Yelp and get directions on a small map without getting out my phone. Best of all, I loved the way I could "tap" my husband's Apple Watch or send him a quick message. Yes, I love my Apple Watch.
But... do I really like it?
From the start I enjoyed the attention I would get when I raised my wrist to check the time, and the screen would illuminate. Or even better, a call would ring and I would answer it by speaking into my wristwatch, like Dick Tracy used to do in the Sunday comics. Those of us of a certain age dreamed of owning a wrist-radio, but never thought it would happen in our lifetime!
"Wow, " people would marvel, "is that a new Apple Watch?"
Now that these watches have been around for six months, they are no longer a novelty. The thrill of being an early adopter has worn off, and my watch no longer gets much attention. As a matter of fact, my entry-level black "sport" Apple Watch ($349.00) looks surprisingly like an inexpensive Black Rebel Swatch ($70.00), as you can see by these pictures, Swatch on the left, Apple Watch on the right.
Sure, I could have gotten a pricier and more stylish Apple Watch, with a stainless case and band, but at $600 to $1000, or even up to $10,000+ it would have been hard to justify. It's not a Rolex, after all.
Of course there are other reasons I'm less than infatuated now. I looked back over the notes I made, to better understand how I felt when I first got the watch. I carefully documented my impressions, including my experiences during the three weeks that it took for me to master it well enough for everyday use.
And therein lies the rub...it took me three weeks to figure it out.
Frankly, I was surprised that the Watch was so complicated. Apple's philosophy has always been to keep their products consumer-friendly, with as much "out-of-the-box" ease of use as is possible for an innovative design. Not so for the Apple Watch, which is even more complicated than an iPhone.
From the time you put it on your wrist, the Watch presents so many novel, customizable features that require decisions, before you even know what you might want. It is somehow fitting that these selections are called "complications." Tapping the button brings up the next screen view, with multiple app choices that you have to manage with your fingertip. These floating, moving icons are easily missed by a big finger, as you can see on the right. The icons refer to apps from both the native operating system and a few that have transferred from your iPhone. Each one needs to be set, customized, and tweaked--once you figure out what it does. I expect that many people will not want to own an Apple Watch because it is so complicated to learn.
To add insult to injury, the Apple Watch operating system is susceptible to the hated "updates," which require a long download to your iPhone, followed by an upload to your watch. This procedure takes you, your Watch, and your iPhone out of circulation and near each other for almost an hour.
Perhaps the most annoying thing about the Apple Watch, though, is that it is a slave to the Apple iPhone. Although the fitness and heart rate monitors are inherent in the Watch itself, most of its other information-gathering functions necessitate that it stay within Bluetooth range of the iPhone. "Nearby" means purse, pocket, or the next room. Continuous Bluetooth connectivity requires continuous power, so the batteries of both the Watch and iPhone run out quickly. This short battery life is universally considered one of the most annoying features of this equipment. And of course, charging the Watch requires yet another type of Apple connector to add to your collection.
Many of the Apple Watch frustrations are due to the apps transferred from your iPhone. To paraphrase Samuel Johnson, it runs these apps like a dog walking on its hind legs: "It is not done well; but you are surprised to find it done at all." Some are great, others leave a lot to be desired. For example, Yelp works surprisingly well finding local restaurants and positioning them on a local map--also displayed on my watch--but Fandango does not list the local show times, instead displaying only purchased tickets, if any. The airline apps give accurate updates on your flights, but the boarding pass is useless because the display goes blank when the screen is turned upside down to scan for boarding. The German-English Dictionary from BK is amazing, as it converts speech into written German, handy when traveling abroad. The Watch will stream selected iTunes to a Bluetooth-enabled headset or car radio. On the other hand, it will not stream podcasts. For those of us Public Radio aficionados who enjoy Fresh Air and Car Talk, this is a major disappointment.
Don't get me wrong. The Apple Watch is a beautifully-engineered instrument representing a quantum leap in the world of small, personal electronics, in much the same way that the iPod changed our music habits, and the iPhone our cellular phone habits. More than that, though, the Apple Watch represents a paradigm shift in how we use wristwatches. There may come a time when "Wristwatch" will be a thing of the past, much like a dial phone.
In fact, it's beginning now. Last month, LG released their cellular watch, the LG Watch Urbane 2nd Edition, which they call a Smartpiece. Though it functions like an Apple Watch, including cellular phone capability, the LG has its own cellular modem and SIM card, so it does not require a nearby smart phone or a Wi-Fi connection. The LG Watch Urbane 2nd Edition looks more like a traditional watch, though its price will be closer to an Apple than a Swatch.
My mindset has been changed by my Smart Watch. The instrument has become well-integrated into my daily routine, and I rely on it to check the outdoor temp, monitor my activity, or take a quick glance at messages without retrieving my phone. I would be hard-pressed to put it aside and go back to a traditional timepiece. I miss my old watches, but they seem so, well, one-dimensional.
And after all, the Apple Watch keeps time, too.
The image of Dick Tracy is licensed under Fair use via Wikipedia - https://en.wikipedia.org/wiki/File:Dt2wrr.jpg#/media/File:Dt2wrr.jpg
Targeted Cash Transfers
We're trying to get the biggest bang for their buck--or rather biggest buck for their bang. Or you can say we're helping them pass the buck? Yes-yes--true--I'm a bit of a smart alec-- at Boarding School everyone thought I had the comic's gift. I would agree. Wouldn't you? Doesn't matter what you think as long as the money keeps rolling in.
But seriously, do you think that just before he slits my throat he'll think of this: How I compared his mother to fifteen goats? Is that what you're asking me? Come off it--my friend---- he's going to be angry yes—He's going to wonder why I didn't compare his beloved Ma to fifteen cows—cows fetch 20,000 rupees apiece. Livestock. So get your facts straight—it's cows, not goats. Each person in these parts who is killed by a drone attack is equivalent to the rupee price of 15 cows. Converted into dollars? At today's rates?---$194—that's about 38 chai lattes or thereabouts at your nearest café in Washington DC------You see if a civilian is killed we pay three hundred thousand rupees. If a cow is killed we pay 20,000 rupees. My wife's outfit yesterday—the one you liked so much—the one you want made for yourself---well that cost me 450,000 rupees—yes—four and a half lakhs. So you see, at 300,000 rupees it's a bargain. And we keep the costs down. We have a budget, a set quota of 2,000 civilian deaths. Anything ---or rather any more bodies-- over this allocated amount, we just categorize them as militants. That way we don't have to pay. You see? Nice huh? Just an accountant's little trick of the trade. But my God! What a headache to get the numbers right on this. Hours—days of negotiations man—We finally gave them a list---how much for what they target---destroy—working age man, woman, pregnant woman, elderly man or woman, child, girl child, boy child, fetus, baby, goat, chicken, cow—concrete house, mud house, number of rooms, vehicle, what kind of vehicle—farm tools---and so on and on. It's a long list. We perfected the system—this cash transfer system—after the big Earthquake. Gave out money for the dead.
And you have no idea what a headache it is—on the other side of this-----these—our people are such cheaters, such brigands, such liars….they will inflate their quotas of dead you see---no willingness to stay within the prescribed quotas---no principles at all---no integrity—they'll tell us that so many children have been killed---or so many women—we have no way of verifying how many women were inside those damn mud huts and so we have to rely on their word. Their word!!!! Makes you want emancipation doesn't it? These bloody losers keep them locked up----we can't even get an honest head count. And after a house is blown up how are we supposed to know how many were in there, right! No sense of setting down a principle based on science or math--just emotions! I tell you literacy my friend is important! Number one priority once we are in the clear.
So where was I? Yes if a civilian is killed we pay 300,000 rupees. It used to be 500,000 rupees. But we simply don't have the funds. We keep asking the Americans to pay but they say they are not made out of money---aren't the weapons--the bombs enough?
If a civil servant is killed—well then that fetches in the millions: 3 million rupees for a grade 16 officer, 4 million rupees for a grade 17 officer, 5 million rupees and so forth, up to 10 million rupees for grades 20 to 22.
If a big animal is killed it's 20,000 rupees. If a van or a car is destroyed it's 200,000 to 300,000 rupees or more.
It's very detailed you know—it's all written out—regulations, processes and procedures. The Civil Service is superlative in details. A national treasure—the civil service. Thanks to the British…And the railways. The British were marvelous you see---so methodical. Everything under rules and regulations. Not like today, when anything and everything goes. That's the difference between Empire—Imperialism—Order backed by divine right----and--- THIS---this----not sure what to name this yet. Jury's out on a name for it.
But enough of this boring talk about cash transfers---come see the new digs yaar----I've been doing well! We've built a farmhouse--the works--Olympic-size pool, citrus orchards, stables---you name it---it will make you drool. I can't understand why anyone would ever want to leave their own country when it's a gold mine!
Monday, October 26, 2015
David Michalek. After Muybridge. From the Figure Studies
"Figure Studies, second work in the ACTION/FIGURES series, applies high-speed video to the recording of specialized and non-specialized human movement. Figure Studies builds on a previous work from 2007, Slow Dancing ..."
What does it mean to stay ‘Present’? Can we control our thoughts?
by Hari Balasubramanian
This piece is framed as a 'conversation', but it is really a debate between two voices or perspectives in my own head (here's a similar piece from last year).
"There's a lot of talk these days about 'staying in the present moment', 'being mindful', etc. I find it all quite puzzling. Because it doesn't matter what is going on or what I am thinking, I am always in the present – isn't that the case?"
"Well, I find myself usually thinking of the past or projecting future scenarios…"
"Sure – that's true for me too. But isn't it true that thinking of the past or the future also happens the present? A memory of the past is somehow retrieved now in our mental space and we say we are thinking of the past. The screen on which the past unfolds or the future is projected is always the present."
"You can get very technical about it if you like. The idea of being present is simply to clear your mind of unnecessary and – on many occasions – troublesome thoughts which keep taking you on needless mental journeys."
"Okay – then what remains when your mind is clear of thoughts?"
"I guess you experience sensory stimuli going on right now – you feel how cold the wind is, or how red that piece of cloth is, how bitter the coffee is and so on."
"And why are these sensory perceptions more special than thoughts of the past or future? Isn't the feeling that the coffee is bitter a kind of thought too – you taste the coffee and something in your mind, some kind of past knowledge or memory, learned or ingrained, but which is still thought, informs you that it is bitter."
"At least it is more immediate…"
"Yes, but the present is already the past by the time you label something. Thought is always one step behind whatever is unfolding…"
"But you are missing the bigger point. You can get too preoccupied with events in your life that have already happened and cannot be undone, and you can get excessively anxious about what might happen tomorrow; you can get into a cycle where such thoughts keep running in your head, like a broken record. So it isn't useful to encourage this type of mental activity."
"I don't disagree with what you say. Though I must note here that thoughts of the past can also be essential – without them you can't learn and function intelligently. And thoughts of the future allow you to plan ahead – again very essential in daily life, work etc. To say that these are to be avoided is not correct. But I do get the drift of your argument: we dwell in thought patterns that are not efficient, that do not contribute to anything but take up a lot of energy, and being present is a way of keeping them off…"
"Yes, that's exactly right…"
"Let's explore this further. How exactly does one stay in the present?"
"Well, the typical method in so-called meditation or mindfulness exercises – and I am not saying these are the only methods or that they are correct, these are just examples – is to focus on one thing, an image, an object, your breath. Whenever you find yourself lost in chains of thoughts, bring your attention gently back to the object of your focus. The object of focus is your proxy for the Now. Following the breath, for example, is like following the Now. The most open-ended technique, which is the hardest to convey, is to be simply be aware that you are aware. Every time you are lost in some mental jumble, you simply become aware of it and thereby create a pause. The method does not matter so long as you can maximize the space between thoughts – it is in this space that you are present."
"I have tried these techniques that you mention. Whatever their new-agey or mystical connotations, they are at heart very empirical exercises. They are empirical because you put your own mind, which is really a bundle of thoughts, under close observation (although, crucially, what remains unresolved is exactly who the observer is). You find some very interesting things. You mentioned ‘chains of thought' and I think it's an apt term. Literally, thoughts can be ‘chained' to one another. You have thought T1, followed by thought T2, followed by thought T3 and it just keeps going on this manner. Before you know it, you may dozens of linked thoughts!"
"What are these links you are referring to?"
"Well, I've noticed that successive thoughts have something in common. I look at the red blanket on the bed, which in turn reminds me that the blanket was purchased in a store in India, which in turn links to a conversation I had with a close friend the exact day that I purchased the blanket. That conversation was about the upcoming elections in India, which in turn leads to a thought about the current Prime Minister, Narendra Modi, and how things have gone since he was elected. So from a blanket, I've jumped to a political figure and the state of the entire country in the space of a few thoughts!"
"Yes, this goes on until you shake yourself off these thoughts and return to the present. Or something from the present catches your attention – say your phone begins to ring…"
"You could put it that way. But I am not so sure. The fact is that the images, perceptions, thoughts and sensations you are experiencing always change. This constant change is what we conceptualize as time. When your attention switched from chains to thought to, say, the abrupt ring of your phone, what changed was only the content; you like to call one as ‘Present' and the other as ‘Not Present', but as I see it, it's all the same. When a movie is projected, the screen shows all kinds of things, things you like and things you don't like, but the screen itself is neutral to the content. In the same way the Present – which is really another name for consciousness – cannot really be pinned down or described (after all if everything changes continuously then it's clear that even a discrete moment does not exist); yet the Present is somehow is the ‘no-thing' that enables the ever changing thoughts or sensations or perceptions that each individual experiences."
"That all sounds very nice, but are you saying that since everything is in the Present, we should give in to whatever thoughts that come and make no effort to exert some kind of control?"
"I am only asking: can we control or choose what thoughts come to us? It seems we can control, to some extent, our reaction to a thought once it has appeared – even of that I am not 100% certain – but can we control what kinds of thoughts we will have? Sometimes we are able to consciously generate a thought, but for most of the time, waves of thoughts seem to come to us out of who knows where!"
"That is correct – we seem to have some control only in retrospect, after a thought has begun to arise. But the reaction to a thought or reaction to anything in your conscious experience – that's very important, that's where there appears to be some choice."
"And yet, the reaction to a thought is a thought too, isn't it? When you want to stay ‘present', what you really want is to generate a counter-thought, a kind of reminder, which stalls your current chain of thoughts. In other words, you want one type of thought in your mind to restrain/tackle other types of thoughts which arise from the same mind. This is – as one Indian saint noted – like making the thief the policemen, and this so-called policeman will go with you and pretend to catch the thief. Naturally, in the end, nothing will be achieved!"
"If nothing can be achieved, then why do meditation and mindfulness have so many beneficial effects, which many swear by and which even scientific studies are now confirming?"
"I cannot speak for others of course. At the end of a meditation session I too always felt good. But then one day it struck me: does it feel good because the struggle to keep track of and control thoughts is over? If meditation has any use, it is to demonstrate that the notion that thoughts can be controlled may be an illusion. This realization is of great value because if there is very little I can do, then why bother, why not just take it easy! Let thoughts come and go as they want! If a thought or an emotion demands my attention, I will give it attention and when the time comes, the thought will eventually go. Paradoxically, such an acceptance of thoughts may actually slow things down and bring about a deeper relaxation. But all this that I am saying is just my own experience. It is always easy to talk intellectually. What happens in practice is quite another matter!"
The connectedness of things
by Sarah Firisen
The week before last I changed the sheets on my bed. Stripped the fitted sheet, the pillow cases, bundled it all up in my arms, threw it in the washing machine and turned it on (I’m lucky enough to have a washer/dryer in my apartment in NYC). About 3 minutes went by, maybe 4. I suddenly felt that something was wrong, something was missing. I looked on the kitchen counter, on the coffee table, ran into the bedroom and looked on my bedside table, but the sickening feeling in the pit of my stomach told me what I already knew; I ran to the washer, opened up the top, reached inside, felt around and there it was, my iPhone. The sheets weren’t soaked, but they were pretty wet, and a decent amount of water was already in the washer. I knew the drill from when my daughter had dropped her phone in the toilet, but in the panic of the moment there were steps I forgot or overlooked. Luckily I had bulk ordered Arborio rice (I like risotto) and so quickly dumped 3 bags’ worth into a bowl. Took the phone out of its case, which in this circumstance had probably done more harm than good, trapping the water nicely. Put the phone in the rice, put the bowl in a warm dry spot as dictated by the various guides to such things I found on the internet (which luckily I could still access via my laptop), and prayed. My daughter scolded me – “and you know, you have to wait at least 72 hours!!” Her concern was hardly selfless; the plan was that when I was eligible to upgrade in just over 6 weeks, she’d get my old phone to replace her almost totally defunct iPhone 5.
Those 72 hours were hell. I have no house phone, so no way to call anyone and even if I did, I don’t know anyone’s phone numbers except my aunt and uncle in England because they’ve had the same phone number since I was 7 and my ex-husband who’s had his mobile number at least 10 years.
I do everything on my phone: banking, airline check-ins (I fly a lot) and boarding passes, pay my rent, stay in touch with loved ones, read the New York Times and the New Yorker, read books, and listen to music. Without it, I’m ashamed to say I was bereft. I couldn’t work out because I really need to listen to music to motivate me and I had no way to do that. Could only communicate with friends and family through email and Facebook from my laptop which left me rather housebound. And there are some personal details at the intersection of personal hygiene and technology that I can’t even bring myself to report with greater clarity. Suffice it to say, I was lost. It was a very long 3 days. Actually more like 2 ½ because I cracked around 11am Monday morning, took it out of the rice and tried to turn it on.
The Genius at the Apple store later told me that this rice “miracle” is bogus: any phone that turned on after it probably would have turned on after 3 days left alone on a table to dry. I don’t know. What I do know is that my phone was dead as a dodo. It was still under a limited warranty, and so for $300 I was able to replace it with an identical phone, which I will use until I’m eligible for my upgrade in a few weeks and can hand this phone down to my equally technology-addicted teen.
The relief when I held that replacement phone in my hand is truly shameful to relay. The past few days it had felt like I was missing a limb. I know that somehow we used to navigate our way around without mobile mapping apps; I just have no memory of how we did that. How did we find places to eat and shop without Yelp? Was I going to have to search under my bed for my TV’s remote control now that I couldn’t use my phone to control it? Over that weekend, I’d had to cancel social plans with friends because the concept of making a plan that couldn’t then be modified at all, because I’d be unable to receive those last minute “just running 30 mins late”, “could we move this to the Village” texts, was inconceivable to all of us. How did we used to do this? I guess we made plans and then didn’t change them? I just can’t remember exactly how it worked.
I’m writing this on a plane on my way from JFK to San Francisco. The Wi-Fi isn’t working and the apologetic tone that the attendant used to announce this used to be reserved for relaying that the toilets wouldn’t flush or that they’d run out of food. People audibly groaned. I felt myself twitching with anxiety. 5 and a half hours. A lot could happen in that time. What if my boss needed me (and it’s Sunday)? My kids? My boyfriend? How would I know that Amazon had delivered my package and that my payment had been received by Chase?
I’m on my way to San Francisco to run a leadership development meeting focused on disruptive innovations and the impact they’re likely to have on my company’s clients and beyond. One of the trends we’re looking at is the Internet of Things. This McKinsey article states “bottom-up analysis for the applications we size estimates that the IoT has a total potential economic impact of $3.9 trillion to $11.1 trillion a year by 2025. At the top end, that level of value—including the consumer surplus—would be equivalent to about 11 percent of the world economy”.
The internet of things, where a bunch of things will be connected to each other, has its detractors and its promoters. Is it totally overhyped or a concept whose surface we haven’t even begun to scratch? “The IoT will fundamentally alter how humans interact with the physical world, and will ultimately register as more significant than the internet itself.”? I err on the side of believing the latter. If the history of the last 20 years has shown anything, it’s that most of us fundamentally underestimated how dependent we’d become on technology and its ability to make our lives easier, more convenient, more immediate and more mobile. And these changes, this dependency has been moving at an exponentially fast pace; 6 years ago I had a Blackberry that really couldn’t do much more than receive work emails. If I left it at home I barely blinked. I remember getting my first smartphone and the world of possibilities it seemed to open up to me. I had no idea just what it would bring.
I recently got a Fitbit. A lot of people I knew had them and raved. I wasn’t sure. Apart from anything else, I’m not someone who likes things on my wrists; I don’t wear bracelets and I haven’t worn a watch in over 30 years. The first couple of days it felt really alien. Now it feels weird when I take it off to charge it. Human beings are incredibly adaptable. It’s one of the reasons we’re still here. Not only have I got used to the feel of something on my wrist, I find I like being able to glance at my wrist to see the time (there’s a built-in watch) rather than getting out my phone. It’s all very retro in a way, but it’s quickly become what I’m used to. It’s amazing what we can not only get used to but come to depend on. So much of what I’ve been reading about in my research on the Internet of Things and other technologies, 3D printing among them, seems like the stuff of sci-fi, but actually most of it is not of the future; it’s of now. And these things will very soon be household concepts. Of all the things you can already do with 3D printers, perhaps the most fun is to print personalized gummies. I read this and all I could think of was the replicator in Star Trek. It’s coming. And soon. Gene Roddenberry should have bought stock in HP all those years ago.
I’m sure there are many people out there reading this who are far less dependent on their phones than I am, self-righteously judging me for my 72-hour freak-out. All I can say is, let’s touch base again in 5 years.
Monday, October 19, 2015
American Indian Political History
by Akim Reinhardt
The publisher's website for the book is here.
The book reconsiders the history of the Oglala Lakota people, the largest branch of an Indigenous nation commonly known as the Sioux.
The word "Sioux" is a French corruption of a 17th century Anishinaabe (Chippewa or Ojibwa) name for the people who call themselves either Dakotas or Lakotas. Whether one says "Dakota" or "Lakota" depends on which dialect of that language one is speaking or associated with; one of the main differences between the Dakota and Lakota dialects is the pronunciation of the letter D, which Lakota speakers pronounce as an L.
Of the seven Lakota-speaking groups, the largest are the Oglalas. From among the ranks of 19th century Oglala political leaders are some of the most famous Indigenous names in American history, including Tašuŋka Witko (Crazy Horse) and Maĥpia Luta (Red Cloud).
Pine Ridge Reservation, in the southwestern corner of South Dakota, has been home to the Oglala Oyate (nation) since the 1870s; prior to that the Lakota empire spread over much of the northern Great Plains. It is also where Ĉaŋkpe Opi (Wounded Knee) is located, site of both a brutal U.S. Army massacre of approximately 200 Indigenous people in 1890 and an occupation by Oglalas and their supporters from the American Indian Movement (AIM), which was laid siege to by the federal government in 1973.
My first academic book, published in 2007, was a study of Oglala politics on Pine Ridge during the mid-20th century. This new book offers a more comprehensive political history of the Oglala Oyate.
Part I of Welcome to the Oglala Nation is a narrative essay that outlines Oglala political history from its origins as part of a Dakota/Lakota confederacy founded in modern-day Minnesota during the 14th century, up through the early 21st century.
Part II of the book is a collection of 60 historical documents that I selected and edited with an eye towards illustrating Oglala political developments during the past several centuries.
Part III of the book is a bibliographic essay on Oglala political history.
I am offering here an excerpt from the book's Introduction, which discusses the nature of historical research and writing generally, and with regard to American Indian history more specifically.
From Welcome to the Oglala Nation1 . . .
The stories we tell about the past are important. But just as important are the ways we choose to tell them. Of course we must strive for factual accuracy. But beyond that, people also want their stories to have truth, and that is a very different thing altogether.
A story is not made up of facts alone. Facts are merely the building blocks we use to construct a story. The deeper meaning we gain from a story, our sense of truth, is determined by how we select, organize, and shape those blocks into something bigger. And so while many facts are indisputable, the truth is frequently up for grabs, depending on what exactly we do with those blocks.
At first, this may seem counterintuitive. How can the same facts reveal multiple truths, some of them at odds with each other? Yet differing interpretations of agreed-upon facts are part and parcel of storytelling. For example, it is not unusual for two people who witness or participate in the same event to offer radically different interpretations of that event. They may agree on what happened but stridently disagree about why it happened and what its implications and consequences are. Standing on the same facts, each person may tell the story differently to promote his or her own version of the truth.
In seeking to understand the past, writers and historians also offer differing interpretations after consulting the same sources. They may agree on what happened but disagree on what it means, on how to describe and interpret it. For example, consider a 2013 USA Today newspaper article in which the writer referred to the nineteenth-century Oglala Laķota leader Maĥpia Luta (Red Cloud) as a "cunning and brutal warrior" and his followers as "bloodthirsty warriors." Another recent writer, however, describes Maĥpia Luta as a "spokesman," "statesman," "productive political leader," and "one of the most talented" politicians of his era. Two writers, drawing on the same wellspring of facts, can produce very different portraits of their subject.2
When it comes to American Indian history, non-Indians have long dominated the storytelling, and they have generally told certain kinds of stories, offering a peculiar version of the "truth." In film, television, literature, song, art, and even popular history, non-Native people have frequently used stereotypes and clichés to define American Indians. Native Americans are typically portrayed as noble and courageous people, full of mystical and spiritual insights, but condemned to an inescapable and tragic destiny. Cruder stereotypes include "bloodthirsty," "cunning," and "brutal."
The standard, pop-culture story line of American Indian history also has a very restrictive chronology. It goes something like this: For thousands of years before Columbus, Indians lived harmoniously with unchanging nature in a peaceful, Edenic wilderness. Paradise was lost with the arrival of Europeans, who had already eaten from the Tree of Knowledge and brought with them all the blessings and curses of civilization. And while Indians' simpler life allowed them to retain earnest nobility and poetic souls, they could not stand up to the technology, avarice, and unscrupulous methods of the more-sophisticated, dynamic, and ascendant Europeans. Thus were Indians doomed to lose a grand struggle that was as inevitable as it was tragic. But it was also for the best in some ways, because Indians were people of a bygone era, obstacles to the United States' starry-eyed destiny.
Throughout the twentieth century, that basic story line dominated how most Americans, and eventually the rest of the world, understood Indian history. And of course it is utter bunk. Steeped more in myth than any reasonable version of the truth, the popular story was incomplete, often inaccurate, and always self-serving. It relied on ahistorical and fatalistic perceptions of American Indian societies and people as static and lacking agency. At best it was a gross oversimplification of a long and complex history; more often it was simply wrong.
However, this version of the Indigenous past persisted and proliferated because Americans nearly monopolized the storytelling; such was the legacy of the U.S. colonial conquest of Indigenous nations. Facing few rhetorical challenges, Americans constructed a version of Indian history in which Indians were fated from the beginning. Through the selective use and misuse of facts, and even outright errors of fact, Americans developed an interpretation of American Indian history that bolstered U.S. national mythology. For it was often the case that when Americans seemed to be telling stories about Indians, they were really telling stories about themselves. For a long time, many Americans used the Indian past as a prop for reciting a story about the United States. In this way were Indians reduced to mere foils and tragic antagonists while Americans made themselves the stars of history. Indian defeat was necessary for explaining American victory, and Indian tragedy helped define American glory.
Americans largely misrepresented the past by casting themselves as superior winners and Indians as noble but inferior and tragic losers. This had the effect of glorifying the United States, but more importantly, it contributed to the continued persecution of Indigenous people by advancing a truncated and perverse version of Indian history. The misuse of Indian history has been part of the ongoing repression of Indian societies and cultures.
History, whether it is told well or poorly, affects how we understand the present and suggests what tomorrow should look like. People usually consider the past while attempting to understand the present and develop policies and goals for the future. History is not the private domain of professional historians. Rather, perceptions and misperceptions of history inform the decisions and actions of politicians, business leaders, cultural figures, and other influential members of society.
After the physical conflict of warfare had largely ended in the late nineteenth century, U.S. politicians, businesses, churches, artists, and individual citizens continued to attack Indigenous cultures, societies, governments, and economies. As the twentieth century opened, persecution continued in renewed forms. Sometimes these attacks were overtly hostile or indifferent to Indian people. Sometimes non-Indians were even motivated by good intentions, although such intentions were often corrupted by problematic and self-serving interpretations of American Indian history. Either way, the damage inflicted was real.
And so it is that the way we tell a story ultimately becomes part of the story itself . . .
The complexities and nuances [in a truer version of Indian history] are vast. For example, Indian nations often competed with each other as much as, or even more than, they competed with European and American invaders. And although Americans eventually took control of what is now the United States, Indians greatly influenced the outcome, both for themselves and for Americans. Furthermore, hundreds of Indian nations persist to the present day, and their history did not come to a tragic end or neatly demarcated conclusion at any point during the nineteenth century. . . .
There are of course many setbacks in the story of the Oglala Oyate (nation), including the types that are often recounted in popular histories of Indian people. However, the story of the Oglala people is also one of triumph and perseverance. From before contact with Europeans and Americans, and ever since, Oglala Laķotas have conducted relations with other nations, either in alliance or in competition, sometimes gaining the upper hand, and sometimes not. The Oglala nation has also continued to conduct its own affairs, even when its ability to do so has been limited by U.S. colonialism. But despite history's many twists and turns, the Oglala nation is still here. This book offers one of the new ways to understand their story.
1Most citations in the published text have been removed from this excerpt for the sake of brevity.
2Don Oldenburg, “A Brutal History of Red Cloud and the Indian Wars,” USA Today, November 17, 2013; Robert W. Larson, Red Cloud: Warrior-Statesman of the Lakota Sioux (Norman: University of Oklahoma Press, 1997), 304.
The Intriguing Case of GK Chesterton (And Other Would-Be Saints)
Not far from Amman, just outside the city of Salt, is the shrine of the Old Testament prophet Joshua. It is a simple building containing nothing but a tomb. But what a tomb it is; for at about ten meters long, it makes quite an impression!
Indeed, ever since visiting the Tomb of Joshua, I've come to feel that all saints' tombs should be super-sized like that.
It's not just saints either. Throughout Asia, one can follow the trail of the gigantic footprints of the Buddha ("Buddhapada"). These were the first "relics" of the religion before the rise of Greco-Buddhist art. From Japan to Sri Lanka, these monumental footprints abound and some are the size of a bathtub!
It is, you have to admit, somehow pleasing to see the great stature of these saints and sages reflected in their great physical size....a kind of inner greatness reflected in their after-impressions....
This larger-than-life quality is just one of the myriad things I like about GK Chesterton. Not one to be outdone in anything, the prolific British writer had a massive final resting place. Like the prophet Joshua's, his coffin was so huge that they simply couldn't get it down the stairs and out of his house for the funeral! Chesterton was, it seems, enormously fat. But as this wonderful old article in the Atlantic has it, this shows you how levity meets gravity-- for he was in many ways a man of Biblical proportions!
Speaking of which, have you heard the Catholic church has opened an investigation into a possible case for his canonization?
What is it about Chesterton?
Nowadays, I think people know him best for his Father Brown detective stories (I personally much prefer Akunin's absolutely brilliant Sister Pelagia stories). Though he is mainly overlooked now, there was a time when Chesterton was considered one of the greatest minds (and writers) of his day...But that is not what draws me to him. What I have always loved about him is the way he shares in some of the medieval predilections that I tend to admire in a man.
In July, I wrote in these pages about some men I've long admired; men whose commitment to the glories of the past were only to be outdone by their fierce resistance to the "modernisms" of their day.
Mi Fu, for example, liked nothing more than walking the streets of the Song dynasty capital in the fashions of two hundred years prior; while his beloved Emperor Huizong devoted himself tirelessly to the uncovering and study of ancient bronzes. Their eyes were strictly focused back in time--to a perceived golden age that was firmly rooted in the remote past.
CS Lewis and Tolkien were like that--as were the famous Catholic converts of pre-War England, Evelyn Waugh, Graham Greene and yes, GK Chesterton. All of these men were not just fascinated by the past; they were actively engaged in resistance to their present reality. Their conversion to Roman Catholicism was greatly informed by this retreat into ancient ritual and traditional practices. And, in England, such conversions made front-page news.
Why would anyone leave the Church of England, was the question.
The authors themselves explained it in essays and articles for the British papers. Theirs was a reaction, they declared, against the mechanistic, capitalistic, aggressive age they had come into. Evelyn Waugh, with his ear trumpet and hatred of the telephone, called this "the chaos of modernity." This was the world that was in many ways inherited by Margaret Thatcher and Ronald Reagan and then passed down to us today. Utilitarian, money-oriented, and lacking in enchantment, is it any wonder they withdrew intellectually, turning instead to such obsessive pursuits as poetry, quests, saints, and unicorns?
Responding to the front-page headline reporting his scandalous conversion, Evelyn Waugh explained it like this in an article for the Express in 1930:
"Today we can see it on all sides as the active negation of all that Western culture has stood for. Civilization - and by this I do not mean talking cinemas and tinned food, nor even surgery and hygienic houses, but the whole moral and artistic organization of Europe - has not in itself the power of survival. It came into being through Christianity, and without it has no significance or power to command allegiance. The loss of faith in Christianity and the consequential lack of confidence in moral and social standards have become embodied in the ideal of a materialistic, mechanized state . . . It is no longer possible . . . to accept the benefits of civilization and at the same time deny the supernatural basis upon which it rests."
Chesterton was huge in all this. For example, it was the publication of his book Orthodoxy that influenced such thinkers as Evelyn Waugh. It wasn't just Waugh either, for his work had a profound influence on other converts, including Edward Sackville-West and even the Anglican thinker CS Lewis, who declared he was an atheist before reading Chesterton's The Everlasting Man. His bon mots and famous quotes were such that there even now exists a dictionary of "Chestertonitions." But a saint?
The possible case for Chesterton began in 2013, when the president of the GK Chesterton Society, Dale Ahlquist, suggested the idea to Bishop Peter Doyle of Northampton, England. It was then Doyle who made the formal request to open the investigation for cause. As it happened, a year earlier, a very conservative Catholic archbishop in the US utterly surprised people in this country by championing the case for the fiery left-wing social activist Dorothy Day to be canonized. An unexpected choice for such a conservative archbishop--and yet, no one could deny Day's tremendous work with the poor. Even in common speech, one wants to call her "a saint."
I recently read a short but interesting book about the case for Chesterton, called The Tumbler of God, by Robert Wild. In it, Wild unpacks the idea of the mystic. We commonly think of mystics as those people who see visions or who undergo great trials for their beliefs. Finding their God within themselves, they tend to be inner-focused and are known for their mystical visions. Wild believes there is another kind of mystic: a mystic whose mystical vision is of the world.
Kafka once said that GK Chesterton was such an incredibly happy person that you could almost believe he really had discovered God. Well, Robert Wild thinks he had.
All this was on my mind during the Pope's recent trip to the US, with his mention of two other great contenders for sainthood, Thomas Merton and Dorothy Day--for they too were deeply influenced by the words and life of GK Chesterton.
In his speech, Pope Francis held up Day and Merton as models of Christian living who anticipated what was made universal by the Second Vatican Council and expressed in Gaudium et Spes: that Christians are called to interpret the “signs of the times” in the “light of the Gospel.” To be called not just to interpret but to spread the light of the Gospel would perhaps be closer to what is meant.
And this is an important point. When Pope Francis visited, like a lot of people I felt unnerved by the media portrayal of the Pope--as Good Guy or Bad Guy, depending on your political persuasion...As if the entire world has to be slotted into the left or right box of American politics. People seemed to momentarily forget that Pope Francis is not an American politician or public intellectual; nor is he in any way a part of the bipartisan politics of this country. Rather, the man is the Bishop of Rome and spiritual head of the Catholic Church.
This was all made very clear with the canonization of Junipero Serra. By today's standards, the man really cannot be held up as particularly exemplary, can he? In fact, of all the would-be contenders for sainthood, it is Dorothy Day alone who I think comes closest to the ideal (my ideal?) of a "saint." But in the end, the church does what the church does and it's not my business. I just happen to love the writing of many of these British "converts;" my own personal favorite novels of this genre being Waugh's Brideshead Revisited and Helena, and the amazing Anglican "returnee" Rose Macaulay's Towers of Trebizond, which stands as one of my favorite novels of all time.
For what it's worth, Chesterton had some wonderful words to say about saints:
“The saint is a medicine because he is an antidote. Indeed that is why the saint is often a martyr; he is mistaken for a poison because he is an antidote. He will generally be found restoring the world to sanity by exaggerating whatever the world neglects, which is by no means always the same element in every age. Yet each generation seeks its saint by instinct; and he is not what the people want, but rather what the people need . . . . Therefore it is the paradox of history that each generation is converted by the saint who contradicts it most.”
Like the wonderful statues of the various Bodhisattvas from Japan offering blessings in the form of medicine (held in beautifully shapely jars), I agree with Chesterton that saints could serve as a kind of medicine for the world. Writing hundreds of thousands (if not millions) of words--all dedicated to the wonders of being alive, Chesterton offered medicine in the form of an almost radical happiness and hope. Unendingly devoted to enchantment and play, Chesterton's vision does have this incredible medicinal quality.... he is like a 300 pound happy pill.
The true object of all human life is play. Earth is a task garden; heaven is a playground.
Chesterton wrote tirelessly that if one could only retain the innocent sense of play and wonder of the child, one could feel the great wonder that is our world. And in that way maybe he really does serve as a kind of antidote for the joylessly mechanistic and oftentimes wonder-less world in which we find ourselves? I, for one, stand with Chesterton for saint!
Ironically, though, after the Second Vatican Council, while Day and Merton were no doubt pleased, one cannot help but wonder whether the two aristocracy-loving writers, Chesterton and Waugh, would not have retreated back to the Anglican high church... since maybe, rather than God, it was Beauty and unchanging tradition toward which they were gazing?
Or as AN Wilson once declared, they might have had to turn to the Koran instead.
Top Image: Madresfield Hall (The blueprint for Brideshead) and the Kudara Kannon
Recommended: The Problem with being Spiritual but not Religious
Rashid Arshed. ID-eighty-five.
Have the Internets Rotted My Brain and Wrecked My Mind?
What about television? The movies? Or, even, heaven forefend! the book? But then, what is a mind that it can be wrecked, or not?
The brain we know about, and what rots it. Cancer rots the brain. So do drugs of the wrong kind, and there are lots of wrong kinds, though there’s some dispute on specific drugs. But the mind, it’s not at all clear that the mind is the sort of thing that can rot, at least not literally. Metaphorically, sure.
But why would I even entertain the idea – and that's all I'm doing, entertaining it – that the internets have wrecked my mind? Well, for one thing, I'm a mature adult and for several years now I've spent several hours a day, every day, not only working at my computer – which, in some uses, is no more than a glorified typewriter – but cruising the web. And I rarely read an article all the way through. I'll read a couple of sentences, maybe a few paragraphs, and then move on. In some cases I'll link the article to my Facebook page or even cut, paste, and comment a post to my home blog, New Savanna.
And then I’m back on the prowl, checking out my Facebook friends, seeing how many hits I’ve gotten at Flickr, how’s the conversation over there at Crooked Timber?, any useful stuff at 3 Quarks Daily?, what’s the gossip on JCList? And so it goes. For hours. Everyday.
And you can’t even write a proper sentence! What kind of sentence is that: “For hours”? Where’s the verb? The subject?
Chill, dude. That’s not a sentence. It’s a mind fart.
Mind fart!? What kind of language is that for an intellectual publication like 3QD!
Dude, 3QD is on the internets, yo!
You see the problem/evidence, don’t you? Distracted. Can’t hold a line of thought for more than a minute or three. Sure signs of internet-induced mind rot.
I mean, it used to be that I’d read several books a week. I am, after all, a Ph. D. scholar in English literature. Do you know how many books you have to read, cover-to-cover (no Cliff’s Notes), to get that kind of degree? Lots and lots. What’s worse you have to think about what you’re going to say in seminar even as you read the (damn) book.
Do you have any idea what that’s like, you’re knee deep in Wuthering Heights – forget Heathcliff, he’s no good for you! – and you pause to make a note in the margin – man hangs dog – so you’ll have something to remark about for seminar. Except it’s hardly ever necessary. Like, the idea of seminars is you discuss things. [We’re not undergraduates now, getting lectured at by the Great Scholar.] But that’s not how it worked.
How it worked is that you read the book for the week, plus a critical essay or three, get prepared to say something, and then you sit around the table, you and half-a-dozen to a dozen other graduate students. You just listen to the professor ramble on and on and on for an hour-and-a-half. Then maybe there’s fifteen measly minutes of discussion. If that.
And the guy – it was mostly guys, in my case, it was all guys, though there WERE women in the department – doesn’t really prepare. It’s like he’s got this captive audience and by gum he’s going to try out his latest ideas on us. Why? Because no one hardly ever gets feedback on journal articles and convention presentations are hopeless. These graduate students are my only chance for a live audience.
But I digress. This is about me, and my distractions, not my long-ago professors and their thwarted desires.
As I was saying, however many books I've read in the past, I hardly ever read a book cover-to-cover these days. I figure I've read 2/3rds, maybe 3/4ths, of Peter Gärdenfors, The Geometry of Meaning. That's about natural language semantics, a long-standing interest of mine, and very good. Essential, I'd say. But I didn't need to read the whole thing, at least not now. Maybe later. I think I read all, or at least 90% of, Tim Morton's Hyperobjects. Lots of markings in it. Reviewed it right here in 3QD. Good book, very good book. Not quite my cup of tea, but then I'm not the measure of all things, am I?
And I’m damned sure I read all of Matt Jockers, Macroanalysis. Not only read, but studied it. Blogged up a storm about it. Mentioned it here in 3QD as well.
So that’s three books in the last few years. And big hunks of Alex Mesoudi, Cultural Evolution: How Darwinian Theory can Explain Human Culture and Synthesize the Social Sciences. [Really? All that in one 250-page book? That’s frightfully ambitious, no?]
That’s not much for a professional – if unemployed – humanist. Not over the course of two or three years!
There’s no fiction in there, and me, a scholar of literature! A year or so ago I started out to blog my way through Goethe’s Faust. I’d read it my freshman year in college (Johns Hopkins). Took a course on it. Wonderful book, great teacher, Harold Jantz. Markings all through the book.
But I was only able to blog my way through the first part. I just wasn’t interested in the rest. Though, who knows, I may return to it. One of these days.
And it’s a classic.
I’ve binge-watched Breaking Bad, House of Cards, and most recently, Narcos. I just purchased Mad Max: Fury Road from iTunes so I can watch it again (which I did last Thursday evening) and again. Perhaps I’ll even study it, and the various "making of" extras. But Faust? Not so much.
You see why I’m entertaining the idea that the internets have rotted my brain? I’m not doing the things I used to do, the literate things, the book culture things. I’m watching movies, and YouTube. Lots of YouTube. Mnozil. Japanology. Victor Borge. Count Basie. And on and on.
But it’s only entertainment. I don’t really think my mind has rotted – though I suppose there are those who think it’s never been anything but. I’m just not doing the things I used to do. The web is mixed in with this, but I can’t really blame it all on the internet.
There was a time, long ago in the 1970s, when my father told me that he didn’t like to read as much as he’d used to. Really couldn’t finish a book. He was a reader. Had inherited lots of books from his father. Read to me when I was a kid: Treasure Island, Huckleberry Finn, Moby Dick – now that I think of it, I suspect he skipped some sections of that one. The classics, books he’d read when he was young. I loved those bedtimes.
By the early 70s he was in his early 60s and couldn't read like he'd used to. [And why did he remark on that to me?] He'd retired by this time and I was in graduate school. Figured maybe he just wasn't reading the right books. Got him a copy of Humboldt's Gift, later Ironweed. Don't think he ever read them.
At the same time, much to my surprise, he seemed to spend a lot of time each day playing solitaire. Solitaire! Why? Though I didn’t know it at the time, it seems that a lot of workers have trouble adjusting to retirement and so, as this article in The New York Times indicates, those who can afford it now hire professional retirement coaches to help them make the transition.
Could it be when you’ve spent so much of your adult years in work mode, that you have trouble getting out of it when you no longer have to work? That would explain my father’s apparent addiction to solitaire, which lessened over time, though he did play it until the end of his life over two decades after he’d retired. Is that – decades of work – what had bled him of his love for reading?
I don’t know.
But that won’t explain why I seemed to have lost interest in reading books and, in particular, in reading fiction. For, while I have worked off and on my entire adult life, it WAS off and on. However, I’ve also pursued my intellectual interests, off and on, and more on that off in the last decade. [I was once upon a time employed as an academic. But that was long ago, though not so far away.] I haven’t spent such a large portion of my adulthood doing someone else’s bidding, 9 to 5, five (sometimes six) days a week, 50 weeks a year. As recently as five years ago I read a fair amount and for hours at a time.
Now, as I’ve said, not so much.
But I’ve watched a lot of movies and TV, mostly at home though Netflix, at first through the mail (DVDs), more recently online streaming. There was a period of several years where I watched a lot of animation (aka cartoons), mostly but not entirely, anime.
See, I told you. Brain rot. TV’s rotted your brain.
Bullshit! My brain’s just fine.
The seductions of the visual have made you lazy. Dulled your critical capacities.
What’d I tell you?
Yet if my brain’s rotted then how is it that in the last five and a half years, since I started New Savanna, I’ve posted almost 3500 entries to the blog? To be sure, a lot of them are just photos with little or no prose. But some of them have been substantial written pieces.
It would appear that even as I’ve lost interest in reading, I’ve become more intellectually productive. Is there a causal relationship between the two? I don’t know. But you can’t write as much as I have over the past five years with a rotted brain. Regardless of exactly what I write, the activity itself demands sustained attention for several hours a day, from one day to the next. Sometimes in the middle of the night and in the early morning. Can’t control the urge to write.
Here’s the interesting thing. Look at this chart, in which I’ve plotted my monthly blog entries since I started New Savanna:
It’s a very spiky pattern, with the dips happening in the winter months. Ever since I went away to college winter has been a down time for me. Am I afflicted with seasonal affective disorder? I don’t know.
Look at the recent end of that chart (to the right). There is a dip, but it’s not nearly so deep as in previous years, and the lowest three months aren’t in the winter, but in the spring and early summer. Wrong season. The pattern seems to be different.
And yet, I’m not all that interested in reading long pieces. Not fiction, nor all that much non-fiction. I read what I need to read so that I can write what I want to write.
My best guess is that I've moved into a different phase of life, one where intellectual productivity is taking precedence over absorbing what others have written. Those seasonal down times – an informal term I prefer to the more clinical "depression" – have been when I take my mental ship into dry dock, as it were, and refit her, maybe even rebuild her, top to bottom, stem to stern. I've spent an adulthood rethinking, each year, what I'd learned in the year just past. And now I'm moving from an annual cycle of building and rebuilding to one of … one of what? Building and building and building?
Don’t really know.
It’s just started.
Monday, October 12, 2015
Adrian Villar Rojas. The Most Beautiful of All Mothers. Istanbul Biennial, 2015.
Also watch for: Mr. Rojas and Asad Raza's collaborative art project in New York in November 2015.
Feel Our Pain: Empathy and Moral Behavior
by Jalees Rehman
"It's empathy that makes us help other people. It's empathy that makes us moral." The economist Paul Zak casually makes this comment in his widely watched TED talk about the hormone oxytocin, which he dubs the "moral molecule". Zak quotes a number of behavioral studies to support his claim that oxytocin increases empathy and trust, which in turn increases moral behavior. If all humans regularly inhaled a few puffs of oxytocin through a nasal spray, we could become more compassionate and caring. It sounds too good to be true. And recent research now suggests that this overly simplistic view of oxytocin, empathy and morality is indeed too good to be true.
Many scientific studies support the idea that oxytocin is a major biological mechanism underlying the emotions of empathy and the formation of bonds between humans. However, inferring that these oxytocin effects in turn make us more moral is a much more controversial claim. In 2011, the researcher Carsten De Dreu and his colleagues at the University of Amsterdam in the Netherlands published the study Oxytocin promotes human ethnocentrism, in which indigenous Dutch male subjects, in a blinded fashion, self-administered either nasal oxytocin or a placebo spray. The subjects then answered questions and performed word association tasks after seeing photographic images of Dutch males (the "in-group") or images of Arabs and Germans (the "out-group"), chosen because prior surveys had shown that the Dutch public holds negative views of both Arabs/Muslims and Germans. To ensure that the subjects understood the distinct ethnic backgrounds of the people shown in the images, the targets were given typical Dutch male names, German names (such as Markus and Helmut) or Arab names (such as Ahmed and Youssef).
Oxytocin increased favorable views and word associations, but only towards the in-group images of fellow Dutch males. The oxytocin treatment even had the unexpected effect of worsening views of Arabs and Germans, though this latter effect was not quite statistically significant. Far from being a "moral molecule", oxytocin may actually increase ethnic bias in society because it selectively enhances certain emotional bonds. In a subsequent study, De Dreu then addressed another aspect of the purported link between oxytocin and morality by testing the honesty of subjects. The study Oxytocin promotes group-serving dishonesty showed that oxytocin increased cheating in study subjects if they were under the impression that dishonesty would benefit their group. De Dreu concluded that oxytocin does make us less selfish, but also makes us care more about the interests of the group we belong to.
These recent oxytocin studies not only question the "moral molecule" status of oxytocin but raise the even broader question of whether more empathy necessarily leads to more moral behavior, whether or not it is related to oxytocin. The researchers Jean Decety and Jason Cowell at the University of Chicago recently analyzed the scientific literature on the link between empathy and morality in their commentary Friends or Foes: Is Empathy Necessary for Moral Behavior?, and found that the relationship is far more complicated than one would surmise. Judges, police officers and doctors who exhibit great empathy by sharing in the emotional upheaval experienced by the oppressed, persecuted and severely ill always end up making the right moral choices – in Hollywood movies. But empathy in the real world is a multi-faceted phenomenon, and we use the term loosely, as Decety and Cowell point out, without clarifying which aspect of empathy we are referring to.
Decety and Cowell distinguish at least three distinct aspects of empathy:
1. Emotional sharing, which refers to how one's emotions respond to the emotions of those around us. Empathy enables us to "feel" the pain of others and this phenomenon of emotional sharing is also commonly observed in non-human animals such as birds or mice.
2. Empathic concern, which describes how we care for the welfare of others. Whereas emotional sharing refers to how we experience the emotions of others, empathic concern motivates us to take actions that will improve their welfare. As with emotional sharing, empathic concern is not only present in humans but also conserved among many non-human species and likely constitutes a major evolutionary advantage.
3. Perspective taking, which - according to Decety and Cowell - is the ability to put oneself into the mind of another and thus imagine what they might be thinking or feeling. This is a more cognitive dimension of empathy and essential for our ability to interact with fellow human beings. Even if we cannot experience the pain of others, we may still be able to understand or envision how they might be feeling. One of the key features of psychopaths is their inability to experience the emotions of others. However, this does not necessarily mean that psychopaths are unable to cognitively imagine what others are thinking. Instead of labeling psychopaths as having no empathy, it is probably more appropriate to characterize them as having a reduced capacity to share in the emotions of others while maintaining an intact capacity for perspective-taking.
In addition to the complexity of what we call "empathy", we also need to understand that empathy is usually directed towards specific individuals and groups. De Dreu's studies demonstrated that oxytocin can make us more pro-social as long as it benefits those who we feel belong to our group, but not necessarily those outside of our group. The study Do you feel my pain? Racial group membership modulates empathic neural responses by Xu and colleagues at Peking University used fMRI brain imaging in Chinese and Caucasian study subjects and measured their neural responses to watching painful images. The study subjects were shown images of either a Chinese or a Caucasian face. In the control condition, the depicted face was being poked with a cotton swab; in the pain condition, the face was being poked with a needle attached to a syringe. When the researchers measured the neural responses with fMRI, they found significant activation in the anterior cingulate cortex (ACC), a part of the neural pain circuit that is engaged both by pain we experience ourselves and by the empathic pain we experience when we see others in pain. The key finding in Xu's study was that ACC activation in response to seeing the painful image was much more profound when the study subject and the person shown in the painful image belonged to the same race.
Once we realize that the neural circuits and hormones which form the biological basis of our empathy responses are so easily swayed by group membership, it becomes apparent why increased empathy does not necessarily result in behavior consistent with moral principles. In his essay "Against Empathy", the psychologist Paul Bloom also opposes the view that empathy should form the basis of morality and that we should unquestioningly elevate empathy to a virtue for all:
"But we know that a high level of empathy does not make one a good person and that a low level does not make one a bad person. Being a good person likely is more related to distanced feelings of compassion and kindness, along with intelligence, self-control, and a sense of justice. Being a bad person has more to do with a lack of regard for others and an inability to control one's appetites."
I do not think that we can dismiss empathy as a factor in our moral decision-making. Bloom makes a good case for distanced compassion and kindness that does not arise from the more visceral emotion of empathy. But when we see fellow humans and animals in pain, our initial biological responses are guided by empathy and anger, not by the more abstract concept of distanced compassion. What we need is a better scientific and philosophical understanding of what empathy is. Empathic perspective-taking may be a far more robust and reliable guide for moral decision-making than empathic emotions. Current scientific studies often measure empathy in the aggregate, without teasing out its various components. They also tend to overlook the fact that the relative contributions of the empathy components (emotion, concern, perspective-taking) can vary widely among cultures and age groups. We need to replace overly simplistic notions such as oxytocin = moral molecule or empathy = good with a more refined view of the complex morality-empathy relationship, guided by rigorous science and philosophy.
De Dreu, C. K., Greer, L. L., Van Kleef, G. A., Shalvi, S., & Handgraaf, M. J. (2011). Oxytocin promotes human ethnocentrism. Proceedings of the National Academy of Sciences, 108(4), 1262-1266.
Decety, J., & Cowell, J. M. (2014). Friends or Foes: Is Empathy Necessary for Moral Behavior?. Perspectives on Psychological Science, 9(5), 525-537.
Shalvi, S., & De Dreu, C. K. (2014). Oxytocin promotes group-serving dishonesty. Proceedings of the National Academy of Sciences, 111(15), 5503-5507.
Xu, X., Zuo, X., Wang, X., & Han, S. (2009). Do you feel my pain? Racial group membership modulates empathic neural responses. The Journal of Neuroscience, 29(26), 8525-8529.
Vested in War
Once again, as has always been the case come fall, in September and October, all these past fifteen years and more, the so-called leaders of the world have resolved to continue to bomb and bomb and bomb in order to save humanity. That's all they've got going for themselves, bombs. In the name of terrorism.
The Pentagon and its collaborators, mainly in Hollywood, Media, Politics, Development and so forth, acting as the marketing agents for weapons manufacturers, have succeeded in only one project: the branding of a religion, and therefore of over two billion people, as something else. Enemy combatants. In the name of vests. A vest trumps drones, bombs, cruise missiles, depleted uranium ammunition, white phosphorus bombings. A vest.
This branding conversation remains this fall too, and is in fact being dialed up to maximize war profits and keep the world on the edge of suicide. As migrants fled war, donned life vests and braved the seas between Turkey and Western Europe, trying to wash up alive onto the shores of Greece and then walk further northward to refuge and away from war, and as the latest pilgrims to become victims of the Saudis died in their hundreds during the Hajj in Mecca, another set of migrants and pilgrims made their annual pilgrimage to the Mecca of diplomacy, the headquarters of the UN in New York, to agree on one thing: more war, more bombings. The solutions put forth, no matter what, revolve around religion and bombings. The Holy Roman Empire, it seems, is on the rise, still glittering in the vestments of old.
A few weeks ago, as summer waned and another crisis dawned, the Pope, like a new product launched in September, donned his vestments of holiness and, like the don of all such dons, presided over a multi-faith ceremony at the monument for the World Trade Center in New York, and earlier in Washington DC spoke to the Americans by speaking to the Congress. But never once during his speeches directed to the Americans did he use the word war or say 'Stop war now.' He spoke of peace, spoke of conflict, spoke of poverty, of excess of wealth, spoke of charity, spoke about the climate, but not once the word war. He never uttered the word war in the context of today's wars. Oh right, I stand corrected: he mentioned the Great War, as in World War, in quoting a sentence by a previous Pope. That's it. And please keep in mind that those wars aren't referred to as the Great Conflict One and the Great Conflict Two. They are called wars. That means something. That means something terrible. Words matter. They mean things. The world's most weapon-wealthy, weapon-powerful and weapon-producing nation is at War. It is not at a conflict. It is at War. Conflict suggests that those involved may be engaged in resolving conflict, that they are conflicted, that they could be peacemakers. They are not. They are warmongers, warriors, and they are involved in making War. They are vested financially, economically, industrially, psychologically, emotionally, politically, religiously in war. They are vested in war.
Is it so impractical, too unreasonable and idealistic, to ask that the word War be used for War? More unreasonable than believing in God? More unreasonable than believing in the Pope's agency? Believing in God, we have no problem with. Believing that he speaks for God, no problem. Having him address a secular institution such as the Congress, no problem. But expecting him to say "stop war", that's a problem? Stop war? For him to use the word 'War'. Because you see there are declarations and resolutions to go to war. Not conflict. There are colleges for War. And there are weapons for war. These are not labeled as colleges, weapons and soldiers for conflict. To expect the Pope to say the word War and ask him to say Stop War might be like expecting the Pope and other religious headmen to accept women as equal to men. Absurd.
The New York Times carried a front page article on Sunday, September 27, 2015, with an exposé on 'A world of smugglers, inner tubes and remote launch sites'. You would think that this was about the war industry and the sale of weapons. Or drugs. No, it is not. It is about the brisk sale of life vests at US$13 apiece and the industry of trafficking and transporting refugees fleeing war in Syria, Iraq and Afghanistan from the shores of Turkey to Greece (here). The New York Times and others like it have carried many an article these past fifteen years featuring suicide vests. It has carried countless articles justifying war and, yes, making the case for weapons of mass destruction that never were (here). And it has gotten away with it, as have all the warmongers. So of course it does not carry a headline or a front page article on the trillion dollar industry of weapons, which has used and continues to use the image of suicide belts to justify the remote launch sites for drone attacks, private armies, carpet bombings and endless war and occupation.
In the name of suicide vests, trillions of dollars' worth of a war industry. An image of a suicide vest is used over and over again to justify the carpet bombing of villages and towns, drone attacks, the use of phosphorus and uranium-laced ammunition, hundreds of thousands, over a million, people killed in the wars in Iraq, Syria and Afghanistan, the growth of military bases, the rise of private military corporations, militaries stacked with volunteer soldiers going on their third, fourth and fifth tours, surveillance, occupation and invasion and the walling in of whole peoples, all in the name of one image: suicide vests.
Life vests, bought and used by desperate, hapless, helpless innocent people, a whole people who have been characterized for years and years by the war machinery as potential suicide bombers wearing suicide vests. And what is it that they do wear? What is it that they have been forced to wear? Life vests are what they put on to escape the death and destruction of carpet bombings in their homelands, life vests to stay alive and afloat, to take their chances and risk drowning to get away from the bombings and wars in their homelands to places that are safe, to the homelands of the war manufacturers and the sources of the aerial bombings that won't stop. Aerial bombings, the manufacture of which remains unquestioned, while the pilots of the planes who drop these bombs are lionized by the New York Times with artful photographs. They are the real terrorists and murderers, using sophisticated bullets and guns, those killers lionized by Hollywood. The Generals and the CIA and Pentagon and State Department officials who order the kills, and who started this endless war, are treated as heroes with their endless statements in newspapers and then honored later with professorships at leading universities. The bombings that they have ordered, all in the name of one image of endless destruction, the image of a suicide vest. All of these careers and fortunes are made, launched, on this image of a suicide vest. Nary a story on that. No, instead it is the industry of life vests that makes headlines in the papers of the West. Not the industry of weapons and killing machines that are causing genocide.
In a world on the edge of suicide, of decreasing water, increasing drought, war, millions displaced, the powers that be still focus on lies to justify war. I watched on TV, in admiration, the theater, the spectacle, a production fit for a Broadway show in lower Manhattan, just off Broadway. The Pope holding court, clad in vestments of power, as all the world religions paid homage to him, the presiding Pope sitting as if on a throne, himself with a murky history related to Operation Condor and the US's dirty wars in South America. I watched this Pope and religion and Empire getting a makeover. This Pope, remade into charity and humanity itself, sitting on a throne at a temple on the ruins of the World Trade Center; that ground zero, in whose name a large swathe of the world has become a myriad of ground zeroes, with hundreds of thousands dead, all justified on the basis of an image of a suicide vest, an image that has made trillions of dollars for the war machine. And the chamber music that accompanied Pope Francis as he exited from this theater reminded me of an age past, still present, renewed: of Holy Wars of a thousand years ago extended forward a thousand years.
And so the New York Times writes a front page article covering the industry of life vests for refugees, as though it were sordid, instead of focusing on the industry of war, just at the moment that a life vest is thrown once again to war and religion and the manufacturing of what is good and what is evil. And yes, the lifeline thrown to fascism fifteen years ago by the branding of one religion to make the case for war is now beginning to shape up as the monster that it is. The true Monster that it is. And the war crimes of bombing people continue, because those who are bombed are, after all, enemy combatants; everyone killed by the bombs manufactured by the weapons industry is an enemy combatant, on the justification of vests, and vestments, while these vested interests keep the world on the edge of suicide.