Selected Minor Works: Where Movies Came From

Justin E. H. Smith

In The World Viewed, Stanley Cavell wryly comments that it was not until he reached adulthood that he learned “where movies come from.” As it happens, movies come from the same place I do: California. Now as an answer to the question of origins, this is hardly satisfying. “California,” as a one-word answer to anything, has the air of a joke about it, whereas we at least aim for earnestness. This is a problem that has vexed many who have left California and attempted to make sense of it at a distance. The turn-of-the-century Harvard philosopher Josiah Royce once declared of his home state that “there is no philosophy in California.” Yet the state’s generative power, and my attachment to it, have left me with the sense that something of philosophical interest is waiting to be said, by me if I’m lucky, if not in it, then at least about it and its exports.

My sense is that these two questions, the autobiographical and the film-historical, may be treated together. This is not because I was born into a Hollywood dynasty –far from it– but because throughout most of my life, memories were something shared, something public, something manufactured. By this I mean that, instead of memories, we had movies, and instead of conversation, we mimicked dialogue. I use the past tense here, as in the title (though there in acknowledgment also of a debt to Joan Didion), because it is already clear that movies will not be the dominant art form of the twenty-first century, and if we agree with Cavell that a movie is a sequence of automated world projections, then movies are no longer being made.

A contingent development in the history of technology left us with an art form thought by many to reveal something very significant about what we as humans are. Cavell chose to express this significance in the Heideggerian terms of film’s ‘world-disclosing power’ (did Heidegger ever even see a movie?). Already before 1920, Royce’s Harvard colleague Hugo Münsterberg had argued that the ‘photoplay’ serves as a powerful proof of Fichtean idealism: what need is there for Kant’s thing-in-itself if a ‘world’ can exist just as well projected on a screen as embodied in three dimensions?

I take it for granted that the world disclosed to us today is the same world to which human beings have had access for roughly the past hundred thousand years, that is, since we became anatomically, and thus we may presume cognitively, modern. For this reason, what interests me most about movies is the question: what is it that our experience of them replaced? We have only had them for a hundred and some odd years, not long enough for our brains to have evolved from some pre-cinematic condition into something that may be said to have an a priori grasp of what a movie is, in the same way that we now know that human brains come into the world with the concept of, for example, ‘animate being’. We are not naturally movie-viewing creatures, though it certainly feels natural, as though it were just what we’ve always done. What then is it that we’ve always done, of which movie-viewing is just the latest transformation? What is that more fundamental category of activity of which movie-viewing is a variety?

One well-known answer is that watching movies is an activity much like dreaming. This is evidenced by the numerous euphemisms we use for the motion picture industry. In his recent book, The Power of Movies: How Screen and Mind Interact, the analytic philosopher Colin McGinn explicitly maintains that the mind processes cinematic stories in a way that is similar to its processing of dreams. He even suggests that movies are 'better' than dreams to the extent that they are 'dreams rendered into art'.

But what then are dreams? To begin with, dreams are a reminder that every story we come up with to account for who we are and how we got to be that way is utterly and laughably false. Everything I tell myself, every comforting phrase so useful in waking life, breaks down and becomes a lie. For eight hours a day, it is true that I have killed someone and feel infinite remorse, that my teeth have fallen out, that I am able to fly but ashamed to let anyone know, that the airplanes I am in make slow motion, 360-degree loops, that my hair is neck-length and won’t grow any longer. None of these things is true. Yet, some mornings, for a few seconds after awakening, I grasp that they are truer than true. And then they fade, and the ordinary sense of true and false settles back in.

The images that accompany these feelings –the feeling of shame at levitating, the feeling of being in a doomed airplane—are relatively unimportant. They are afterimages, congealed out of the feelings that make the dreams what they are. As Aristotle already understood, and explained in his short treatise On Dreams, “in every case an appearance presents itself, but what appears does not in every case seem real… [D]ifferent men are subject to illusions, each according to the different emotion present in him.” Perhaps because of this feature of dreams –that they are not about the things that are seen, but rather the things that are seen are accompaniments for feelings– dreams have always been interpreted symbolically. This has been the case whether the interpreter believes that dreams foretell the future, or in contrast that they help to make sense of how the past shaped the present. Psychoanalysis has brought us around, moreover, to the idea that retrodiction is no more simple a task than oneiromancy, and that indeed the two are not so different: once you unravel the deep truth of the distant past, still echoed in dreams even if our social identities have succeeded in masking it, then by that very insight, and by it alone, you become master of your own future.

It seems to me that we don’t have an adequate way of talking about dreams. The topic is highly tabooed, and anyone who recounts his dreams to others, save for those who are most intimate, is seen as flighty and mystical. Of course, the consequence of this taboo is not that dreams are not discussed, but only that they are discussed imprecisely. For the most part, we are able to explain what happened, but not what the point-of-view of the dreamer was. This is overlooked, I suspect, because it is taken for granted that the point-of-view of the dreamer is that of a movie viewer. What people generally offer when prompted to recount a dream is a sort of plot summary: this happened, then this, then this. Naturally, the plot never makes any sense at all, and so the summary leaves one with the impression that what we are dealing with is a particularly strange film.

Certainly, there is a connection between some films –especially the 'weird' ones– and dreams, but only because the filmmakers have consciously, and in my view always unsuccessfully, set about capturing the feeling of a dream. From Un chien andalou to Eraserhead, weird things happen indeed, but the spectator remains a spectator, outside of the world projected onto the screen, looking into it. We are made to believe that our dreams are 'like' movies, but lacking plots, and then whenever an 'experimental' filmmaker attempts to go without plot, as if on cue audiences and critics announce that the film is like a dream. Middle-brow, post-literate fare such as Darren Aronofsky's tedious self-indulgences has further reduced the dreamlike effect supposedly conveyed by non-linear cinema to an echo of that adolescent 'whoah' some of us remember feeling at the Pink Floyd laser-light show down at the planetarium.

Dreams are not weird movies, even if we recognize the conventions of dreamlikeness in weird movies. Weird movies, for one thing, are watched. The dreamer, in contrast, could not be more in the world dreamt. It is the dreamer’s world. It is not a show.

However problematic the term, cinematic ‘realism’ shows us, moreover, that movies can exhibit different degrees of dreamlikeness, and thus surely that there is something wrong with the generalized movie-dream analogy. In dream sequences, we see bright colors and mist, and, as was explicitly noted by a dwarf in Living in Oblivion, we often see dwarves. When the dream sequence is over, the freaks disappear, the lighting returns to normal, and in some early color films, most notably The Wizard of Oz, we return to black-and-white, the cinematic signifier of ‘reality’. My dreams are neither like the dream sequences in movies, nor are they like the movies that contain the dream sequences. Neither Kansas nor Oz, nor limited to dwarves in the repertoire of curious sights they offer up.

A much more promising approach is to hold, with Cavell, that movies are mythological, that their characters are types rather than individuals, and that the way we experience them is probably much more like the way folk experience their tales. Movies are more like bedtime stories than dreams: more like what we cognize right before going to sleep than the mash that is made of our waking cognitions after we fall asleep.

If anything on the screen resembles dreams, it is cartoons (and thus Cavell is right to insist that these are in need of a very different sort of analysis than automated world projections). Cartoons are for the most part animistic. It is difficult to imagine a dream sequence in a Warner Brothers cartoon, since there were to begin with no regular laws of nature that might be reversed, there was no reality that might be suspended. For most of the early history of cartoons, there were no humans, but only ‘animate’ beings, such as cats and mice, as well as trees, the sun, and clouds, often given a perfunctory face just to clue us into their ontological status.

The increasing cartoonishness of movies –both the increasing reliance on computer graphics and the decreasing interest in anything resembling human beings depicted in anything resembling human situations (see, e.g., Pierce Brosnan-era James Bond for a particularly extreme example of the collapse of the film/cartoon boundary)– may be cause for concern. Mythology, and its engagement with recognizably human concerns about life and death, is, it would seem, quickly being replaced by sequences of pleasing colors and amusing sounds.

I do not mean to come across as a fogey. Unlike Adorno with his jazz problem (which is inseparable from his California problem: the state that made him regret that the Enlightenment ever took place), I am a big fan of some of the animistic infantilism I have seen on digital screens recently. Shrek and the Teletubbies are fine entertainments. I am simply noting, already for a second time, that the era of movies is waning, and that nothing has stepped in, for the moment, to do what they once did.

A video-game designer recently told me that 'gaming' is just waiting for its own Cahiers du Cinéma, and that when these come along, and games are treated with adequate theoretical sophistication not by fans but by thinkers, then games will be in a position to move into the void left by film. I have no principled reasons to be saddened by this, but they will have to do a good deal more than I've seen them doing so far. Now I have not played a video game since the days when Atari jackets were sincerely, and not ironically, sought after. But I did see some Nintendo Wii consoles on display in a mall in California when I was home for the holidays this past week. The best argument for what the crowding mall urchins were doing with those machines is the same one, and the only one, that we have been able to come up with since Pong, and the one I certainly deployed when pleading with my own parents for another few minutes in front of the screen: it seems to do something for developing motor skills. This makes video games the descendants of sporting and hunting, while what movies moved in to replace were the narrative folk arts, such as the preliterate recitations that would later be recorded as Homer's Odyssey. These are two very different pedigrees indeed, and it seems unlikely to me that the one might ever be the successor to the other.

Dreams are the processing of emotional experiences had in life, experiences of such things as hunting, or fighting, or love. Narrative arts, such as movies, are the communal processing, during waking life, of these same experiences. Movies are not like dreams, and video games are not like movies. And as for what experiences are, and why all the authentic ones seem to have already been had by the time we arrive at an age that enables us to reflect on them (seem all to have happened in California), I will leave that question to a better philosopher, and a less nostalgic one.

**

For an extensive archive of Justin Smith's writing, please visit www.jehsmith.com.



Sunday, December 31, 2006

The Lives They Lived

From The New York Times:

This issue doesn't try to be a definitive document of the lives and deaths of the most important or influential. Instead, it's largely an idiosyncratic selection, chosen by our editors and writers, who are often following their own passions and curiosities. There are some big names: the playwright Wendy Wasserstein, the photographer Gordon Parks, Betty Friedan. But there are also many minor characters — Victoria Jackson Gray Adams, who was involved in desegregating the Mississippi delegation of the Democratic Party; Rupert Pole, the other husband of Anaïs Nin; Nena O'Neill, co-author of a 1970s best seller about "open marriage." By embracing its own form of obituary, this issue tries to capture ideas and moments across the century and also to convey the richness of individual lives.

More here.

Top 10 stories of 2006

From Nature:

The stories that got the most comments from you, our readers.

The fish that crawled out of the water
Does gender matter?
Islam and science
Found: one Earth-like planet
Delusions of faith as a science
Top 5 science blogs
‘Tenth Planet’ found to be a whopper
Study challenges prayers for the sick
Tragic drug trial spotlights potent molecule
The space elevator: going down?

More here.

Saturday, December 30, 2006

inlandia

But Didion and Davis are only tourists in the “empire” of inland California, with a tourist’s ability to be both accurate and oblivious when they write about what it’s like to actually live in San Bernardino, Riverside or the badlands beyond. The road through “Inlandia” (a somewhat awkward designation for the Southern California interior) stops at other accounts of home. M.F.K. Fisher remembers Hemet in the 1940s: “There are many pockets of comfort and healing on this planet … but only once have I been able to stay as long and learn and be told as much as there on the southeast edge of the Hemet Valley.” J. Smeaton Chase wakes to a July dawn in the Mojave, circa 1920: “To lie at dawn and watch the growing glory in the east, the pure … light stealing up from below the horizon, the brightening to holy silver, the first flash of amber, then of rose, then a hot stain of crimson, and then the flash and glitter, the intolerable splendor….” Percival Everett in 2003 defines the “badlands” of the 909 area code: “Technically, the Badlands is chaparral. The hills are filled with sage, wild mustard, fiddleheads and live oaks. Bobcats, meadowlarks, geckos, horned lizards, red tailed hawks, kestrels, coach whip snakes, king snakes, gopher snakes. Rattlesnakes and coyotes. We don’t see rain for seven months of the year and when we do we often flood. In the spring, the hills are green. They are layered and gorgeous. This is in contrast to the rest of the year when the hills are brown and ochre and layered and gorgeous.”

more from the LA Times here.

hilton kramer: against academic twaddle, commercial hype or political mystification

Kramer’s most provocative judgment is to insist upon Modernism as an essential component of bourgeois culture. He admires Modernist art and has less patience for the artworks made “after” Modernism, which he tends to interpret in terms of decline or degeneration. Contemplating Matisse’s achievement, Kramer laments, “It is hard to believe that we shall ever again witness anything like it, now or in the foreseeable future.” Today, instead, we endure “the nihilist imperatives of the postmodernist scam.”

Not that Kramer hates everything that came after Matisse. Many of the items in the book, though slight and descriptive, perform a modest, useful function for newcomers to subjects including Jackson Pollock (“a triumph of ambition and short-lived inspiration over a severely handicapped and unruly personality”), Helen Frankenthaler (“a major artist”), Odd Nerdrum (“a first-rate dramatic imagination”) and Alex Katz (“one thinks of Monet at Giverny”). He also discusses Richard Diebenkorn and Christopher Wilmarth, not to mention past masters like Courbet, Bonnard, Braque and Beckmann. In all, an eclectic group, and Kramer writes interestingly and engagingly about each one.

more from the NY Times Book Review here.

Hitchens on Ford

In Slate:

One expects a certain amount of piety and hypocrisy when retired statesmen give up the ghost, but this doesn’t excuse the astonishing number of omissions and misstatements that have characterized the sickly national farewell to Gerald Ford. One could graze for hours on the great slopes of the massive obituaries and never guess that during his mercifully brief occupation of the White House, this president had:

1. Disgraced the United States in Iraq and inaugurated a long period of calamitous misjudgment of that country.

2. Colluded with the Indonesian dictatorship in a gross violation of international law that led to a near-genocide in East Timor.

3. Delivered a resounding snub to Aleksandr Solzhenitsyn at the time when the Soviet dissident movement was in the greatest need of solidarity.

Instead, there was endless talk about "healing," and of the "courage" that it had taken for Ford to excuse his former boss from the consequences of his law-breaking. You may choose, if you wish, to parrot the line that Watergate was a "long national nightmare," but some of us found it rather exhilarating to see a criminal president successfully investigated and exposed and discredited. And we do not think it in the least bit nightmarish that the Constitution says that such a man is not above the law. Ford's ignominious pardon of this felonious thug meant, first, that only the lesser fry had to go to jail. It meant, second, that we still do not even know why the burglars were originally sent into the offices of the Democratic National Committee. In this respect, the famous pardon is not unlike the Warren Commission (of which Ford was also a member): another establishment exercise in damage control and pseudo-reassurance that actually raised more questions than it answered. The fact is that serious trials and fearless investigations often are the cause of great division, and rightly so. But by the standards of "healing" celebrated this week, one could argue that O.J. Simpson should have been spared indictment lest the vexing questions of race be unleashed to trouble us again, or that the Tower Commission did us all a favor by trying to bury the implications of the Iran-Contra scandal. Fine, if you don't mind living in a banana republic.

Why military honor matters

Elaine Scarry in the Boston Review:

In 1998, an article by Colonel Charles J. Dunlap Jr. appeared in the United States Air Force Academy's Journal of Legal Studies warning that a new form of warfare lay ahead. Because our military resources are so far beyond those of any other country, Dunlap argued, no society can today meet us through symmetrical warfare. Therefore, our 21st-century opponents will stop confronting us with weapons and rules that are the mirror counterparts of our own. They will instead use asymmetrical or "neo-absolutist" forms of warfare, resorting to unconventional weapons and to procedures forbidden by international laws.

What Dunlap meant by “unconventional weapons” is clear: the category would include not only outlawed biological, chemical, and nuclear weapons (the last of which, in the view of the United States, only itself and a small number of other countries are legally permitted to have) but also unexpected weapons such as civilian passenger planes loaded with fuel and flown into towering buildings in densely populated cities.

But the term “neo-absolutism,” as used by Dunlap, applies not just to the use of unconventional weapons but to conduct that violates a sacrosanct set of rules—acts that are categorically prohibited by international law and by the regulations of the United States Air Force, Navy, and Army (along with the military forces of many other nations). For example, though warfare permits many forms of ruse and deception, it never permits the false use of a white flag of truce or a red cross.

More here.

BEWARE THE ONLINE COLLECTIVE

Jaron Lanier at Edge.org:

It's funny being an "old timer" in the world of the Internet. About six years ago, when I was 40 years old, a Stanford freshman said to me, "Wow Jaron Lanier—you're still alive?" If there's any use to sticking around for the long haul — as computers get so much more powerful that every couple of years our assumptions about what they can do have to be replaced — it might be in noticing patterns or principles that may not be so apparent to the latest hundred million kids who have just arrived online.

There’s one observation of mine, about a potential danger, that has caused quite a ruckus in the last half-year. I wrote about it initially in an essay called “Digital Maoism.”

Here’s the idea in a nutshell: Let’s start with an observation about the whole of human history, predating computers. People have often been willing to give up personal identity and join into a collective. Historically, that propensity has usually been very bad news. Collectives tend to be mean, to designate official enemies, to be violent, and to discourage creative, rigorous thought. Fascists, communists, religious cults, criminal “families” — there has been no end to the varieties of human collectives, but it seems to me that these examples have quite a lot in common. I wonder if some aspect of human nature evolved in the context of competing packs. We might be genetically wired to be vulnerable to the lure of the mob.

More here.

Turbulent year for books

Josh Getlin in the Los Angeles Times:

It started off with bestselling author James Frey admitting his memoir, “A Million Little Pieces,” was in fact a work of fiction, and ended with celebrity publisher Judith Regan getting fired for allegedly making anti-Semitic comments after her proposed O.J. Simpson confessional book-TV deal got shot down.

In between came charges that 19-year-old Harvard novelist Kaavya Viswanathan had lifted passages from a rival chick-lit author, and hotly disputed allegations that Ian McEwan, one of the most respected names in modern literary fiction, may have been guilty of plagiarism.

More here.

Iminngernaveersaartunngortussaavunga

John McWhorter in the New York Sun:

In the rush of the holiday season you may have missed that a white buffalo was born at a small zoo in Pennsylvania. Only one in 10 million buffalo is born white, and local Native Americans gave him a name in the Lenape language: kenahkihinen, which means “watch over us.”

They found that in a book, however. No one has actually spoken Lenape for a very long time. It was once the language of what is now known as the tristate area, but its speakers gradually switched to English, as happened to the vast majority of the hundreds of languages Native Americans once spoke in North America.

The death of languages is typically described in a rueful tone. There are a number of books treating the death of languages as a crisis equal to endangered species and global warming. However, I’m not sure it’s the crisis we are taught that it is.

There is a part of me, as a linguist, that does see something sad in the death of so many languages. It is happening faster than ever: It has been said that a hundred years from now 90% of the current 6,000 languages will be gone.

Each extinction means that a fascinating way of putting words together is no longer alive. In, for example, Inuktitut Eskimo, which, by the way, is not dying, “I should try not to become an alcoholic” is one word: Iminngernaveersaartunngortussaavunga.

More here.

Some Kurdish Reactions to Saddam's Execution

Judging from some initial editorials, many Kurds seem none too pleased with Saddam's execution. Amin Matin in Kurdish Media:

Iraq's highest court upheld Saddam Hussein's death sentence for the killing of nearly 150 Shiite Arabs, paving the way for the former dictator to be hanged within 30 days. The execution order still needs to be approved by the office of the Iraqi president, Mr. Jalal Talabani.

Saddam is also on trial for crimes against humanity and genocide that he and his regime committed in Southern Kurdistan. These atrocities resulted in the killing of over 200,000 civilian Kurds and were part of a final solution code-named Anfal that also included the use of weapons of mass destruction such as chemical bombs. Executing Saddam prior to concluding the current trial will deny justice to Kurdish victims and strip Kurds of the possibility of serving justice to Anfal survivors. Proving the case for Kurdish genocide has enormous value for Iraqi Kurds and the Kurdish nation. There are still people in Iraq and the Arab world who deny the systematic genocide against the Kurds. Saddam's trial for crimes against humanity and genocide committed against the Kurdish nation is a rare opportunity for Kurds to validate the depth and scope of the atrocities committed against the Kurdish nation in an Iraqi court of law.

Some responses by Kurdish Media readers can be found here.

Joy of Capture Muted at the End

From The New York Times:

CRAWFORD, Tex., Dec. 29 — The capture of Saddam Hussein three years ago was a jubilant moment for the White House, hailed by President Bush in a televised address from the Cabinet Room. The execution of Mr. Hussein, though, seemed hardly to inspire the same sentiment.

Before the hanging was carried out in Baghdad, Mr. Bush went to sleep here at his ranch and was not roused when the news came. In a statement written in advance, the president said the execution would not end the violence in Iraq.

After Mr. Hussein was arrested Dec. 13, 2003, he gradually faded from view, save for his courtroom outbursts and writings from prison. The growing chaos and violence in Iraq has steadily overshadowed the torturous rule of Mr. Hussein, who for more than two decades held a unique place in the politics and psyche of the United States, a symbol of the manifestation of evil in the Middle East.

Now, what could have been a triumphal bookend to the American invasion of Iraq has instead been dampened by the grim reality of conditions on the ground there.

More here.

Long Walk to Freedom

From The Washington Post:

"A leader is like a shepherd," Nelson Mandela proclaimed more than a decade ago in his autobiography. "There are times when a leader must move out ahead of his flock, go off in a new direction, confident that he is leading his people in the right way."

It’s an arrogant statement — could any other democratically elected politician get away with equating his constituents with sheep? — and yet supremely apt. For Mandela is arguably the greatest political leader of our time, the one person worthy of mention alongside FDR, Churchill and Gandhi. Mandela led the political and moral crusade for majority rule in South Africa against a white supremacist police state, risking his life, surrendering his personal freedom and his family’s well-being. He spent 27 years in prison only to emerge as a wise, dynamic and conciliatory figure binding black and white together as father of his nation and inspiration for the world.

The danger, of course, is that in extolling Mandela’s virtues, it’s all too easy to turn him into a saint — worshipped and untouchable and therefore of no practical value as a guide for our own behavior — and to lose track of the flawed, flesh-and-blood human being whom we can learn from and seek to emulate. As George Orwell once warned, “Saints should always be judged guilty until they are proved innocent.”

More here.

Friday, December 29, 2006

Genghis Khan: Law and order

Jack Weatherford in the Los Angeles Times:

Genghis Khan recognized that victory came by conquering people, not land or cities. In contrast to the Americans in 2003, who sought to take the largest cities first in a campaign of shock and awe, the Mongols in 1258 took the smallest settlements first, gradually working toward the capital. Both the Mongols and the Americans used heavy bombardment to topple Baghdad, but whereas the Americans rushed into the capital in a triumphant victory celebration, the Mongols wisely decided not to enter the defeated — but still dangerous — city. They ordered the residents to evacuate, and then they sent in Christian and Muslim allies, who seethed with a variety of resentments against the caliph, to expunge any pockets of resistance and secure the capital. The Americans ended up as occupiers; the Mongols pulled strings, watching from camps in the countryside.

The Mongols also immediately executed the caliph and his sons on charges that they spent too much money on their palaces and not enough defending their nation. They killed most members of the court and administration. The Mongols took no prisoners and allowed no torture, but they executed swiftly and efficiently, including the soldiers of the defeated army who, they believed, would be a constant source of future problems if allowed to live. The first several months of a Mongol invasion were bloody, but once the takeover ended, the bloodshed ended.

By contrast, the American military campaign was quick, with comparatively few Iraqi (or coalition) casualties, but the bloodshed has continued for years. Constrained from decisively dispatching enemies of a new Iraq, the United States has allowed Iraqi terrorists to select who lives and who dies, including women and children, in a slow-motion massacre.

More here.

Jolly Old London, but Definitely Not Prim and Proper

From The New York Times:

Laughter may be universal, but what provokes it is not. Even within a culture, humor can change drastically over a relatively short period. This truth is abundantly documented in “City of Laughter,” Vic Gatrell’s study of comic prints produced in London during the late 18th and early 19th centuries, a period he deems the golden age of satire.

The humor on display in the prints of James Gillray, Thomas Rowlandson, and George Cruikshank — the big three in Mr. Gatrell’s pantheon — was often coarse, bawdy, scatological and obscene. Private parts were on graphic display. Chamber pots and their contents stood front and center. Prostitutes cavorted with princes. Everything that the readers of Jane Austen regarded as private or shameful was shown in living color, on large, beautifully printed sheets hung in the windows of dealers for all London to see, and to laugh at.

More here.

abstractions

Probably the central dispute about abstract art in the 20th century hinged on the ostensible spiritual content or impact of the work. Some, like Barnett Newman, insisted that his paintings were “religious art which through symbols will catch the basic truth of life.” Others were profoundly superficial materialists like Frank Stella, who famously opined, “What you see is what you see.” While I have never found Newman’s paintings very convincing arguments, the same cannot be said for the work of Mark Rothko, whose shimmering veils of color can — under the right conditions — produce something resembling an out-of-body experience.

more from the LA Weekly here.

Money, like virtue, is as it does.

Mutual intoxications of art and money come and go. I've witnessed two previous booms and their respective busts: the Pop nineteen-sixties, which collapsed in the long recession of the seventies, and the neo-expressionist eighties, whose prosperity plummeted, anvil fashion, in 1989. In each instance, overnight sensations foundered and a generation of aspiring tyros was more or less extirpated. (They were out of style before the market revived.) But tough economic times nudge artists into ad-hoc communities and foster what-the-hell experimentation. The seventies gave rise to gritty conceptual maneuvers, supported by government and foundation grants, nonprofit institutions, and a few heroically, or masochistically, committed collectors. The nineties were dominated by festivalism: theatrical, often politically attitudinizing installations that were made to order for a spreading circuit of international shows and contemporary museums and Kunsthallen. I disliked the nineties. I knew what all the righteously posturing art was for, but not whom it was for. It invoked a mythical audience, whose supposed assumptions were supposedly challenged. I missed the erotic clarity of commerce—I give you this, you give me that—and was glad when creative spunk started leaching back into unashamedly pleasurable forms. Then came this art-industrial frenzy, which turns mere art lovers into gawking street urchins. Drat.

more from The New Yorker here.

Tillim wins first Gardner Fellowship

From The Harvard Gazette:

As a young photojournalist in South Africa in the 1980s, Guy Tillim found that photography could be a way of bridging the racial gap that apartheid had imposed on his society. "A camera was the perfect tool to cross those boundaries, to see what was going on in my own country." Working for both local and foreign media, Tillim produced a powerful body of work and won a number of important awards for his documentation of social conflict and inequality in the countries of Africa. He has exhibited his photos in more than a dozen countries and has published in numerous volumes and journals.

Tillim’s powerful images and his commitment to using photography as a way of exploring the human condition so impressed members of a search committee representing the Peabody Museum of Archaeology and Ethnology that they chose him as the first recipient of the Robert Gardner Fellowship in Photography.

More here.

Attuned to chemistry of a genius

Eric Berger in the Houston Chronicle (via Accidental Blogger):

A starter violin costs about $200. A finely crafted modern instrument can run as much as $20,000. But even that's loose change when compared with a violin made three centuries ago by Antonio Stradivari.

His 600 or so surviving violins can cost upward of $3.5 million.

For more than a century, artists, craftsmen and scientists have sought the secret to the prized instruments’ distinct sound. Dozens have claimed to have solved the mystery, but none has been proved right.

Now, a Texas biochemist, Joseph Nagyvary [in photo above], says he has scientific proof the long-sought secret is chemistry, not craftsmanship. Specifically, he says, Stradivari treated his violins with chemicals to protect them from wood-eating worms common in northern Italy. Unknowingly, Nagyvary says, the master craftsman gave his violins a chemical noise filter that provided a unique, pleasing sound.

More here.  [Thanks to Ruchira Paul.]

Bill Gates: A Robot in Every Home

William Henry Gates in Scientific American:

Imagine being present at the birth of a new industry. It is an industry based on groundbreaking new technologies, wherein a handful of well-established corporations sell highly specialized devices for business use and a fast-growing number of start-up companies produce innovative toys, gadgets for hobbyists and other interesting niche products. But it is also a highly fragmented industry with few common standards or platforms. Projects are complex, progress is slow, and practical applications are relatively rare. In fact, for all the excitement and promise, no one can say with any certainty when–or even if–this industry will achieve critical mass. If it does, though, it may well change the world.

Of course, the paragraph above could be a description of the computer industry during the mid-1970s, around the time that Paul Allen and I launched Microsoft. Back then, big, expensive mainframe computers ran the back-office operations for major companies, governmental departments and other institutions. Researchers at leading universities and industrial laboratories were creating the basic building blocks that would make the information age possible. Intel had just introduced the 8080 microprocessor, and Atari was selling the popular electronic game Pong. At homegrown computer clubs, enthusiasts struggled to figure out exactly what this new technology was good for.

But what I really have in mind is something much more contemporary: the emergence of the robotics industry, which is developing in much the same way that the computer business did 30 years ago.

More here.