Tuesday, November 24, 2015
On the eve of the 100th anniversary of the publication of General Relativity by Albert Einstein, Sean Carroll asks, "Einstein's legacy: if spacetime is dynamical rather than absolute, what else about the universe might be flexible?" at PBS Newshour:
Nicolaus Copernicus is famous for having suggested that the Earth moves around the sun, rather than the other way around. That’s a big deal, as it displaces the Earth from its presumed position at the center of the universe. But it’s easy for us to forget something equally amazing: the idea that the Earth can actually move at all. If anything seems like a solid foundation, it’s the Earth itself. But in our post-Copernican world, we know better.
Albert Einstein, with his general theory of relativity, took this conceptual revolution one step forward. Not only is the Earth not a fixed fulcrum around which the rest of the universe revolves, space and time themselves are not fixed and unchanging. In Einstein’s universe, space and time are absorbed into a single, four-dimensional “spacetime,” and spacetime is not solid. It twists and turns and bends in response to the motion of matter and energy. We perceive that stretching and distortion of the fabric of spacetime as the force of gravity.
The idea that space and time themselves are not immutable, but are dynamical quantities that can evolve through the history of the universe, is one of Einstein’s most dramatic legacies. It was so profound that Einstein himself had trouble accepting all the implications of the idea. When he investigated the universe as a whole in general relativity, he found that it should be expanding or contracting, not staying at a fixed size. That went contrary to his intuition, as well as to what astronomers of the time actually thought the universe was doing. When Edwin Hubble discovered the expansion of the universe in the 1920’s, Einstein realized that he had missed the opportunity to make one of the great predictions in the history of science.
The longer I looked at the painting the more I was drawn to the dialogue taking place, sotto voce, between the over-coloured count and his shadowy double hung high on the wall above him – a faintly preposterous rehash of the mirror in Las Meninas, where king and queen make their necessary appearance. The dialogue in Goya – the shadow play, the hovering between repetition and caricature – seemed to me to drain both parties (I presumed that the figure on the wall was an ancestor, or maybe the monarch himself) of reality. The second man staring at me – again, a version of a great moment in Las Meninas, where Velázquez in the background fixes his royal sitters with a predatory gaze – seemed to peer from the picture with an expression compounded of alarm, disbelief and sheer uncomfortable consciousness of his place in a game of looking. Looking and being looked at and thereby ‘brought to life’. He’d be damned if he’d occupy the place he’d been allotted. I found myself staring back at the painting in much the same frame of mind. The more I responded to Floridablanca’s local (stunning) reality effects – the silver shimmer on the count’s sash, the light through the glass on the clock face, the spectacles clutched in his fingers, the Zurbarán notebook glowing on the floor – the more it seemed to me they didn’t matter. What mattered – what made the painting Goya’s – was the pervasive unreality of the set-up, swallowing the world of objects and persons no sooner than it conjured them up.
I realise that I haven’t put my finger on what produced the feeling of unreality, and I’m not sure I can. I know there are dangers in trusting the feeling at all. Anyone looking at Goya’s portraits can’t avoid seeing them against the background of the Caprichos and Black Paintings and the unbearable private albums, some drawn, some etched and aquatinted, dwelling on torment and degeneracy.
Fourteen years after September 11, the reality-concealing rhetoric of Westernism participates in a race to extremes with its ideological twin, in an escalated dialectic of bombing from the air and slaughter on the ground. It grows more aggressive in proportion to the spread of the non-West’s chaos to the West, and also blends faster into a white supremacist hatred of immigrants, refugees, and Muslims (and, often, those who just “look” Muslim). Even more menacingly, it postpones the moment of self-reckoning and course-correction among Euro-American elites who seem to have led us, a century after the First World War, into another uncontrollable and extensive conflagration.
Among the more polished examples of their intellectual rearguardism last week was a piece in the Financial Times by the paper’s foreign-affairs columnist, Philip Stephens, titled “Paris attacks must shake Europe’s complacency. The idea that the west should shoulder blame rests on a corrosive moral relativism.”
It should be said that the Financial Times, the preferred newspaper of the Anglo-American intelligentsia as well as Davos Man and his epigones, keeps a fastidious distance, editorially, from the foam-at-the-mouth bellicosity of its direct competitor, the Wall Street Journal (whose op-ed pages often seem to be elaborating on its owner’s demented tweets).
Among the dozen useful masterpieces chosen by the US Postal Service for its 2011 series of stamps honoring American industrial design pioneers is the Normandie water pitcher (1935), a sleek chromium-plated vessel whose prow-like form echoes that of the then-new French ocean liner for which it was named. This stunning work, manufactured by the Revere Copper and Brass company, evokes the glamour of interwar transatlantic travel with a sculptural purity worthy of Brancusi, and is rightly included in numerous design history books and museum collections. Yet until now the general public has known next to nothing about its creator, Peter Muller-Munk.
That lapse is finally redressed by “Silver to Steel: The Modern Designs of Peter Muller-Munk,” an illuminating exhibition at Pittsburgh’s Carnegie Museum of Art. It vindicates the personal crusade of the Miami-based design historian Jewel Stern, an intrepid researcher of this overlooked figure and the co-curator of the show.
Danny Lewis in Smithsonian Magazine:
At first glance, Faig Ahmed’s carpets look like digital photos that didn’t load right the first time you clicked on them. Intricate patterns morph into messes of pixelation; blocks of color slide off like someone scrolled past them too fast; and some of the 2D mats look like they are bulging off a screen. But while they may appear to be software glitches or bad Photoshop editing, every one of Ahmed’s carpets is hand-woven – bugs and all.
Ahmed is an Azerbaijani artist who takes inspiration from traditional carpets made by craftsmen in his country. The artisanal carpets are made with intricate patterns and vibrant colors, both of which inspired Ahmed to begin working in textiles after years of focusing on paintings, video and installation art, Kate Sierzputowski writes for Colossal.
“Patterns and ornaments can be found in all cultures, sometimes similar, sometimes very different,” Ahmed tells Sierzputowski. “I consider them words and phrases that can be read and translated to a language we understand.”
Azerbaijani carpets are prized around the world both for their beautiful patterns and for the craftsmanship it takes to create such detailed, delicate pieces. The skills used to make the carpets are passed down through generations by family members, according to UNESCO. Traditionally, the carpets are dyed and woven in the winter by female household members, who use special techniques to create intricate designs in the fabric. Carpets are often made to celebrate special occasions like weddings, a child’s birth and religious rites. Although the carpets carry much cultural significance, Ahmed takes pleasure in pushing the craft's boundaries in his artwork.
Ken Roth in Politico:
The horrendous Paris attacks have provided certain European and U.S. politicians with an irresistible opportunity to attempt to close the door on refugees while seeking to expand overbroad government surveillance. It is important, out of principle and for our collective safety, to reject these appeals to fear and prejudice, as some political leaders have done.
A fake Syrian passport reportedly left behind by an attacker of unknown identity and nationality suggests that he entered Europe with the recent refugee flow. The presence of the passport may reflect a deliberate effort by the so-called Islamic State to stigmatize the people who dare flee its “caliphate.” It has led to a chorus of voices seeking to keep out refugees and asylum-seekers, despite the fact that the attackers identified so far have turned out to be citizens of Belgium and France.
The focus on refugees in the aftermath of the recent attacks is a dangerous distraction from Europe’s violent home-grown extremism. The roots of the problem are notoriously complex but relate in part to the social exclusion of past generations of immigrants — the persistent discrimination, hopelessness, and despair that pervades neighborhoods on the outskirts of certain cities.
The Grave, The Mine
Taking off from the city
at night, from the
airplane, I look at streetlights
below: hovering unfixed
sockets of light.
Then it is black beneath me.
A pair of headlights
veer slowly along a macadam
far from pianos and theaters.
Women are leaning back in taxis
Men stoop into taxis after them
and enter the well, the
grave, the tunnel, the mine
of fur and scent.
by Donald Hall
from Poetry Magazine
Carl Zimmer in The New York Times:
The agricultural revolution was one of the most profound events in human history, leading to the rise of modern civilization. Now, in the first study of its kind, an international team of scientists has found that after agriculture arrived in Europe 8,500 years ago, people’s DNA underwent widespread changes, altering their height, digestion, immune system and skin color. Researchers had found indirect clues of some of these alterations by studying the genomes of living Europeans. But the new study, they said, makes it possible to see the changes as they occurred over thousands of years.
...The original hunter-gatherers, descendants of people who had come from Africa, had dark skin as recently as 9,000 years ago. Farmers arriving from Anatolia were lighter, and this trait spread through Europe. Later, a new gene variant emerged that lightened European skin even more. Why? Scientists have long thought that light skin helped capture more vitamin D in sunlight at high latitudes. But early hunter-gatherers managed well with dark skin. Dr. Reich suggests that they got enough vitamin D in the meat they caught. He hypothesizes that it was the shift to agriculture, which reduced the intake of vitamin D, that may have triggered a change in skin color. The new collection of ancient DNA also allowed Dr. Reich and his colleagues to track the puzzling evolution of height in Europe. After sorting through 169 height-related genes, they found that Anatolian farmers were relatively tall, and the Yamnaya even taller.
Neil Spencer in The Guardian:
Frank Sinatra’s favourite time of day was dawn, especially the ice-blue desert dawn of Las Vegas, the signal that he had slaked his gargantuan thirst for fine music, fast company, beautiful women and booze. His customary bedtime was 7am, at which point the perpetual party he led and underwrote would evaporate. Sinatra’s need for distraction and his terror of solitude are a central theme of Sinatra: The Chairman, James Kaplan’s meticulously researched biography, this second volume marking the singer’s centenary. Kaplan takes up his story in 1954, when Sinatra’s Oscar for From Here to Eternity began “the greatest comeback in showbusiness history”, taking him from over-the-hill crooner to worldwide icon. Kaplan draws from previous biographies and the memoirs of Sinatra’s lovers and fellow travellers, but the pithy narrative is his own, as are his persuasive critiques of the music.
...For a decade or more, Sinatra ruled US showbiz like a medieval monarch, “welcoming worship and demanding fealty”. Las Vegas, then little more than a clutch of casinos stretched along a two-lane blacktop, became his favourite playground, the home of the self-styled Rat Pack led by Sinatra, Dean Martin and Sammy Davis Jr. His work schedule was unrelenting, involving up to four live shows a night and three movies a year (mostly Hollywood schlock studded with a few fine performances), quite aside from his cheesy TV programmes and carefully crafted recording sessions. His personal life was just as intense. There were endless flings with starlets and hookers, more serious affairs with Lauren Bacall and Marilyn Monroe and, later, an ill-advised, short-lived marriage to Mia Farrow, 30 years his junior. Yet Frank remained entangled with his second wife, Ava Gardner, the woman who “taught him how to sing a torch song”, as Nelson Riddle observed. “She taught him the hard way.”
Monday, November 23, 2015
It is the 100th anniversary of the publication of what many have called the most beautiful scientific theory of all time
by S. Abbas Raza
In November of 1915, Albert Einstein published what would come to be known as his theory of general relativity. Ten years ago, I wrote as simple an explanation as I could of some of the more salient aspects of that theory, here at 3 Quarks Daily. I am republishing that article below.
General Relativity, Very Plainly
[NOTE: Since I wrote and published this essay last night, I have received a private email from Sean Carroll, who is the author of an excellent book on general relativity, as well as a comment on this post from Daryl McCullough, both pointing out the same error I made: I had said, as do many physics textbooks, that special relativity applies only to unaccelerated inertial frames, while general relativity applies to accelerated frames as well. This is not really true, and I am very grateful to both of them for pointing this out. With his permission, I have added Sean's email to me as a comment to this post, and I have corrected the error by removing the offending sentences.]
In June of this year, to commemorate the 100th anniversary of the publication of Einstein's original paper on special relativity, I wrote a Monday Musing column in which I attempted to explain some of the more salient aspects of that theory. In a comment on that post, Andrew wrote: "I loved the explanation. I hope you don't wait until the anniversary of general relativity to write a short essay that will plainly explain that theory." Thanks, Andrew. The rest of you must now pay the price for Andrew's flattery: I will attempt a brief, intuitive explanation of some of the well-known results of general relativity today. Before I do that, however, a caveat: the mathematics of general relativity is very advanced and well beyond my own rather basic knowledge. Indeed, Einstein himself needed help from professional mathematicians in formulating some of it, and well after general relativity was published (in 1915) some of the greatest mathematicians of the twentieth century (such as Kurt Gödel) continued to work on its mathematics, clarifying and providing stronger foundations for it. What this means is, my explication here will essentially not be mathematical, which it was in the case of special relativity. Instead, I want to use some of the concepts I introduced in explaining special relativity, and extend some of the intuitions gathered there, just as Einstein himself did in coming up with the general theory. Though my aims are more modest this time, I strongly urge you to read and understand the column on special relativity before you read the rest of this column. The SR column can be found here.
Before anything else, I would like to just make clear some basics like what acceleration is: it is a change in velocity. What is velocity? Velocity is a vector, which means that it is a quantity that has a direction associated with it. The other thing (besides direction) that specifies a velocity is speed. I hope we all know what speed is. So, there are two ways that the velocity of an object can change: 1) change in the object's speed, and 2) change in the object's direction of motion. These are the two ways that an object can accelerate. (In math, deceleration is just negative acceleration.) This means that an object whose speed is increasing or decreasing is said to be accelerating, but so is an object traveling in a circle with constant speed, for example, because its direction (the other aspect of velocity) is changing at any given instant.
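The point that a change of direction alone counts as acceleration can be checked numerically. The sketch below (my own illustration in plain Python; the function names are not from the essay) models uniform circular motion: the speed stays constant, yet the numerically estimated acceleration is nonzero because the velocity vector keeps turning.

```python
import math

def velocity(t, radius=1.0, omega=1.0):
    """Velocity vector of a point moving in a circle at constant angular rate."""
    return (-radius * omega * math.sin(omega * t),
            radius * omega * math.cos(omega * t))

def speed(v):
    """Magnitude of a velocity vector."""
    return math.hypot(*v)

def acceleration(t, dt=1e-6):
    """Numerical acceleration: rate of change of the velocity vector."""
    vx0, vy0 = velocity(t)
    vx1, vy1 = velocity(t + dt)
    return ((vx1 - vx0) / dt, (vy1 - vy0) / dt)

# The speed is the same at every instant, yet the acceleration
# has magnitude omega^2 * radius = 1, because the direction changes.
```

For radius = omega = 1 the acceleration points toward the centre with magnitude 1, even though the speed never varies: exactly the second way a velocity can change.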
Get ready because I'm just going to give it to you straight: the fundamental insight of GR is that acceleration is indistinguishable from gravity. (Technically, this is only true locally, as physicists would say, but we won't get into that here.) Out of this amazing notion come various aspects of GR that most of us have probably heard about: that gravity bends light; that the stronger gravity is, the more time slows down; that space is curved. The rest of this essay will give somewhat simplified explanations of how this is so.
by Ali Minai
The terrible terrorist attacks by ISIS in Paris on November 13 have understandably generated a great surge of opinion and analysis – some of it insightful and some just opportunistic. It is precisely at times like these that the volume of immediate response threatens to obscure deeper issues, and for a problem as deep as the threat of jihadi extremism, this is truly dangerous. While people are still reeling from the actual attacks and decision-makers are reaching for the most obvious – and frequently bad – choices, it is critical that policy-makers move towards a more realistic understanding of the conflict they face, and not make things worse than they are. Of course, history suggests that this is likely to be a vain hope - especially since the proper course is far from clear. The motivation behind this article is not to prescribe specific actions, but to provide a general perspective that may trigger further thinking.
Following the Paris attacks, President Hollande of France declared, "France is at war!" Similar pronouncements have been made by world leaders, analysts and pundits since 9/11. Some see the conflict with jihadi terrorists as a "clash of civilizations"; others as a "battle of ideas", pitting modern liberal democracy against a regressive ideology. Yet others have declared it to be a "battle for the soul of Islam." Those wedded to conventional geopolitics see it in terms of military engagements, covert operations and counterinsurgency. There is some element of truth to all these characterizations, but only in the sense that the five blind men of India had some part of the truth about the elephant. What has remained largely unacknowledged is the terrible truth that this is the first war of its kind – a brand new thing never before seen in history, and therefore one for which there is no prior wisdom. It is the first great conflict of the age of globalization, and its phenomenology reflects that of a complex, nonlinear, self-organizing networked world. To make an imperfect analogy, it is to ordinary warfare what quantum physics is to Newtonian physics. It is a war where things don't add up normally, where distant events can be strangely entangled, where common sense may be a liability, and where the very geometry of comprehension is distorted.
With mirrors the aging face became personal.
It hung before only on the heads of others,
but with realization that the still surface
of a pond returned the image of the seer,
when polished metals revealed a clear and troubling truth,
when a silver-backed square of glass
served up serial images of hard fact so precisely
denial was impossible,
the aging face became a self portrait
in intimate time, like a film frame
on a reel of a fresh spring field
which, between glimpses,
had been raked by a ruthless gardener
determined to turn new life into that
which can only be remembered
by Carl Pierer
Among the many tools available to mathematicians attempting to prove a statement is something called "proof by contradiction" or reductio ad absurdum. The general method of the proof is a very smug one: Let the statement to be proved be Φ. The strategy, then, is to suppose that Φ is false and to consequently derive a contradiction. Now there are quite a few, infinitely many one might say, ways of going about this. This shan't be our concern. What is of interest here is the question as to what kind of contradiction forms the end of such a proof. Let us distinguish two cases:
- Internal contradiction. The proof takes the form of:
Suppose ¬Φ. Then γ.
Contradiction! ∴ Φ.
Here, we deduce an immediate consequence (γ) from the assumption that ¬Φ and then proceed by a sequence of logical steps (Δ) to show that this leads to ¬γ – a blatant contradiction. For example, let us say our statement to prove is:
Φ: √2 is not a rational number
Then, ¬Φ is "√2 is a rational number" and an immediate consequence of this would be:
γ: We can write √2 = m/n, where m, n are natural numbers such that m, n are coprime. That is, m/n is an irreducible fraction.
One standard proof then goes as follows:
1. Then, 2 = m²/n²
2. So, 2n² = m²
3. This means m² is even, and hence m is even.
4. So m can be written as m = 2k for some natural number k.
5. Thus: 2n² = (2k)²
6. Ergo, 2n² = 4k²
7. And hence: n² = 2k²
8. So n² is even, and hence n is even.
9. Then 2 divides both n and m.
10. Therefore, ¬γ.
Now, we have the desired contradiction, and therefore we conclude Φ.
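For readers who enjoy seeing such an argument corroborated by machine, here is a small Python sketch (my own illustration, not part of the proof) that searches for coprime pairs (m, n) with m² = 2n². The reductio above guarantees the search comes up empty, since any such m and n would both have to be even.

```python
from math import gcd

def rational_sqrt2_candidates(limit):
    """Search for coprime (m, n) with m*m == 2*n*n, for n up to `limit`.

    The proof by contradiction says this list must be empty: m² = 2n²
    forces m to be even, which then forces n to be even, contradicting
    gcd(m, n) == 1.
    """
    return [
        (m, n)
        for n in range(1, limit + 1)
        for m in range(1, 2 * limit + 1)  # m ≈ √2·n, so 2·limit suffices
        if gcd(m, n) == 1 and m * m == 2 * n * n
    ]
```

Of course, a finite search proves nothing on its own; the proof does the work, and the code merely fails to find a counterexample, as it must.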
- External contradiction. Here, the proofs take the following form:
Suppose ¬Φ. Then, after some sequence of logical steps: μ.
But ¬μ! Contradiction. Therefore, Φ.
Here, ¬μ is some generally accepted mathematical truth, such as 1 ≠ 2. In contrast to the internal contradiction, it need not be a statement that is a consequence of ¬Φ. ¬μ is some statement in the general corpus of mathematical truths (i.e. proven statements) and is not necessarily linked in its content to Φ.
Now, the astute reader might question the meaningfulness of the distinction between internal and external contradiction. In particular, since μ appeared in the proof of Φ, there must be a sequence of logical derivations that relate the two. So, one might ask, how much sense does it make to distinguish between a "direct" consequence of ¬Φ and one that is related by a longer sequence of logical steps? Certainly, formally speaking, such an objection is valid. But there seems to be an intuitive sense in which some consequences are more immediate than others, and this is all that is needed at this stage.
Wally Gilbert. Difference #2. 2015.
by Charlie Huenemann
Plato, as we know, told tales of an abstract realm beyond the senses, a realm beyond the dim and dark cave we call “the world.” It was a realm of forms, first glimpsed through the discipline of mathematics, and more thoroughly known through philosophical cross-examination, or dialectic. It’s not clear just how much religion there was in Plato’s own philosophy, but that philosophy certainly was enlarged into mystical proportions by the time of Plotinus (204-270 c.e.).
We can get a richer sense of this notion - that the pure intellect can grasp divinity - by exploring the life of Hypatia, a mathematician, astronomer, and philosopher who lived in the great city of Alexandria about a century after Plotinus. Hypatia was brilliant and utterly dedicated to the life of the intellect. She was famous as a philosopher and mathematician, and a school formed around her. She was also beautiful (it is said), and attracted many suitors; but she resisted them all in deference to the requirements of her philosophy. She became caught up in a power struggle between the city's governor and its Christian bishop, and met a grisly death at the hands of the bishop's supporters.
Hypatia's life and death have been refashioned many times over the centuries, usually in the attempt either to attack or to defend organized religion. Just reading a pair of book titles is enough to give the general idea. In 1720, the infamous atheist John Toland published Hypatia, or the History of a Most Beautiful, Most Virtuous, Most Learned and in Every Way Accomplished Lady; Who Was Torn to Pieces by the Clergy of Alexandria, to Gratify the Pride, Emulation, and Cruelty of the Archbishop, Commonly but Undeservedly Titled St. Cyril. (Earlier times featured the most informative book titles!) Toland's book was answered promptly in a pamphlet by Thomas Lewis, entitled The History of Hypatia, a Most Impudent School-Mistress of Alexandria: In Defense of St. Cyril and the Alexandrian Clergy from the Aspersions of Mr. Toland. I know of these titles from a more recent work with the decidedly more neutral title, Hypatia of Alexandria, by Maria Dzielska (1995). Dzielska offers an overview of the various uses, in fiction and scholarly literature alike, to which Hypatia has been put, and then delivers a very thorough account of what we can plausibly put forward as the facts of the case.
by Brooks Riley
by Mara Naselli
One evening in February 2012, I was in a Chicago noodle shop looking for a table for one. The television was on—a news report from Syria. The Syrian Army had begun its attack on Homs. The frame of the screen, jostling in the confusion, captured the faces of a woman and a boy. The woman was distraught. The boy, bewildered. I watched agape, for an instant transposing myself in the place of the woman and my own sons in the place of the boy. Children cannot take in their shattering world. The slight young man waiting tables that evening must have seen something in my expression. He changed the channel to a soccer match.
The poet and artist David Jones was just nineteen years old when he enlisted in the Royal Welsh Fusiliers as an infantryman in the British army in January 1915. Later that year, just after his twentieth birthday, he was sent to the western front, where he served until he was wounded in July 1916.
For the next two decades Jones wrote In Parenthesis, his account of his experiences in the First World War. It is not an easy read. There are many different kinds of language at work in Jones’s modern epic—not just the Welsh, English, and Cockney of the infantrymen and officers, but also the military jargon and slang, rhymes, and popular songs. There are also the well-known allusions to Arthurian legend, Shakespeare, Lewis Carroll, Hopkins, Coleridge, and others. Jones’s language—its syntax, sound, and diction—was so foreign to me, I found myself enchanted and lost. I copied out long passages, including the footnotes, to track this myriad mind at work.
A Professor To His Coy Doctoral Student
(with apologies to Andrew Marvell)
Had we but world enough, and time,
Procrastination were no crime.
We would sit down and think, and talk,
Sketch plans for drafts in yellow chalk,
Read, discuss, and once again read . . .
We'd hardly ever feel a need
To put ourselves upon the rack
And pick up pen or plug in Mac.
Thou in the library would find
Countless delights to charm thy mind.
An hundred years you there might spend
Perusing volumes without end,
Gathering insights, culling quotes,
Checking references and notes,
Rounding out your self-instruction,
Just to draft your Introduction.
Two hundred more to settle on
A good title for Chapter One;
And thirty thousand for the next
Ten pages of completed text.
An equal time I'd grant for you
Simply to outline Chapter Two.
And after that, at least an age
To bring perfection to each page;
'Til you, clearing each confusion,
Reach your breathtaking conclusion.
But looming up ahead, I fear,
The final deadline drawing near.
And after that before you lie
Deserts of aidless penury.
And then your struggle will indeed
Be hard, with nought on which to feed
Save thoughts and theories from the past.
Do you with these wish to hold fast?
Ideas may be food for thought
But you need quite another sort
Of sustenance, else hunger must
Reduce you and your dreams to dust.
The grave is not the worst of states,
But no-one from there graduates.
Now, therefore, while upon you lies
The sheen of youth; and in your eyes
A gleam of sense can be discerned,
Make use of all that you have learned!
Don't wait 'til you're beneath the net
Of unpaid bills and mounting debt,
With spouses nagging in your ear
About your lack of a career,
And kids who keep you up all night,
And pee all over what you write.
Abandon your imprudent ways!
Bring to an end your student days.
Though you may not have wisdom's keys,
At least you will pay no more fees.
And if, having fir'd your best shot,
You realize that you have not
Broken through the gates of knowledge--
You'll at least be out of college.
by Emrys Westacott
by Sue Hubbard
Until 19th October 2015, Ambika P3 Gallery, University of Westminster, London
The Belgian filmmaker and artist Chantal Akerman died suddenly on October 5. It is said to have been suicide. Maybe it was her nationality, the nature of her death or her multi-screen installations with their themes of alienation, interiority, conflict and violence that drew me, in these complex de-centred times, to write about her now. A self-imposed death, whether of an artist or a suicide bomber, is always an enigma, and the nature of her demise can't help but colour our view of her work, which seems to echo the mood of these sombre days with uncanny prescience.
Born in 1950, Akerman decided on a career as a film-maker after an adolescent viewing of Jean-Luc Godard's Pierrot Le Fou (1965). After moving to Paris she took part in the seminal events of May 1968, then in New York met the cinematographer Babette Mangolte and hung out in avant-garde circles with the likes of Jonas Mekas and Michael Snow. Most widely known as a film-maker, she made Jeanne Dielman, 23 Quai du Commerce, 1080 Bruxelles in 1975, when she was 24; it is said to have influenced film-makers from Michael Haneke to Todd Haynes. But it was to the cavernous underground industrial space of the University of Westminster's Ambika P3 gallery that I went to see what has turned out to be her swan-song exhibition. The central work, NOW, was commissioned for this year's Venice Biennale. Akerman was working with curators on the show until close to her death.
Her work requires patience, like the reading of a complex modernist poem. It unfolds slowly, so there is not an obvious sense of a coherent whole but rather images that fit together to create associations and metaphors. Maniac Summer (2009) is a disquieting piece that explores, among other things, the passing of time. A digital clock counts the seconds of each recording, evoking Heraclitean notions of being unable to step into the same river twice. Though, of course, the irony is that the technical innovation of video allows for a constant revisiting. Shot from the vantage point of her surprisingly bourgeois Parisian apartment, the camera is left unattended so we see her at her desk fiddling on her mobile phone and taking care of daily appointments, pottering around her kitchen amid normal domestic clutter, or isolated alone in dark silhouette. Outside children play in the park and the camera pans along empty streets, their pulled shutters closed like eyelids. Some of the images are manipulated, moving from colour to black and white. Shadows appear smudged on the wall like the afterglow of a nuclear holocaust. There is singing or, perhaps, chanting. Doors bang. This is the minutiae of life. Yet there's a sense that everything is vulnerable, everything transient. That all we will leave behind are traces.
Sunday, November 22, 2015
Christian Lorentzen in New York Magazine:
Like most Norwegian schoolchildren of his generation, Karl Ove Knausgaard started learning English at the age of 10. The curriculum didn’t extend to the study of literature, so he had to come to British and American authors on his own. Though he says the opposite, his English is excellent, but there were two words I used that he didn’t know: placid (crucial because he grew up in a placid country, but in a home that was anything but); and refinement (crucial because his prose is marked by its high variance of refinement, veering between the cooked and the raw). I met Knausgaard on a recent afternoon outside the offices of the New York Times, which had just published his review of Michel Houellebecq’s Submission. (Our meeting occurred before the attacks in Paris.)
We walked east a few blocks and up to 44th Street for a drink at the Blue Bar of the Algonquin Hotel. I was disappointed to learn that massive international literary celebrity is such that you can pass through Times Square without being stopped by a fan. Knausgaard stands about six-foot-six, and his hair and beard at age 46 are a touch grayer than they appear on the cover of book four of his My Struggle series — the fourth of six volumes that appeared in English translation last spring. Two nights before he had been fêted at a gala at the New York Public Library, and he would be again that night at MoMA. At the Algonquin, Knausgaard had a black coffee and a Diet Coke, and I had a bloody Mary. I’ve been told that I’m a laconic interlocutor and in this Knausgaard was more than my match; on the recording of our conversation, the long pauses are filled with Sinatra songs playing from the bar’s speakers.
Tom Simonite in MIT Technology Review:
A professor’s claim to have created an algorithm that dramatically simplifies one of theoretical computer science’s most notorious problems has experts preparing to reconsider a long-established truth of their field. It is also seen as a reminder that similar algorithmic breakthroughs are possible that could weaken the tough-to-crack problems at the heart of the cryptography protecting the world’s digital secrets.
In a packed lecture theater on Tuesday and Thursday this week, University of Chicago professor László Babai gave the first two of a series of three lectures describing his new solution to a problem called graph isomorphism. The problem asks a computer to determine whether two different graphs—in the sense of a collection of “nodes” connected into a network, like a social graph—are in fact different representations of the same thing.
Babai’s lectures have caused excitement because graph isomorphism is known as a very challenging problem believed for more than 30 years to be not too far from the very hardest class of problems for computers to solve. If Babai is right, it is in fact much closer to falling into the class of problems that can be solved efficiently by computers, a category known as P (see “What Does ‘P vs. NP’ Mean for the Rest of Us?”).
“This has caught everyone’s imagination because the improvement is so large,” says Richard Lipton, a professor who works on theoretical computer science at Georgia Institute of Technology. “He got it down to a much lower class.” After MIT associate professor Scott Aaronson heard about Babai’s claim, he blogged that it could be “the theoretical computer science result of the decade.”
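To make the problem in the excerpt concrete, here is a minimal sketch of a graph isomorphism test. It is emphatically not Babai's algorithm — just the naive approach of trying every relabeling of the nodes, which takes time that grows factorially with graph size and is exactly the inefficiency his result chips away at. The function name and edge-list representation are illustrative choices, not anything from the article.

```python
from itertools import permutations

def are_isomorphic(edges_a, edges_b, n):
    """Brute-force check: are two undirected graphs on nodes 0..n-1
    the same graph under some relabeling of the nodes?

    Exponential-time illustration only; real solvers (and Babai's
    quasipolynomial algorithm) are vastly more sophisticated.
    """
    set_a = {frozenset(e) for e in edges_a}
    set_b = {frozenset(e) for e in edges_b}
    if len(set_a) != len(set_b):
        return False  # different edge counts: cannot match
    # Try every possible relabeling of the n nodes.
    for perm in permutations(range(n)):
        mapped = {frozenset({perm[u], perm[v]}) for u, v in set_a}
        if mapped == set_b:
            return True  # this relabeling maps one graph onto the other
    return False

# Two 4-cycles with different labelings are the "same" graph:
cycle_1 = [(0, 1), (1, 2), (2, 3), (3, 0)]
cycle_2 = [(0, 2), (2, 1), (1, 3), (3, 0)]
# A path and a star on 4 nodes are not:
path = [(0, 1), (1, 2), (2, 3)]
star = [(0, 1), (0, 2), (0, 3)]
```

The loop over `permutations(range(n))` is the crux: there are n! relabelings, so this approach dies quickly as n grows, which is why a drop from quasi-exponential to quasipolynomial time is considered dramatic.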
Janine di Giovanni and Noah Goldberg in Newsweek:
At the annual Manama Dialogue in Bahrain, an elite Middle East security summit held in late October, the keynote speaker was Egyptian President Abdel-Fattah el-Sissi. Surrounded by a phalanx of bodyguards, General el-Sissi took the stage and addressed a packed audience of ministers, diplomats and senior U.S. State Department officials. He talked of Egypt’s role in the conflicts in Syria, Yemen and Libya.
But the general’s main concern—as a staunch military man who has been in the army since the age of 23—was the rise of armed factions in the Middle East. “We are concerned by the undoing of the national state and the rule of law by armed militias,” el-Sissi emphasized at the gathering, just days before a Russian passenger plane went down in the Sinai Desert in what increasingly looks like a bombing by the Islamic State militant group (ISIS).
After being lauded at Manama, where he met with the German minister of defense and other bigwigs, el-Sissi flew home to take stock of the Russian airline crisis and to prepare for an official visit to Britain, where he would meet with Prime Minister David Cameron. After a period when the West was cautious about his ascent to power, el-Sissi’s visit underlined the fact that he is firmly back at international diplomacy’s top table, just weeks after Egypt was elected to the U.N. Security Council.
More here. [Thanks to Ken Roth.]
Hari Raghavan in Avidly:
In the second or third grade, a girl in my class took me aside one day while we both washed our hands to give one of my arms a thorough inspection. I was confused, but submitted anyways. That was my way, back then, when things happened to me that I didn’t quite grasp. I’d smile benignly, I’d wait it out, I’d make light of it after. So, when she returned my arm to me with a grin, and said more to herself than me – in a tone of triumph I still remember vividly – “So it doesn’t wash off,” I didn’t think twice. Things hadn’t quite clicked yet. Hearing the story later, though, my mother, in her kind and knowing way, sighed, took my hands in hers, and asked me: “Was this girl white?” I should mention, here, that this experience wasn’t to me what it might’ve been to someone else. It didn’t shape me profoundly, it didn’t alter the course my life took (to my knowledge), it didn’t activate my political consciousness (that happened later). In fact, I didn’t even so much as bring it up with that girl again. I think instead I went to her birthday party, which had a tea party theme I hated but cupcakes I fucking ruined. I also became convinced thereafter that white people (of which there were multitudes in the Portland suburb that raised me) had it made. They had an ease to them I couldn’t find elsewhere, a cool and confidence in the way they navigated the world. They could say what they liked about race and think nothing of the attendant complications. They literally could get away with murder. I coveted the freedom they enjoyed, and the space they claimed for themselves.
I was reminded of all this anew when I recently started watching Netflix’s remarkable new series, Master of None. In its premiere episode, the show’s main character Dev (Aziz Ansari) spends a day babysitting his friends’ two white kids who couldn’t be more terrible (my opinion). They visit a frozen yogurt shop, where one of the kids promptly proceeds to point at strangers and yell out their ethnicities: “Black! Chinese man!” The scene is hilarious, obviously, but it’s strangely incisive, too, about how at ease white people – especially children! – are with classifying and claiming space for themselves by pointing fingers at others.
Ta-Nehisi Coates writes of just this phenomenon in his memoir Between the World and Me, when he thinks back to a scene he once observed in Harlem – young white parents, letting their son run free ahead of them, stomping and screaming as he pleases. Coates laments that many children of color may never know this same joy or how to claim space themselves, for the realities their parents fear; his own child was once shoved on an escalator by a white woman twice his age. It’s not that the people who live with this certainty mean to offend in the questions they ask or things they grab. It’s that they have a unique ability to claim space (or feel entitled to it) while forgetting how the people of color around them, conversing with them, are constantly compelled to shrink themselves.
It’s a delicate kind of shape-shifting that takes an eventual toll, and Aziz Ansari, first generation South Asian that he is, seems to know something of the burden in the question his show asks repeatedly: How can he (and I by extension, and others like us) carve out and claim space in a culture or establishment that doesn’t allow us the room?