Tech billionaire Elon Musk is announcing a new venture called Neuralink focused on linking brains to computers. The company plans to develop brain implants that can treat neural disorders—and that may one day be powerful enough to put humanity on a more even footing with possible future superintelligent computers, according to a Wall Street Journal report citing unnamed sources. Musk, a founder of both the electric-car company Tesla Motors and the private space-exploration firm SpaceX, has become an outspoken doomsayer about the threat artificial intelligence might one day pose to the human race. Continued growth in AI cognitive capabilities, he and like-minded critics suggest, could lead to machines that can outthink and outmaneuver humans with whom they might have little in common. In a tweet Tuesday, Musk gave few details beyond confirming Neuralink's name and tersely noting the "existential risk" of failing to pursue direct brain-interface work.
STIMULATING THE BRAIN
Some neuroscientists and futurists, however, caution against making overly broad claims for neural interfaces. Hooking a brain up directly to electronics is itself not new. Doctors implant electrodes in brains to deliver stimulation for treating such conditions as Parkinson's disease, epilepsy and chronic pain. In experiments, implanted sensors have let paralyzed people use brain signals to operate computers and move robotic arms. Last year, researchers reported that a man regained some movement in his own hand with a brain implant. Musk's proposal goes beyond this. Although nothing has been developed yet, the company wants to build on those existing medical treatments as well as one day work on surgeries that could improve cognitive functioning, according to the Journal article. Neuralink is not the only company working on artificial intelligence for the brain. Entrepreneur Bryan Johnson, who sold his previous payments startup Braintree to PayPal for $800 million, last year started Kernel, a company working on "advanced neural interfaces" to treat disease and extend cognition.
The author of the surprise bestseller How to Talk About Books You Haven’t Read, Pierre Bayard, is a standard-bearer for today’s highbrow aliterates. Bayard is a college professor of French literature, a position that paradoxically leaves him with “no way to avoid commenting on books that most of the time I haven’t even opened” (or, for that matter, has never had any desire to open). And this is nothing he feels any shame or anxiety about. Not reading, Bayard believes, is in many cases preferable to reading and may allow for a superior form of literary criticism—one that is more creative and doesn’t run the risk of getting lost in all the messy details of a text. Actual books are thus “rendered hypothetical,” replaced by virtual books in phantom libraries that represent an inner, fantasy scriptorium or shared social consciousness.
Assuming that Bayard’s tongue isn’t stuck too far in his cheek, one can interpret his reasoning as an argument that not reading books can be a cultured activity in itself, a way of expressing one’s faith in and affection for literature. More often, however, top-down aliteracy only expresses weariness, cynicism, and even contempt for the written word.
My first exposure to this type of thinking came, naturally enough, while studying English literature in university. Academics, for no good reason whatsoever, are expected to publish a great deal of stuff that nobody—and I mean nobody—reads.
Early in her novel Wise Blood, Flannery O’Connor describes protagonist Hazel Motes, leader of the Church without Christ, by the silhouette he casts on the sidewalk. “Haze’s shadow,” she writes, “was now behind him and now before him.” It’s a strange way to situate a character — skulking between his shadows — but it’s not unprecedented. In The Waste Land, T.S. Eliot’s narrator refers to “Your shadow at morning striding behind you/Or your shadow at evening rising to meet you.” Coincidence? Nobody can say for certain. But in the rare case of a critic linking O’Connor and Eliot, Sally Fitzgerald (O’Connor’s close friend) wrote that “it was Eliot and his Waste Land who provided for her the first impetus to write such a book as Wise Blood.”
Harold Bloom, the literary critic who thrives on making such connections, famously argued that great writers, burdened by what he called the “anxiety of influence,” subconsciously misread established literary giants to achieve originality. But in this case, O’Connor is not misreading Eliot. She’s answering him. The Waste Land delivers a darkly poetic proposition. Every line relentlessly reiterates the theme that, in the wake of World War One, hope had been leached from life. Existence, in the poem’s assessment, culminates in a word one rueful lover repeats in The Waste Land’s second section: “Nothing . . . Nothing . . . nothing . . . nothing . . . Nothing.”
The revival of interest in the conflicts and the violence that mark American history proved enormously fruitful. In 1969, in a beautiful book that was his final reckoning with The Progressive Historians, Hofstadter himself acknowledged the limitations of the consensus approach, singling out the Civil War as a historic convulsion that scarcely exemplified the pragmatic genius of American politics. In some ways this was not surprising. Hofstadter had been influenced by Marxism when he was young, and he was one of the first historians to blow the whistle on U. B. Phillips’ romanticized histories of slavery. Nor should it surprise us that in the 1960s Marxism became the most effective means by which historians recovered the fundamental issues at stake in the Civil War—although it was a Marxism that accepted the structural foundations of the conflict between the North and the South but went on to examine the political and ideological manifestations of that conflict.
I think of Judith Stein’s work as having emerged from that same intellectual ferment: attentive to class divisions, but always sensitive to the unpredictable ways class conflict has played out in American politics. It’s that sensitivity to the particularities of time and place that has repeatedly sent Judith off to the archives and makes her such an industrious researcher. She had a set of priorities but no predetermined answer. Who knew, for example, that it was the foreign policy apparatus that prevented the federal government from protecting American workers from unfair trade practices during the 1970s?
Interviews by Ben Beaumont-Thomas in The Guardian:
Vincent Connare, typographer
I was working for Microsoft’s typography team, which had a lot of dealings with people from applications like Publisher, Creative Writer and Encarta. They wanted all kinds of fonts – a lot of them strange and childlike. One program was called Microsoft Bob, which was designed to make computers more accessible to children. I booted it up and out walked this cartoon dog, talking with a speech bubble in Times New Roman. Dogs don’t talk in Times New Roman! Conceptually, it made no sense.
So I had an idea to make a comic-style text and started looking at Watchmen and Dark Knight Returns, graphic novels where the hand lettering was like a typeface. I could have scanned it in and copied the lettering, but that was unethical. Instead, I looked at various letters and tried to mimic them on screen. There were no sketches or studies – it was just me drawing with a mouse, deleting whatever was wrong.
I didn’t have to make straight lines, I didn’t have to make things look right, and that’s what I found fun. I was breaking the typography rules. My boss Robert Norton, whose mother Mary Norton wrote The Borrowers, said the “p” and “q” should mirror each other perfectly. I said: “No, it’s supposed to be wrong!” There were a lot of problems like that at Microsoft, a lot of fights, though not physical ones.
As he was brushing his teeth on the morning of July 17, 2014, Thomas Royen, a little-known retired German statistician, suddenly lit upon the proof of a famous conjecture at the intersection of geometry, probability theory and statistics that had eluded top experts for decades.
Known as the Gaussian correlation inequality (GCI), the conjecture originated in the 1950s, was posed in its most elegant form in 1972 and has held mathematicians in its thrall ever since. “I know of people who worked on it for 40 years,” said Donald Richards, a statistician at Pennsylvania State University. “I myself worked on it for 30 years.”
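For reference, the inequality itself can be stated compactly. The following is the standard formulation from the mathematical literature, supplied here as background rather than quoted from the article:

```latex
% Gaussian correlation inequality: for any centered Gaussian measure \mu
% on \mathbb{R}^n and any two convex sets K, L symmetric about the origin,
\mu(K \cap L) \;\geq\; \mu(K)\,\mu(L)
```

In words: for symmetric convex regions, the Gaussian probability of landing in both is at least the product of the probabilities of landing in each, so the two events are never negatively correlated.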
Royen hadn’t given the Gaussian correlation inequality much thought before the “raw idea” for how to prove it came to him over the bathroom sink. Formerly an employee of a pharmaceutical company, he had moved on to a small technical university in Bingen, Germany, in 1985 in order to have more time to improve the statistical formulas that he and other industry statisticians used to make sense of drug-trial data. In July 2014, still at work on his formulas as a 67-year-old retiree, Royen found that the GCI could be extended into a statement about statistical distributions he had long specialized in. On the morning of the 17th, he saw how to calculate a key derivative for this extended GCI that unlocked the proof. “The evening of this day, my first draft of the proof was written,” he said.
To paraphrase Mark Twain, an infamous book is one that people castigate but do not read. Perhaps no modern work better fits this description than The Bell Curve by political scientist Charles Murray and the late psychologist Richard J. Herrnstein. Published in 1994, the book is a sprawling (872 pages) but surprisingly entertaining analysis of the increasing importance of cognitive ability in the United States. It also included two chapters that addressed well-known racial differences in IQ scores (chapters 13-14). After a few cautious and thoughtful reviews, the book was excoriated by academics and popular science writers alike. A kind of grotesque mythology grew around it. It was depicted as a tome of racial antipathy; a thinly veiled expression of its authors’ bigotry; an epic scientific fraud, full of slipshod scholarship and outright lies. As hostile reviews piled up, the real Bell Curve, a sober and judiciously argued book, was eclipsed by a fictitious alternative. This fictitious Bell Curve still inspires enmity; and its surviving co-author is still caricatured as a racist, a classist, an elitist, and a white nationalist.
It is easy to dismiss this outburst as an ill-informed spasm of overzealous college students, but their ignorance of The Bell Curve and its author is widely shared among social scientists, journalists, and the intelligentsia more broadly. Even media outlets that later lamented the Middlebury debacle had published – and continue to publish – opinion pieces that promoted the fictitious Bell Curve, a pseudoscientific manifesto of bigotry.
We finished clearing the last
Section of trail by noon,
High on the ridge-side
Two thousand feet above the creek
Reached the pass, went on
Beyond the white pine groves,
Granite shoulders, to a small
Green meadow watered by the snow,
Edged with Aspen—sun
Straight high and blazing
But the air was cool.
Ate a cold fried trout in the
Trembling shadows. I spied
A glitter, and found a flake
Black volcanic glass—obsidian—
By a flower. Hands and knees
Pushing the Bear grass, thousands
Of arrowhead leavings over a
Hundred yards. Not one good
Head, just razor flakes
On a hill snowed all but summer,
A land of fat summer deer,
They came to camp. On their
Own trails. I followed my own
Trail here. Picked up the cold-drill,
Pick, singlejack, and sack
Of dynamite.
Ten thousand years.
by Gary Snyder, from Riprap and Cold Mountain Poems (Shoemaker & Hoard).
Oncologists know that men are more prone to cancer than women; one in two men will develop some form of the disease in a lifetime, compared with one in three women. But until recently, scientists have been unable to pinpoint why. In the past, they theorized that men were more likely than women to encounter carcinogens through factors such as cigarette smoking and factory work. Yet the ratio of men with cancer to women with cancer remained largely unchanged across time, even as women began to smoke and enter the workforce in greater numbers. Pediatric cancer specialists also noted a similar “male bias to cancer” among babies and very young children with leukemia. “It’s not simply exposures over a lifetime,” explains Andrew Lane, assistant professor of medicine and a researcher at the Dana-Farber Cancer Institute. “It’s something intrinsic in the male and female system.” Now, discoveries by Lane and the Broad Institute of Harvard and MIT reveal that genetic differences between males and females may account for some of the imbalance. A physician-researcher who studies the genetics of leukemia and potential treatments, Lane says that he and others noted that men with certain types of leukemia often possess mutations on genes located on the X chromosome. These mutations damage tumor-suppressor genes, which normally halt the rampant cell division that triggers cancer.
Lane initially reasoned that females, who have two X chromosomes, would be less prone to these cancers because they have two copies of each tumor suppressor gene. In contrast, men have an X and a Y chromosome—or just one copy of the protective genes, which could be “taken out” by mutation. But the problem with that hypothesis, Lane says, was a “fascinating phenomenon from basic undergraduate biology called X-inactivation.” In a female embryo, he explains, cells randomly inactivate one of the two X chromosomes. “When a female cell divides, it remembers which X chromosome is shut down, and it keeps it shut down for all of its progeny.” If female cells have only one X chromosome working at a time, then they should be just as likely as male cells to experience cancer-causing gene mutations. So Lane and his team dug deeper into existing studies and encountered a little-known and surprising finding: “There are about 800 genes on the X chromosome,” he says, “and for reasons that are still unclear, about 50 genes on that inactive X chromosome stay on.” In a “big Aha! moment,” Lane’s group realized that those gene mutations common in men with leukemia were located on genes that continue to function on women’s inactive chromosome. The researchers dubbed those genes EXITS for “Escape from X-Inactivation Tumor Suppressors.” Women, Lane explains, thus have some relative protection against cells becoming cancerous because they, unlike men, do have two copies of these tumor-suppressor genes functioning at all times.
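The logic of the EXITS finding can be sketched as a toy probability model. Assume, purely for illustration (the numbers below are invented, not from the article or the study), that each functional copy of a tumor-suppressor gene is independently knocked out with probability p; a cell loses protection only when every functional copy is hit:

```python
# Toy model of the EXITS argument: for genes that escape X-inactivation,
# female cells keep two working copies while male cells have only one.
p = 0.01  # illustrative per-copy knockout probability (assumed, not reported)

male_loss = p          # one functional copy: a single hit suffices
female_loss = p ** 2   # two functional copies: both must be hit

print(male_loss, female_loss, male_loss / female_loss)
```

With these made-up numbers, male cells lose protection a hundred times more often; the point is the qualitative shape of the asymmetry the researchers describe, not a quantitative claim about cancer risk.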
Growing human tissue is a huge challenge for researchers, even on a small scale. But some ultra-creative scientists hit on a potential solution last week when they flushed out a plant's cells and injected human cells in their place. That was how they got heart cells to beat on a spinach leaf. A major issue in tissue regeneration is creating a vascular system that ensures blood can flow to the tissue and deliver all-important oxygen and nutrients to keep the tissue alive and growing. Current techniques, including 3D printing, innovative as they are, can't yet create the blood vessels and tinier capillaries needed in a circulatory system. But guess what's abundant and already has lots of veins? Plants, that's what. Researchers from Worcester Polytechnic Institute in Massachusetts, Arkansas State University-Jonesboro, and the University of Wisconsin-Madison hope to use plants as "scaffolds" to grow human tissue. For a proof-of-concept experiment, which will be published in the May issue of Biomaterials, WPI biomedical engineering graduate student Joshua Gershlak cleared out spinach leaves' plant cells by flushing a detergent solution through the stem.
…Down the line, researchers may be able to use this technique on multiple spinach leaves to create heart tissue, which could be grafted onto the hearts of people who've had heart attacks. (Parts of survivors' hearts have died from a lack of blood flow and no longer contract properly; other researchers are looking into using stem cells to repair this tissue.) While this is all super cool and exciting, we're many years away from any salad-based heart patches. The team was able to flush the cells out of other plants including parsley, peanut hairy roots, and sweet wormwood, and they think the technique could be adapted to work with other plants that would be a good match to grow certain types of human cells. They wrote:
"The spinach leaf might be better suited for a highly-vascularized tissue, like cardiac tissue, whereas the cylindrical hollow structure of the stem of Impatiens capensis (jewelweed) might better suit an arterial graft. Conversely, the vascular columns of wood might be useful in bone engineering due to their relative strength and geometries."
This is far from the only lab looking to the plant world for body parts: One Canadian researcher is working on making ears out of apples. The phrase "you are what you eat" suddenly takes on a whole new meaning, doesn't it?
Muneeza Shamsie reviews Only the Longest Threads by Tasneem Zehra Husain in Newsweek Pakistan:
Her novel is framed by the growing friendship between Sara Byrne, a theoretical physicist, and Leonardo Santorini, a science journalist. They are both in Geneva on July 4, 2012, among an expectant and excited crowd, to witness a historic event: proof of the Higgs boson’s existence. This elusive subatomic particle, so crucial to the understanding of the universe and its building blocks, is revealed onscreen in an auditorium and becomes reality when the underground Large Hadron Collider creates such a high-speed collision of protons that it releases energy and short-lived particles, akin to the Big Bang—the birth of the universe.
Sara, heady from the jubilation of the moment, encourages Leo to move beyond the immediacy of journalism to the imaginative realms of fiction. He wants to recreate those moments of intensity and joy which impelled scientists in their search for answers. Sara says, “Theoretical physics is largely a private matter, a life lived out in the mind.” Leo captures this in the six stories he creates. In each, he employs a different narrator. In each, he welds scientific ideas of the era in which the narrator lives with the language, intonations, references, and lifestyle of that time. Husain enhances her narrative by creating an email exchange between them that gives further context to Leo’s stories. He sends all six to her for comment in three installments. He then asks her to write the seventh one, on string theory.
Just to give some idea of what killing the NEA will (or more aptly, will not) accomplish, the $146 million budget of the National Endowment for the Arts represents just 0.012% (about one one-hundredth of one percent) of our federal discretionary spending. According to 2012 NEA figures, the annual budget for the arts per capita (in dollars) in Germany was $19.81; in England, $13.54; in Australia, $8.16; in Canada, $5.19, and in the United States just $0.47. Yes, 47 cents annually per capita. For all the arts combined. And the new POTUS feels that’s too much.
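The percentage is easy to verify with back-of-the-envelope arithmetic. The quick check below assumes total federal discretionary spending of roughly $1.2 trillion, which is the base the 0.012% figure implies (the article itself does not state the denominator):

```python
nea_budget = 146_000_000   # NEA budget in dollars
discretionary = 1.2e12     # assumed federal discretionary spending (~$1.2 trillion)

share = nea_budget / discretionary * 100
print(f"NEA share of discretionary spending: {share:.3f}%")
```

The result is about 0.012%, matching the figure quoted above.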
It would be impossible to enumerate all the programs that will likely die when the NEA and the NEH are killed, and the many people these cuts will deprive of things like public television programming and National Public Radio; school enrichment programs in the arts; and community programs to encourage music, dance, theater, visual art and literary art, literacy, and the pleasure of reading.
In September 2013, Marko Ahtisaari resigned from his position as the head of product design at Nokia. The Finnish company had just been acquired by Microsoft and Ahtisaari, the son of a former president of Finland, decided it was time to look for his next startup. He joined the MIT Media Lab shortly after, where he was introduced by Joi Ito, the Lab’s director, to Ketki Karanam, a biologist who was studying how music affects the brain. Ahtisaari was naturally interested: he grew up playing the violin and later studied music composition at Columbia University. “I used to be part of the New York scene,” Ahtisaari says. “I left to do product design and to be an entrepreneur. For 15 years I didn’t play much. I have friends who are now playing with Thom Yorke and the Red Hot Chili Peppers.”
Karanam showed Ahtisaari that a growing body of evidence from imaging studies reveals what happens to the brain when it is exposed to music. “It fires very broadly,” Ahtisaari says. “It’s not just the auditory cortex. What happens is essentially similar to when we take psycho-stimulants. In other words, when we take drugs.”
To Ahtisaari, this indicated that music could, at least in principle, complement or even replace the effects that pharmaceuticals had on our neurology. For instance, there were studies that showed that patients with Parkinson’s disease improved their gait when listening to a song with the right beat pattern.
Holmes’s “This Long Pursuit” is itself a complement to two earlier volumes: “Footsteps: Adventures of a Romantic Biographer” (1985) and “Sidetracks: Explorations of a Romantic Biographer” (2000). All three are, essentially, collections of essays, talks, reminiscences and reviews held together by their author’s description of himself as a “romantic biographer.” That phrase carries multiple meanings: While Holmes’s field is, roughly, England in the age of Coleridge, he sometimes writes about romantic figures of other nations and periods (poet Gérard de Nerval, novelist Robert Louis Stevenson) and he himself clearly possesses an adventurous, romantic spirit.
In this new book’s first essay, “Travelling,” Holmes suggests that “biography is not merely a mode of historical enquiry. It is an act of imaginative faith.” To attain the requisite empathy, he early on adopted two key practices. The first he called the Footsteps principle. “I had come to believe that the serious biographer must physically pursue his subject through the past,” he explains. “Mere archives were not enough. He must go to all the places where the subject had ever lived or worked, or travelled or dreamed.” The biographer must then try to grasp their impact on his subject. “He must step back, step down, step inside the story.”