Monday, August 31, 2015
Fearing Artificial Intelligence
by Ali Minai
Artificial Intelligence is on everyone's mind. The message from a whole panel of luminaries – Stephen Hawking, Elon Musk, Bill Gates, Apple co-founder Steve Wozniak, Lord Martin Rees, Astronomer Royal of Britain and former President of the Royal Society, and many others – is clear: Be afraid! Be very afraid! To a public already immersed in the culture of Star Wars, Terminator, The Matrix and the Marvel universe, this message might sound less like an expression of possible scientific concern and more like a warning of looming apocalypse. It plays into every stereotype of the mad scientist, the evil corporation, the surveillance state, drone armies, robot overlords and world-controlling computers à la Skynet. Who knows what "they" have been cooking up in their labs? Asimov's three laws of robotics are being discussed in the august pages of Nature, which has also recently published a multi-piece report on machine intelligence. In the same issue, four eminent experts discuss the ethics of AI. Some of this is clearly being driven by reports such as the latest one from Google's DeepMind, claiming that their DQN system has achieved "human-level intelligence", or that a chatbot called Eugene Goostman had "passed the Turing Test". Another legitimate source of anxiety is the imminent possibility of lethal autonomous weapon systems (LAWS) that will make life-and-death decisions without human intervention. This has led recently to the circulation of an open letter expressing concern about such weapons, signed by hundreds of scientists, engineers and innovators, including Musk, Hawking and Gates. Why is this happening now? What are the factors driving this rather sudden outbreak of anxiety?
Looking at the critics' own pronouncements, there seem to be two distinct levels of concern. The first arises from rapid recent progress in the automation of intelligent tasks, including many involving life-or-death decisions. This issue can be divided further into two sub-problems: The socioeconomic concern that computers will take away all the jobs that humans do, including the ones that require intelligence; and the moral dilemma posed by intelligent machines making life-or-death decisions without human involvement or accountability. These are concerns that must be faced in the relatively near term – over the next decade or two.
The second level of concern that features prominently in the pronouncements of Hawking, Musk, Wozniak, Rees and others is the existential risk that truly intelligent machines will take over the world and destroy or enslave humanity. This threat, for all its dark fascination, is still a distant one, though perhaps not as distant as we might like.
In this article, I will consider these two cases separately.
The Rise of Intelligent Algorithms:
The socioeconomic and moral concerns emerge from the fact that the algorithms being developed under the aegis of artificial intelligence are slowly but surely automating capabilities that, until recently, were considered uniquely human. These include perceptual tasks such as speech recognition and visual processing, motor tasks such as driving cars, and cognitive tasks such as summarizing documents, writing texts, analyzing data, making decisions and even discovering new knowledge. Of course, the same thing happened to repetitive tasks such as assembly line work decades ago, but humans had always seen that kind of work as mechanical drudgery, and though it caused great socioeconomic upheaval in the blue-collar workforce, society in general saw such automation as a positive. The automation of large-scale calculation raised even fewer issues because that was not something humans could do well anyway, and computers simply enhanced efficiency. The new wave of automation, however, is very different.
At the socioeconomic level, this process threatens white-collar workers and strikes at the core of what had been considered fundamentally human – work that could only be done by thinking beings. The concrete fear is that, eventually, intelligent machines will take over all the jobs humans do, including the most complex ones. What, then, will become of humans? A recent article in the Economist lays out both the nature of this threat and the reasons why it does not justify a general panic about AI as implied in the warnings of Hawking et al. The article argues – correctly – that the algorithms underlying the panoply of automated tasks are currently a disparate set of narrowly-focused tools, and their integration into a single generalized "true" artificial intelligence is only a remote possibility. A thoughtful article by Derek Thompson in the July/August issue of the Atlantic goes even further, exploring ways in which a work-free society might be more creative and intellectually productive than one where most time is spent "on the job". Whatever happens, one thing is certain: Intelligent algorithms will certainly transform human society in major ways. And one of these will be to challenge our bedrock notions of ourselves: As more and more abilities that we had considered essentially human – thinking, planning, linguistic expression, science, art – become automated, it will become harder to avoid the question of whether these too are, like routine tasks and calculations, just material processes after all.
Among the many human capacities being automated is the capacity to make complex, life-or-death decisions with no human involvement – which poses the moral dilemma motivating much of the immediate anxiety about AI. The explicit concern is currently about weapons, but similar situations can arise with medical procedures, driverless cars, etc. The question being asked is this: Who is responsible for a lethal act committed as the result of an algorithm whose outcomes its human builders did not explicitly program and/or could not reasonably have predicted? In one sense, this is not really a new issue at all. A "dumb" heat-seeking missile can also make a lethal "decision" in a way that its human designers could not have anticipated. However, what distinguishes LAWS from such arbitrary cases is the presence of deliberation: A smart autonomous weapon would analyze data, recognize patterns and make a calculated decision. This is seen as much closer to cold-blooded assassination than the errant heat-seeking missile, which would be regarded as an accident. However, this dilemma results mainly from a failure to recognize the nature of the mind, and especially the nature of autonomy.
Autonomy is a vexing concept that moves one quickly from the safe terrain of science and engineering into the quagmire of philosophy. Broadly speaking, we ascribe some autonomy to almost all animals. The ant pushing a seed and the chameleon snagging an insect are considered to be acting as individuals. However, we also assume that animals such as these are basically obeying the call of their instinct or responding to immediate stimuli. Moving further up the phylogenetic tree, we gradually begin to assign greater agency to animals – the bird that cares for its young, the dog that understands verbal commands, the bull elephant that leads its herd – but this agency is still rather limited, and does not rise to the point of holding the animal truly responsible for its actions. That changes when we reach the human animal. Suddenly (evolutionarily speaking), we have an animal that thinks, understands, evaluates, and makes moral choices. This animal has a mind! With mind comes intelligence and the ability to make deliberate, thoughtful decisions – free will. But is this a reasonable description of reality?
Nothing that we know of its physical nature suggests that the human animal is in any way qualitatively different from other animals with central nervous systems. Thus, the attributes identified with the mind – including intelligence – must either arise from some uniquely human non-material essence – soul, spirit, mind – or be present in all animals, though perhaps in varying degree. The former position – termed mind-body dualism (or just dualism) – is firmly rejected by modern science, which postulates that mind must emerge from the physical body. Indeed, this is the whole basis of the artificial intelligence project, which seeks to build other physical entities with minds. How to do this is an issue that has engaged computer scientists, philosophers and biologists for decades and is far from settled. I have expressed my own opinions on this elsewhere, and will return to them briefly at the end of this piece. The pertinent point is that intelligence – real or artificial – is not an objectively measurable, all-or-none attribute that currently exists only in humans and will emerge suddenly in machines one day. Rather, it is a convenient label for a whole suite of capabilities that can be, and already are, present in animals and machines to varying degrees as a consequence of their physical structures and processes. Basically, from a modern scientific, materialistic viewpoint, animals and machines are not fundamentally different – though it is simplistic, in my opinion, to reduce them both to entirely conventional notions of information processing. From a mind-as-matter scientific viewpoint, the "smart" autonomous missile is not qualitatively different from the "dumb" heat-seeking missile, since both are equally at the mercy of their physical being – or embodiment – and their environments. The smart missile just has far more complex processes.
Scientifically valid as this view may be, it is not shared by most people. The problem is that there exists an unspoken mismatch between the scientific and societal conceptions of responsibility, but this issue has mainly been an academic one so far. AI algorithms are now forcing us to confront it in the real world. Since time immemorial, the human social contract has been based on the idea that people have freedom to choose their actions autonomously – with constraints, perhaps, but without compulsion. And yet, the current scientific consensus indicates that this cannot be the case; that true "free will", in the sense of being potentially able to choose action B even at the instant that one chooses action A, is an illusion – a story we tell ourselves after the fact. This follows from the materialistic view of humans, which may have room for the unpredictability and even indeterminacy of actions, but no room for explicit choice. The activity of neurons and muscles always follows only one path, so only one choice is ever made. There is no way for an individual to make "the other decision" because, by that point, the individual's physical being is already enacting the decision that occurs. In a sense, freedom of choice lies wholly in the post facto apprehension of the counterfactual.
Of course, this is problematic from a philosophical perspective that seeks to assign responsibility, but that is a modern – even post-modern – problem. Until now, the convention has been to assign responsibility based on motivation, with the implicit assumption that human evaluators (judges, juries, prosecutors, etc.) can ascertain the motivations of other humans because of their fellowship in the same species and a common system of values grounded in universally human drives and emotions. Even here, cultural differences can lead to very serious problems, but imagine having to ascertain the motives of a totally alien species whose thought processes – whatever they might be – are totally opaque to us. Do we assign responsibility based on human criteria? Is that "fair"? Are the processes occurring in the machine even "thought"? If we follow the logic of the Turing Test and decide to accept the appearance of complex, autonomous intelligence as "true" intelligence, we have a much more complex world. Currently, we only have human-on-human violence, but once we have three more kinds – machine-on-human, human-on-machine, and machine-on-machine – how do we assign priority? Does privilege go automatically to humans, and why is that ethical? We already face some of these issues with animals, but there it has tacitly been agreed that the human is the only species with true responsibility, and privilege is to be determined by human laws – even if it occasionally ends up punishing the human (as in the case of poaching or cruelty to animals). The co-existence of two autonomous, intelligent – and therefore responsible – species changes this calculus completely, requiring us to draw a moral boundary that has never been drawn before. Having to ascertain responsibility in machines will force us to define the physical basis of human volition with sufficient clarity that it can be applied to machines. In the process, humans will need either to acknowledge their own material nature and consequent lack of true free will, or give in to dualism and deny that purely material machines can ever be truly intelligent and moral agents in a human sense. In the former case – which modern science would recommend – we would have to accept that not only logical processes, but also those most cherished human attributes of emotion, empathy, desire, etc. can emerge from purely material systems. The caricature of the machine that cannot "feel" – all the angst of Mr. Data – would have to go, to be replaced by the much more disorienting possibility that, far from being coldly calculating, truly intelligent machines would turn out to be like us, and, even more disconcertingly, that we have been like our machines all along!
For most people outside the areas of science and technology, this perspective on machine intelligence is deeply problematic – much as the idea of human evolution has been. A unified view of humans and machines strikes at the core ideas of soul, mind, consciousness, intentionality and free will, reducing them to "figments of matter", so to speak. Our moral philosophies, social conventions or legal systems may not be ready for this transition, but intelligent algorithms are going to force it upon us anyway. Some of the complexities involved are already being discussed by philosophers of law, though mainly in the context of relatively simple artificial agents such as bots and shopping websites.
The Existential Threat of AI:
The more sensational part of the alarm raised about AI is the dire threat of human destruction or enslavement by machines. Steve Wozniak says that robots will keep humans as pets. Lord Rees warns that we are "hurtling towards a post-human future". Given that science considers both animals and machines as purely material entities, why are such brilliant people so concerned about smarter machines?
Perhaps part of the answer can be found by posing the question: Would we be more afraid of a real tiger or an equally dangerous robot tiger? Most people would probably choose the latter. Scientific consensus or not, we remain dualists at heart, recognizing in animals a kinship of the mind and spirit – an assumption that, in the end, they are creatures with motivations and feelings similar to ours. About the machine, we are not certain, though it would be hard to explain this on a purely rational basis. For many, the very idea that the machine could have motivations and feelings is absurd – which is pure dualism – but even those who accept the possibility in principle hesitate to acknowledge the equivalence. Most of the concerned scientists who are unwilling to trust an autonomous machine to make a lethal choice are not pacifists, and are willing to allow human pilots or gunners to make the same decision. Rees, Hawking and co. may be concerned about intelligent robots that could enslave or destroy humanity, but are far less concerned that humans may do the same. Why is that? I think that the answer lies in a natural and prudent fear of alien intelligence.
Yes, alien! Though they are our constructions, most people see machines as fundamentally different, cold, non-empathetic, mechanical – and believe that any intelligence that may emerge from them will inherit these attributes. We may prize Reason as a crowning achievement of the human intellect, but both experience and recent scientific studies indicate that, in fact, humans are far from rational in matters of choice: Much of human (and animal) behavior emerges from emotions, biases, drives, passions, etc. We recognize intuitively that these – not Reason – form the bedrock of the values in which human actions are ultimately grounded. The duel between Reason and Passion, the Head and the Heart, Calculation and Compassion has pervaded the literature, philosophy, culture and language of all human societies, and has found expression in all spiritual, moral and legal systems. Those who lack empathy and emotion are regarded as sociopathic, psychopathic, heartless and inhuman. In this framework, machines appear to lie at the hyper-rational extreme – driven by algorithms, making choices through pure calculation with no room for empathy or compassion. Where, then, would a machine's values come from? Would it even have a notion of right and wrong, good and evil, kindness, love, devotion? Or would it only understand correct and incorrect, accurate and inaccurate, useful and useless? Would it have a purely rational value system with no room for humanity? It turns out that we humans fear the hyper-rational as much as we fear the irrational cruelty of a Caligula or Hitler. And in the case of intelligent machines, this fear is compounded further by at least three attributes that would make such machines more powerful than any human tyrant:
· Open-Ended Adaptation: As engineers, we humans build very complex machines, but their behavior is determined wholly by our design. Everything else is a malfunction, and all our engineering processes are geared to squeezing out the possibility of such malfunctions from the machines we build. Our best machines are reliable, stable, optimized, predictable and controllable. But this is a formula to exclude intelligence. A viable intelligent machine would be adaptive – capable of changing its behavioral patterns based on its experience in ways that we cannot possibly anticipate when it rolls off the factory floor. That is what makes it intelligent, and also – from a classical engineering viewpoint – unreliable, unpredictable and uncontrollable. Even more dangerously, we would have no way of knowing the limits of its adaptation, since it is inherently an open-ended process.
· Accelerated Non-Biological Evolution: It has taken biological evolution more than three billion years to get from the first life-forms on Earth to the humans and other animals we see today. It is a very slow process. And though we now understand the processes of life – including evolution – quite well, we are still not at the point where we are willing – or able – to risk engineering wholly new species of animals. But once machines are capable of inventing and building better machines, they could super-charge evolution by turning it into an adaptive engineering process rather than a biological one. Even the machines themselves might lose control of the still greater machines that could emerge rapidly through such hyper-evolution.
· Invincibility and Immortality: Whatever their dangers, humans and animals eventually die or can be killed. We know how long they generally live, and we know how to kill them if necessary. There's a lot of solace in that! Machines made from ever stronger metal alloys, polymers and composites could be infinitely more durable – perhaps self-healing and self-reconfiguring. They may draw energy from the Earth or the sun as all organisms do, but in many more ways and much more efficiently. Their electronics would work faster than ours; their size would not be as limited by metabolic constraints as that of animals. Suddenly, in the presence of gleaming transformers, humans and their animal cousins would seem puny and perishable. Who then would be more likely to inherit the Earth?
Ironically, as argued earlier, the idea of the intelligent machine as hyper-rational is probably just a caricature. If what we believe about animal and human minds is correct, machines complex enough to be intelligent will also bring with them their own suite of biases, emotions and drives. But that is cold comfort. We have no way of knowing what values these "emotional" machines might have, or if they would be anything like ours. Nor can we engineer them to share our values. As complex adaptive systems, they will change with experience – as humans do. Even the child does not entirely inherit the parents' value system.
The fears expressed by the critics of AI are more philosophical than real today, but they embody an important principle called the Precautionary Principle, which states that, if a policy poses a potentially catastrophic risk, its proponents have the burden of proof to show that the risk is not real before the policy can be adopted. In complex systems – and intelligent entities would definitely be complex systems – macro-level phenomena emerge from the nonlinear interactions of a vast number of simpler, locally acting elements, e.g., the emergence of market crashes from the actions of investors, or of hurricanes from the interaction of particles in the atmosphere. The "common sense" analysis on which we humans base even our most important decisions often fails completely in the face of such emergence. As a result, when it comes to building artificial complex systems or human intervention into real complex systems – such as wars, social reforms, market interventions, etc. – almost all consequences are unintended. In addition to the many "known unknowns" inherent in a complex system due to its complexity, there is an infinitely greater set of "unknown unknowns" for which neither prediction nor vigilance is possible. The warnings by Hawking et al. are basically the application of the Precautionary Principle to the unleashing of the most complex system humans have ever tried to devise. Even thinkers as brilliant as Hawking or Musk cannot know what would actually happen if true AI were to emerge in machines; they are just not willing to take the risk because the worst-case consequences are too dire. And since the complexity of true AI would forever preclude a positive consensus on its benignness, the burden of proof imposed by the Precautionary Principle can never be met.
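The warning can be made concrete as a toy decision problem. Here is a minimal sketch in Python (all probabilities and payoffs are invented purely for illustration; nothing here is a real risk estimate) contrasting ordinary expected-value reasoning with the worst-case reasoning behind the Precautionary Principle:

```python
# Toy decision problem: expected-value reasoning vs. the worst-case
# reasoning behind the Precautionary Principle. All numbers are invented.

options = {
    # option: list of (probability, payoff) pairs
    "deploy true AI": [(0.9999, +100), (0.0001, -100_000)],  # tiny chance of catastrophe
    "hold off":       [(1.0, 0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

def worst_case(outcomes):
    return min(v for _, v in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected value {expected_value(outcomes):+.0f}, "
          f"worst case {worst_case(outcomes):+,}")

# Deploying wins on expected value (+90 vs +0) but loses catastrophically on
# the worst case (-100,000 vs 0). The Precautionary Principle judges by the
# worst case, and no accumulation of good average-case evidence can meet its
# burden of proof.
```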
But can such machines ever be built in the first place? How will true AI emerge? How, if ever, will the issues of intelligence, autonomy, free will and responsibility be resolved? Thinkers much more accomplished than me have engaged with this problem, and continue to do so without achieving any consensus. However, let me suggest a principle inspired by Dobzhansky's famous statement, "Nothing in biology makes sense except in the light of evolution". As we grapple with the deepest, most complex aspect of the human animal, I suggest that we adopt the following maxim: Nothing about the mind can make sense except in the light of biology. The mind is a biological phenomenon, not an abstract informational one. Focusing on the mind-as-algorithm – something to which I also plead guilty – may be formally justifiable, but it unwittingly ends up promoting a kind of abstract-concrete mind-body dualism, detaches mental functions from their physical substrate, and, by implying that these functions have some sort of abstract Platonic existence, blinds us to their most essential aspects. The mind – human or otherwise – emerges from a multicellular biological organism with a particular structure and specific processes, all shaped by development over an extended period in the context of a complex environment, and ultimately configured by three billion years of evolution. We will only understand the nature of the mind through this framework, and only achieve artificial intelligence by applying insights obtained in this way. And when we do, those intelligent machines will not only be intelligent, they will be alive – like us!
Finally, a word on when all this might happen. Simple extrapolation of recent progress in AI would suggest that we are far from that time. However, progress in technology is often highly nonlinear. There are several developments underway that could super-charge progress towards AI. These include: The development of very large-scale neural networks capable of generalized learning without explicit guidance; the paradigm of embodied robotics with an emphasis on emergent behavior, development and even evolution; much better understanding of the structure and function of the nervous system; and rapid growth in technologies for neural implants and brain-controlled prosthetics. This topic is too vast to be covered here, but one point is worth considering. It is quite possible that the transition to intelligent machines will occur through transitional stages involving the replacement or enhancement of living animals and humans with implants and prosthetics integrated into the nervous system. Once it is possible to integrate artificial sensors into the nervous system and control artificial limbs with thought, how long will it be before the militaries of all advanced countries are building cyborg soldiers? And how long will these soldiers retain their human parts as better ones become available from the factory? The Singularity may or may not be near, but the Six Million Dollar Man may be just around the corner.
Effective Altruism and its Blind Spots
by Grace Boey
Suppose you want to do good with the resources you have. There are a bunch of considerations that might affect how you go about it. Should you donate money, or should you volunteer time? Should you start a career in social work? What social cause resonates with you? Do you care more about animals, or army vets? How much time do various causes commit you to, and how much time do you have after meeting work and family obligations? These are common concerns that influence our charitable choices, whether we’re conscious of them or not.
Effective altruism, a growing social movement associated with philosopher Peter Singer, hopes to bring this decision-making process to the forefront of our consciousness. To be more precise: followers of the movement seek to act in the way that brings about the greatest measurable impact, given the resources they have. Effective altruism concerns itself not just with doing good, but finding the best way to do so. And according to (some) effective altruists, the best way for most people to do good is ‘earning to give’, which is exactly what it sounds like: earning lots of money, then giving it away to charity. And not just any charity—in order to maximise good, effective altruists seek to donate to the most cost-effective foundations out there.
Sounds promising? … Maybe.
I first came across the effective altruism movement in a philosophy club meeting, where one of my colleagues—an ethics professor—screened Peter Singer’s TED talk The Why and How of Effective Altruism. In the video, Singer describes the social movement and its motivations, giving numerous examples of relatively well-off people who had used their resources to do lots of good for others. I was quite encouraged and impressed by what I saw. But at the same time, something about the video made me uncomfortable. (I felt bad for thinking this, but it all seemed really smug. Another audience member in the seminar said they found most of the video self-congratulatory.)
Smugness, thankfully, does not make for an illegitimate movement. There are lots of great things about effective altruism; I agree with many of the points its followers make, and I’ve used many tools they've recommended to determine where some of my money goes. But effective altruism, I believe, suffers from big problems (yes, other than smugness). Although I'm an altruist who wants to be effective, some of the movement's oversights make me hesitant to identify myself with effective altruism as it currently stands.
Effective altruism and some objections
Before delving into its limitations: a bit more about the movement. What motivates effective altruism in the first place? Peter Singer, as well as many other effective altruists, begins with the philosophical assumption that we ought to act in ways that reduce the suffering of others, just as long as doing so doesn’t involve sacrificing anything nearly as important for ourselves.
Singer famously employs a philosophical thought experiment to demonstrate why this ought to be the case:
To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.
Singer then argues that the ‘drowning child’ case is structurally similar to the case of global poverty:
Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself? Virtually all agree that distance and nationality make no moral difference to the situation. I then point out that we are all in that situation of the person passing the shallow pond: we can all save lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt or a night out at a restaurant or concert, can mean the difference between life and death to more than one person somewhere in the world – and overseas aid agencies like Oxfam overcome the problem of acting at a distance.
It must be emphasised that Singer and many effective altruists don’t just believe we have some kind of obligation to relieve suffering—they believe that we should continue helping until we reach a point where helping any more would cause us comparable suffering to that which we’re trying to alleviate. In other words, if you’re in a better position than others, then you ought to help reduce their suffering until all your levels of well-being are more or less equal (or, depending on what you think, until their suffering stops and their well-being reaches an ‘acceptable’ level that may or may not match yours).
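Singer's stopping rule can be read as a claim about marginal well-being: keep giving while a dollar does more for the recipient than it costs the giver. Here is a minimal sketch in Python, under invented assumptions (a square-root utility function and arbitrary wealth figures), showing how that rule drives giver and recipient towards roughly equal well-being:

```python
# Toy model of Singer's stopping rule: transfer money while a dollar raises
# the recipient's well-being more than it lowers the giver's.
# The square-root utility function and the wealth figures are invented.
import math

def utility(wealth):
    return math.sqrt(wealth)  # diminishing returns: each extra dollar matters less

giver, recipient = 90_000, 10_000
given = 0
step = 1_000
# Give while the recipient's gain from the next dollar exceeds the giver's loss.
while utility(recipient + 1) - utility(recipient) > utility(giver) - utility(giver - 1):
    giver, recipient, given = giver - step, recipient + step, given + step

print(f"gave ${given:,}; wealth now ${giver:,} vs ${recipient:,}")
# gave $40,000; wealth now $50,000 vs $50,000 -- the rule stops only when
# the two parties' levels of well-being are (roughly) equal.
```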
So the movement emphasises giving away as much as you can, and living with as little as you can. Some followers give the majority of their income away: as of Singer’s 2013 TED talk, Toby Ord, philosopher and founder of the effective altruist organisation Giving What We Can, lives on 18,000 British pounds a year (he’s married with a mortgage), giving the rest away to charitable causes.
In practice, many who identify with effective altruism don’t really go ‘all the way’. In addition to giving (or at least trying to give) as much as one can, a second key component of the movement that attracts many followers is the following idea: in expending resources, the altruist ought to seek the most efficient way of achieving impact in order to maximise good. In other words, effective altruism is concerned with maximising the ratio of goodness created to resources expended in the process.
In line with this objective, effective altruists and their allies devote much time to identifying charities that achieve large amounts of ‘good per dollar spent’. This is crucial, since—according to Ord’s calculations—some charities are hundreds or even thousands of times more effective than others. While evaluating a charity, philanthropists often look at the proportion of donations it spends on projects as opposed to overhead costs, seeing a low proportion of the latter as a good thing. But to the effective altruist this is a mistaken approach, since spending more on projects doesn’t necessarily guarantee better results. Charities that spend more on admin can still be more cost-effective overall than ones that spend less. GiveWell, a non-profit effective altruist organisation that evaluates charities, recommends several charities per year on the basis of their marginal cost-effectiveness.
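To see the point with made-up numbers, consider the following minimal sketch in Python. The two charities and all the figures are hypothetical; the point is only that the overhead ratio and good-done-per-dollar can rank charities in opposite orders:

```python
# Hypothetical illustration: a charity with higher overhead can still be the
# more cost-effective one. All names and numbers are invented.

charities = [
    # (name, total spent in $, overhead share, outcomes achieved)
    ("LeanCharity",       1_000_000, 0.05, 200),
    ("AdminHeavyCharity", 1_000_000, 0.25, 500),
]

for name, spent, overhead, outcomes in charities:
    print(f"{name}: {overhead:.0%} overhead, ${spent / outcomes:,.0f} per outcome")

# LeanCharity:        5% overhead, $5,000 per outcome
# AdminHeavyCharity: 25% overhead, $2,000 per outcome
# Judged by overhead, LeanCharity looks better; judged by cost-effectiveness,
# which is what effective altruists care about, AdminHeavyCharity does 2.5x
# the good per dollar.
```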
All this sounds good, but various criticisms have been made against effective altruism. Naturally, the movement is a non-starter if you don’t buy into the idea that we’re required to be altruistic at all, or even altruistic in the specific way that Singer requires. The movement happens to rest on the philosophical position of utilitarianism, which says we always ought to act in ways that maximise good—this isn’t something that everyone agrees with. But utilitarian or not, not too many people reject the idea of altruism altogether.
Other criticisms of effective altruism are practical. How accurately can we really estimate the future impact of our actions? Effective altruist organisations methodically calculate the most cost-effective causes an altruist can donate to, but realistically, such calculations at best give a short to mid-term estimate. In response to this, the effective altruist might concede that although there are many things that might go wrong along the course of our altruistic efforts, we are still obliged at any given time to act in a way that we expect will produce the best results, with the limited information that we have.
The blind spots of effective altruism
I do buy significantly into the philosophical motivations behind effective altruism, and I don’t see practical objections as reasons that we shouldn’t at least try to maximise goodness. But bigger problems make me concerned about the direction the movement seems to be taking.
As mentioned before, effective altruists have claimed that for most people, the most effective thing to do is to spend their lives ‘earning to give’. In his piece To save the world, don't get a job at a charity; go work on Wall Street, philosopher William MacAskill argues that instead of working in the non-profit sector or even volunteering, the best strategy for most effective altruists is to make lots of money, and then donate much of it to cost-effective charities. Finance can be an ‘ethical career choice’, and altruists needn’t ‘forgo the allure of Wall Street’ in order to be an effective altruist. But what if a job on Wall Street requires one to commonly engage in unethical practices? MacAskill argues elsewhere that the marginal impact of one’s unethical actions in such a career would be small compared to the donations that would come out of it, since someone else would have taken the job and done the same things regardless.
This sort of response raises several huge red flags. For one: it demonstrates a blindness (or perhaps despondency) towards institutional causes of suffering or injustice. If one thinks that Wall Street profits from an unjust system—one that itself perpetuates large amounts of suffering—then one should be concerned with dismantling this system (either now, or eventually). But effective altruism lacks any focus on dismantling systemic injustice, and even encourages its followers to participate in unjust systems.
I’d be more willing to concede to MacAskill’s arguments if having any impact at all on institutions were an unachievable goal, and someone would always be there to ‘take my place’ in an evil system. After all, we can only do what we can. But if I am an effective altruist who earns to give, then I should aim to earn as much as I can—meaning that, if I were an investor, I should aim to be the best one I could be. Quite plausibly, some such players become so good at the game that they become market influencers with irreplaceable expertise (think Warren Buffett). Beyond influencing market movements, some players might even reach levels of political influence. What then? Perhaps this seems like too unlikely an outcome for effective altruists to bother thinking about. But I suspect it isn’t as unlikely as one might think. One needn't reach the status of Warren Buffett to be capable of making a difference.
One thing an effective altruist might do to justify perpetuating unjust systems is to point out that the suffering they are trying to alleviate is far worse. Perhaps Wall Street causes lots of suffering, but let’s take care of starvation in developing countries before worrying about that; we’ll fix it later. In theory, I am agnostic towards this line of thinking. But in practice, the global economy makes disentangling institutions and poverty very difficult, even when they exist in different countries. Activities on Wall Street commonly contribute—directly and indirectly—to suffering in developing countries.
For instance: take a successful hedge fund manager who gives most of her massive earnings to charity. It does seem pretty cool to infiltrate a conventionally selfish enterprise only to turn around and give all your profits back to the poor. But in reality, there’s a high probability that some of the stocks she invests in are of companies that contribute, directly or indirectly, to exploiting the poor. Perhaps these are even the very people she gives her money to. It is possible for an effective altruist to avoid such situations, but given the opacity of financial instruments and the demands of the market, it’s incredibly difficult. One must at least be extremely cautious.
As demonstrated by the example above, the problem of unjust institutions doesn’t just apply to the sources of an effective altruist’s income—it often extends to the end point of her donations, or the causes of the suffering she tries to alleviate. Effective altruists often focus on providing relief in forms of food, water and medicine to people in developing nations. But what about the social and political institutions that cause and perpetuate these problems in the first place? Much less attention seems to be given to taking care of that.
To sum things up, my main problems with effective altruism are the following. First, in cases of earning to give, effective altruism doesn’t encourage people to question the source of their income. This matters because in many of these cases, that income comes from unjust institutions, and there is a chance that one's participation is sufficient to perpetuate ill effects that would not otherwise occur. Second, effective altruism lacks focus on dismantling institutions—be it the systems which cause the suffering your donations alleviate, or the systems from which you receive your income. This is a problem if, like me, you believe institutions are a huge source of much of the mess in the world, and must often be changed in order for suffering to end.
A cautionary conclusion
To me, all of the above are things the movement must recognise and improve on, and not arguments against the endeavour of effective altruism altogether. There is, after all, nothing incompatible about being an effective altruist and seeking to remove harmful institutions, if your reason for dismantling the institution is to relieve the large amounts of suffering you believe it perpetuates. One of the things I like about effective altruism is that it’s motivated by the desire not just to feel good by giving to charity, but to make sure you actually achieve lots of good. Yet ironically, by overlooking the harms bound up with where its followers' money comes from and where it goes, the movement may be encouraging them to feel good without achieving as much good as they suppose. Effective altruists need to be careful that this is not the case.
One last matter: a historically knowledgeable friend of mine remarked that effective altruism, a Western movement, would do well to learn the history and politics behind any suffering it tries to alleviate overseas. Much of the poverty in developing nations is arguably a direct consequence of activities and policies that developed Western countries have engaged in in the not-so-distant past, or even engage in currently. Effective altruists run the risk of being distastefully insensitive by patting themselves on the back for donating money to developing nations, while remaining ignorant of the fact that the wealth and opportunities they enjoy share deep historical roots with foreign suffering.
So, to all effective altruists: be careful about your decisions, be aware of institutions and history... and don’t be too smug about doing good.
Mishka Henner. Coronado Feeders, Dalhart, Texas (2013).
Archival pigment print, 150 x 180 cm.
Definitely check at least the first link below to see what horror lies behind Henner's spectacular images.
The Scopes "Monkey trial", Part 1: Issues, Fact, and Fiction
by Paul Braterman
What is the purpose of this examination?
We have the purpose of preventing bigots and ignoramuses from controlling the education of the United States, and that is all.
Inherit the Wind, the prism through which the public sees the Scopes Trial, is a travesty. William Jennings Bryan, who prosecuted Scopes, was neither a buffoon nor a biblical literalist, but was moved by deep concerns that continue to merit attention. He did not protest at the leniency of Scopes's punishment, but offered to pay the fine out of his own pocket. Nor did he collapse in defeat at the end of the trial, but drove hundreds of miles, and delivered two major speeches, before dying in his sleep a week later. Scopes, on trial for the crime of teaching evolution in a Tennessee state school, was never at risk of prison. He was no martyr, but a willing participant in a test case actively sought by the American Civil Liberties Union (ACLU), and his subsequent career was as a geologist, not a school teacher. He was found guilty, quite understandably given the wording of the law. On appeal, his conviction was quashed on a technicality, bypassing the need to rule on the deeper issues, much to the dismay of his supporters. Worse: on what we would now regard as the crucial issue, whether the law against teaching evolution in State schools violated the constitutional separation of Church and State, the Tennessee Supreme Court ruled that
We are not able to see how the prohibition of teaching the theory that man has descended from a lower order of animals gives preference to any religious establishment or mode of worship.
The law prohibiting the teaching of evolution affected textbooks for a while, but its impact was fading within a decade. However, it was not repealed until 1967, when Soviet accomplishments in space were forcing Americans to examine the state of US science education. A similar law, passed in Arkansas through citizens' initiative, survived until 1968, when in Epperson v Arkansas, the US Supreme Court ruled that the prohibition on teaching evolution was based on religion and therefore unconstitutional. As for the doctrine that creationism itself is religion, not science, and therefore should not be taught in public schools, that was not established in the US courts until McLean v Arkansas, 1982, and at Supreme Court level Edwards v Aguillard, 1987, Justice Scalia dissenting.
The play does not even claim historical accuracy. It was written in 1951, and the preface (free download from here, p.11) states
Inherit the Wind is not history. The events which took place in Dayton, Tennessee, during a scorching July of 1925 are clearly the genesis of this play. It has, however, an exodus entirely of its own.
Only a handful of phrases have been taken from the actual transcript of the famous Scopes Trial.
...The collision of Bryan and Darrow at Dayton was dramatic, but it was not a drama. Moreover, the issues of their conflict have acquired new dimension and meaning in the 30 years since they clashed at the Rhea County Courthouse. So Inherit the Wind does not pretend to be journalism. It is theatre. It is not 1925. The stage directions set the time as "Not long ago." It might have been yesterday. It could be tomorrow.
"Could be tomorrow", in 1951, when there had been no monkey trials since 1925? Clearly, the play is not about those events in Dayton, but a comment on the anti-intellectual mob rule of the McCarthy era. Despite this, the play, and the various film versions from 1960 onwards, have shaped public attitudes to the trial and, to my mind, lamentably coarsened debate.
And yet the exchanges between the Scopes trial prosecutors, and Clarence Darrow speaking for the defence, remain as topical as ever.
Darrow, the best remembered of the defence team, was not the ACLU's choice, but they could not but follow Scopes in accepting his services, on this, the only occasion on which he offered them without a fee. He was an outspoken and abrasive agnostic, of whom the humanist Edwin Mims, Professor of English at Vanderbilt University, Nashville, Tennessee, and theological Modernist, commented, "When Clarence Darrow is put forth as the champion of the forces of enlightenment to fight the battle for scientific knowledge, one feels almost persuaded to become a Fundamentalist." In his famous cross-examination of Bryan, Darrow comes over as a condescending bully. And yet any sympathy one might feel for Bryan quickly evaporates on reading the speech he had prepared for the court, but was prevented by defence manoeuvres from delivering. Bryan's position presented defenders of science with a dilemma, to which I would dearly love to find a good resolution: one does not win over opponents by ridiculing their position and humiliating their champion, and yet what else is one to do when faced with ridiculous beliefs presented by a crowd-pleasing and truth-distorting blowhard?
This summer sees the 90th anniversary of the trial, widely regarded as an example of reason defeating obscurantism. My friend, the historian and geologist Michael Roberts, argues, and I agree, that this popular view is damaging, as well as mistaken, and that the only long-term beneficiaries of the affair were the Flood Geology pseudoscientists, at the time of the trial itself no more than a fringe group within Young Earth creationism. What follows draws on Michael's work, on the trial transcript, on the Pulitzer Prize-winning account Summer for the Gods by the lawyer and historian Edward J. Larson, and on other sources.
In the early 1920s, America's churches were deeply divided between Modernists and Fundamentalists. Bryan, his once-promising political career now over, placed himself at the head of the Fundamentalist faction and its campaign to ban the teaching of evolution. The Governor of Tennessee was in favour of such a ban, but wisely recommended that the law should not specify a penalty. Even without one, it would make the State's position sufficiently clear to its teachers, whereas if it could result in criminal prosecution, it would invite the controversy of a test case. That of course is exactly what happened.
As one might expect, some universities were highly critical of the law. The University of Tennessee itself hesitated to take a position, dependent as it was on state funding for its planned expansion, but Vanderbilt University, a private institution in Nashville, Tennessee, took a clear stand in favour of evolution. There was even a proposal to bar graduates of Tennessee State schools from Columbia University, leading school Superintendent White to suggest that Dayton found its own university, named after Bryan. This happened. The Bryan College Statement of Belief maintains "that the origin of man was by fiat of God in the act of creation as related in the Book of Genesis", and since 2014 the teaching Faculty have been required to believe in the special creation of a literal historical Adam and Eve.
By 1925, when the Tennessee law was passed, the evidence for evolution was reasonably conclusive, but not yet as overwhelming as it is today. Molecular phylogeny, which places common ancestry beyond all reasonable doubt, was still decades in the future. Genetics was in its infancy, but Thomas Hunt Morgan was already working out how Mendelian inheritance, combined with mutation, could drive evolution, and these developments were referred to in Hunter's Civic Biology, the standard text from which Scopes had taught. The fossil record was meagre by today's standards, giving some appearance of substance to the creationist claim that Darwinism was based on extrapolation and
conjecture, rather than observation. The record of human evolution was particularly scant, depending largely on Neanderthals, Heidelberg Man, and the now discredited Piltdown Man. All of these had cranial capacities not too different from modern humans, so it was still possible to argue that there was a "missing link" between us and what we choose to call lower animals, and the fact that the crude Piltdown forgery was able to survive in the scientific literature for several decades shows how underdeveloped physical anthropology was at that time. If we had to choose a date for when the "missing link" argument lost credibility, I would suggest February 1925, just a few months before the Scopes trial, when the first Australopithecine, the "Taung Child", was described in the journal Nature. This find attracted major publicity, and the defence planned to use it in evidence.
Evolution was not the only topic that divided (and divides) the American churches. Of comparable significance was the challenge presented by the Higher Criticism, which argues that Genesis did not have a single author, but was the result of joining together two or more disparate and at times mutually contradictory texts. This view leaves room for regarding the Bible as inspired, but not for the traditional doctrine of word-for-word perfection and infallibility. Modernisers within the churches were willing to accept both evolution and textual criticism, and Fundamentalism, historically speaking, can be seen as a reaction to this Modernism.
Some legal features must be noticed, if we are to understand the trial in the context of its time. The statute specified what might or might not be taught in State universities; this would nowadays be regarded as a violation of academic freedom. The constitutionality of the statute was at that time largely a matter of State, rather than Federal, law; it is now accepted that the State constitutions are fully subordinate to the freedoms guaranteed by the Federal constitution. And under the 1971 Lemon Test, a statute must not advance or inhibit religious practice (i.e. religious practice in general), and must serve a secular purpose. The Tennessee Supreme Court, in its judgement cited above, was using a much more restrictive test than this. We should also remember that Darrow and Bryan were personal friends, and had campaigned as allies on behalf of unionised labour.
Finally, and of the most enduring interest and importance, we have a conflict between two different concepts of democracy. The prosecution appealed repeatedly to the right of the majority, as the teachers' paymasters, to specify the content of their teaching. Contrast this with what I might call the principle of liberal democracy, which guarantees freedom of expression, and when it comes to the content of education requires the public to defer to expert opinion.
The facts of the case were not in dispute. Scopes had taught from Hunter's Civic Biology (the State's own prescribed textbook!), and in so doing had taught about human evolution, and broken the law. So the case was not really about this, but about the status of the law itself. The defence case would be that Scopes should not be found guilty because what he did should not be called a crime.
Nor was the outcome difficult to predict. Judge Raulston was a devout Christian. Educated at a Methodist University, he was probably not himself a Fundamentalist, but was an elected official within a Fundamentalist-leaning state. In any case, he may very reasonably have thought that the broader issues should be decided by the higher courts, rather than at district level. So he could be expected to use all his ingenuity to block the defence's claims.
The Tennessee statute, passed into law just one month after the Nature paper appeared, stated
That it shall be unlawful for any teacher in any of the Universities, Normals and all other public schools of the State which are supported in whole or in part by the public school funds of the State, to teach any theory that denies the Story of the Divine Creation of man as taught in the Bible, and to teach instead that man has descended from a lower order of animals. [Emphasis added]
Hence one prong of the defence strategy, as spelt out by defence attorney Malone:
The narrow purpose of the defense is to establish the innocence of the defendant Scopes. The broad purpose of the defense will be to prove that the Bible is a work of religious aspiration and rules of conduct which must be kept in the field of theology. [Emphasis added]
Malone is, in my view, one of the few protagonists whose reputation is enhanced by the trial. A Catholic but a divorce lawyer, himself remarried after divorce, he was a Modernist at a time when his Church was still undecided about evolution. His subsequent career involved serving as legal adviser to 20th Century Fox, and occasionally appearing in their films.
The other prong of the defence case would be to establish that the law was unconstitutional because unreasonable, since it flew in the face of established scientific fact. So the trial involved both the main issues separating the theological Modernists from the Fundamentalists: evolution, and the proper use, by believers, of Scripture. Regarding the latter, the defence adopted the position later associated with the name of Stephen Jay Gould and his doctrine of "non-overlapping magisteria". To interpret the Bible literally was to fail to understand it. Science and religion could not possibly be in conflict, because they were talking about different kinds of thing. Thus the defence hoped to call as witnesses both scientific experts, and leading Modernist theologians. Also, during jury selection, Darrow took care to ask each potential juryman what he thought about evolution. Clearly, many knew nothing about the subject, strengthening the case that they should hear evidence explaining it.
For the prosecution, Bryan tried to summon opposing scientific opinion, but could not find anyone of stature willing to testify against evolution. The prosecution therefore changed its tactics, aiming instead to restrict the trial to the simple fact of Scopes's breach of the law. However, Bryan's intended closing speech, which defence tactics (see below) prevented him from delivering, was to be a broadside against evolution using all the creationist devices of quote mining, misrepresentation of fact, and claims that evolution was unbiblical, atheistic, and morally corrosive.
The defence, as we have seen, was based entirely on discrediting the law, and indeed that was the reason why the ACLU had helped arrange for the case to be brought in the first place. As a result, almost all the trial, spread over eight working days, was devoted to matters of law, not fact. Was the statute constitutional? Would it be deemed unconstitutional if it violated freedom of conscience, placed restrictions on how the Bible should be interpreted, or was contrary to established science, and what kinds of evidence could be introduced to decide these questions? Since the law was a matter for the judge alone, almost all the case was heard in the absence of the jury. In addition, numerous briefs supporting the defence were never heard at all, but simply placed on record for the benefit of the appeals courts.
At the outset, the defence argued that the indictment should be quashed because the law and the indictment based on it were defective, for a mixture of reasons. The State had a constitutional duty to cherish science, and science could not be taught without including evolution. The law was contrary to the State's own establishment clause, by favouring a particular religion, and thereby violating freedom of conscience. In addition, it was so vague as to be meaningless, since it referred to what was taught in the Bible, but the Bible was open to numerous different interpretations.
Such arguments might seem strange to a reader from the United Kingdom, where Parliament is sovereign. But they are familiar in the United States, where both State and Federal Governments derive their legitimacy from written constitutions.
Hays for the defence argued that the law was intrinsically unreasonable, and therefore exceeded the policing rights of the state, as would a law against teaching that the Earth went round the Sun. "Evolution is as much a scientific fact as the Copernican theory." The State could determine what subjects should be taught but could not reasonably demand that they be taught falsely.
Attorney General Stewart for the prosecution countered that the statute was about the proper use of state funds, and therefore within the State's proper jurisdiction. The citizenry paid for their schools and therefore had a right to decide what those schools should teach. There was no violation of conscience, since Scopes was free to hold and advocate whatever opinion he chose, but that did not entitle him to propound evolution in opposition to state policy in the State's own classrooms.
The defence had, as we shall see, decided on a strategy that would prevent lengthy closing statements, usually the high point of a criminal trial. And so Darrow presented his strongest arguments at this point. His speech, which took two hours to deliver, was considered the finest of his career. It was witnessed by over 200 newspapermen, as well as the judge and courtroom spectators. So millions of people knew what Darrow said, but not, ironically, the trial jury.
The speech was reprinted in full in the New York Times. Space, obviously, will not allow me to do the same, so I must make do with a bald summary, and a few quotations that may convey the flavour.
The Tennessee State constitution protected religious freedom, and therefore stated that "no preference shall be given by law to any religious establishment or mode of worship." The law violated this principle, and was a law inhibiting learning. It established a specific religious standard because it gave specific status to the Bible, rather than any other sacred text. Evolution had been taught in Tennessee for years. Bryan "is responsible for this foolish, mischievous and wicked act… Nothing was heard of all that until the fundamentalists got into Tennessee." As for the Bible, it contained different accounts of creation, making the law unworkable in its vagueness. It was a book of morals, not science. The law was unconstitutional because it violated the great Jeffersonian principle of freedom of conscience, vital to a civil society.
Here, we find today as brazen and as bold an attempt to destroy learning as was ever made in the middle ages. That is what was foisted on the people of this state, that it should be a crime in the state of Tennessee to teach any theory of the origin of man, except that contained in the divine account as recorded in the Bible. But the state of Tennessee under an honest and fair interpretation of the constitution has no more right to teach the Bible as the divine book than that the Koran is one, or the book of Mormons, or the book of Confucius, or the Buddha, or the Essays of Emerson, or any one of the 10,000 books to which human souls have gone for consolation and aid in their troubles.
The Bible is a book primarily of religion and morals. It is not a book of science. Never was and was never meant to be. They thought the earth was created 4,004 years before the Christian Era. We know better. I doubt if there is a person in Tennessee who does not know better. They told it the best they knew. And while science may change all you may learn of chemistry, geometry and mathematics, there are no doubt certain primitive, elemental instincts in the organs of man that remain the same, he finds out what he can and yearns to know more and supplements his knowledge with hope and faith. That is the province of religion and I haven't the slightest fault to find with it.
My friend the attorney-general [prosecuting] says that John Scopes knows what he is here for. Yes I know what he is here for, because the fundamentalists are after everyone that thinks. I know why he is here. I know he is here because ignorance and bigotry are rampant and it is a mighty strong combination, your honour.
The state by constitution is committed to the doctrine of education, committed to schools. It is committed to teaching and I assume when it is committed to teaching it is committed to teaching the truth.
Can [the legislature] say to the astronomer, you cannot turn your telescope upon the infinite planets and suns and stars that fill space, lest you find that the earth is not the center of the universe. Can it? It could – except for the work of Thomas Jefferson, which has been woven into every state constitution of the Union, and has stayed there like the flaming sword to protect the rights of man against ignorance and bigotry, and when it is permitted to overwhelm them, then we are taken in a sea of blood and ruin that all the miseries and tortures and carrion of the middle ages would be as nothing.
If today you can take a thing like evolution and make it a crime to teach it in the public school, tomorrow you can make it a crime to teach it in the private schools, and the next year you can make it a crime to teach it to the hustings or in the church. At the next session you may ban books and the newspapers. Soon you may set Catholic against Protestant and Protestant against Protestant, and try to foist your own religion upon the minds of men. If you can do one you can do the other. Ignorance and fanaticism is ever busy and needs feeding. Always it is feeding and gloating for more. Today it is the public school teachers, tomorrow the private. The next day the preachers and the lecturers, the magazines, the books, the newspapers. After a while, your honor, it is the setting of man against man and creed against creed until with flying banners and beating drums we are marching backward to the glorious ages of the sixteenth century when bigots lighted fagots to burn the men who dared to bring any intelligence and enlightenment and culture to the human mind.
The judge was having none of it. In a ruling slightly longer than Darrow's speech, he gave his opinion that the law was perfectly clear, and legitimate in its scope. The offence consisted in teaching that man was descended from a lower order of animals, and the references to evolution and the Bible merely provided additional context. Later on in the trial, he was to rule on more or less the same grounds that evidence concerning evolution, and about different ways in which the Bible could be interpreted, was beside the point.
The judge no doubt intended his point-by-point rebuttal of the motion to quash to be dramatic. Unfortunately, before he delivered his ruling, it had already been published in the newspapers. He was furious and ordered the assembled pressmen to trace the source of the leak. They had little difficulty. The source was Judge Raulston himself. One reporter had asked him, with affected casualness, whether the case would be resuming directly after he delivered his opinion, and he had said that it would. But if he had accepted the motion to quash, there would have been no case left to resume.
The defence next quoted the Governor himself as having said that the law was consistent with the State's existing textbooks, would not put Tennessee's teachers in any jeopardy, and would probably never be applied. In response, the judge quite correctly pointed out that under the American doctrine of separation of powers, the Governor as head of the executive branch had no right to impose his own interpretation on the law, this being the role of the judiciary. He also ruled that expert evidence concerning evolution, and about different ways in which the Bible could be interpreted, was irrelevant and inadmissible, but allowed the defence to place such evidence in the trial record for the benefit of the appeals courts.
In my next post, I will describe this inadmissible evidence, Darrow's famous dialogue with Bryan, Bryan's intended closing speech and why it was not delivered at the trial (although Bryan did deliver two very similar speeches in the days immediately following), how the case was settled, and subsequent legal battles. I will also give my own view on who won, who lost, the extraordinary errors of judgement displayed by both the main protagonists, and the implications for us today.
1] The trial transcript and related documents are freely available as a PDF photocopy (readable but not suitable for cut-and-paste). In addition to these, and Michael's account, I have used that given by the constitutional lawyer Douglas Linder (Professor at University of Missouri Kansas City Law School) here. The fullest account, however, is by the lawyer and historian Edward J. Larson, whose Summer for the Gods earned a Pulitzer Prize. I have also used other sources, such as Ronald Numbers' authoritative study, The Creationists; Numbers has also posted much of his research online here, as part of the Counterbalance science-in-context project. I acknowledge special help from Alastair Arthur, of Glasgow University Library Services.
2] Full text at http://darrow.law.umn.edu/documents/Scopes%202nd%20day.pdf p. 74 on. Here, I have for ease of reading omitted ellipses, and added some half dozen words for continuity.
Dayton courthouse courtesy Michael Roberts. Darrow by Mobius, public domain. Taung Child image by Didier Descouens via Wikipedia.
The Magical Dimensions of the Globe
by Charlie Huenemann
There’s a particularly good episode of Doctor Who (“The Shakespeare Code”) wherein the Doctor and Martha visit Shakespeare and save the world from a conspiracy of witches. The witches’ plan is to take possession of Shakespeare and force him to write magical incantations into the (now lost) play Love’s Labour’s Won. (It’s not really magic, of course, but some quantum dynamical dimension of psychic energy… well, whatever.) When the play is then performed in the Globe Theater and the psychic words are spoken, a transgalactic portal will open up, through which an entire population of witches – really, in fact, members of an alien species known as the Carrionites – will march through and take over the world. Luckily, the Doctor is wise to the plans, and he and Martha improvise a counter-spell on the spot and disaster is thereby averted.
It’s crucial to the plot that the witchy words be spoken in the Globe, because the witches had previously forced its architect to frame the theater according to magical dimensions: fourteen symmetrical walls into which some sort of string-theoretic alchemical pentagram might be interpolated, or something like that. The point is, the layout of the place is critical for the magic to do its work.
I have recently been reading Frances Yates’ classic work of history, The Art of Memory (1966), which suggests that this latter point may not be so far-fetched. Yates was a formidable scholar of the European Renaissance, and her rich book details a strain of magical thinking about how the art of memory can bring a soul into harmony with the deep nature of things.
The art of memory goes way back. Ancient authors like Simonides, Quintilian, and Cicero recommend using vivid images to help recall anything from lists of names to long passages from speeches. Images drawn from mythology, or the zodiac, or well-known public monuments like the Parthenon might all be employed creatively as mnemonic devices.
You know the sort of trick: if you’re trying to remember the books of the Bible, imagine a pair of jeans (Genesis), hanging from the doorknob of an exit (Exodus), where there’s a panda bear levitating (Leviticus), etc. Of course, the ancient memory devices were all much more decorous and ambitious than this. The ancient wizard of memory Metrodorus divided the belt of the zodiac into 360 degrees with memorable images, which he re-purposed for memorizing all manner of things.
Fast forward to the Renaissance, when the mnemonic pictures themselves took on far greater significance as talismanic images. According to a loose tradition that includes such stars as Marsilio Ficino and Giordano Bruno, ornamenting one’s memory with astrological and mythological imagery meant refurnishing one’s soul in such a way as to mirror the mystical architecture of the world. In this line of thought, the fanciful emblems people use to remember stuff should be not just arbitrary images, but mystical archetypes that draw out the astral forces that bring structure to both the individual soul and to the world.
Giulio Camillo (1480-1544), for example, took a theater design from the ancient Roman architect Vitruvius and refashioned it into a virtual memory theater, replete with mythological imagery. The operator stands on stage, looking out upon the theater, like Professor Xavier in the Cerebro, and surveys the world’s deep structure. Yates’ description conveys the idea:
Camillo’s Theatre represents the universe expanding from First Causes through the stages of creation. First is the appearance of the simple elements from the waters on the Banquet grade; then the mixture of the elements in the Cave; then the creation of man’s mens [mind] in the image of God on the grade of the Gorgon Sisters; then the union of man’s soul and body on the grade of the Pasiphe and the Bull;....
And if we go up the Theatre, by the gangways of the seven planets, the whole creation falls into order as the development of the seven fundamental measures….
The whole theater is apparently laid out like a periodic table of the archetypal universe. Camillo confusedly tried to explain the idea behind the memory theater to a skeptical Viglius Zuichemus, who in turn related what he could to his even more skeptical friend, Erasmus of Rotterdam. “He calls this theatre of his by many names, saying now that it is a built or constructed mind or soul, and now that it is a windowed one. He pretends that all things that the human mind can conceive and which we cannot see with the corporeal eye, after being collected together by diligent meditation may be expressed by certain corporeal signs in such a way that the beholder may at once perceive with his eyes everything that is otherwise hidden in the depths of the human mind. And it is because of this corporeal looking that he calls it a theatre.”
Camillo’s theater was actually built for the king of France in the first half of the 16th century, though it disappeared not long after that. (That’s memory for you.) But anyway, the overall point of Camillo’s theater was to present a structure that, when viewed, renders one’s own mind a microcosm of God’s macrocosm.
Speaking of microcosm and macrocosm, we come now to the English physician and occultist Robert Fludd (1574-1637), a like-minded brother in these mystical arts, who followed up on Camillo’s memory theater with one of his own. In his Metaphysical, Physical, and Technical History of the Two Worlds, the Greater and the Lesser (c.1617), Fludd describes a theater that can be used to model the secret structure of the cosmos. But Fludd is careful to stress that the model will work best as an aid to memory if it is based on an actual, physical building, a real one that the reader can actually visit and subsequently remember without difficulty.
And now what theater would that be? That’s right: Yates argues that the theater Fludd goes on to describe is in all likelihood the Globe. (The second Globe, that is, as the first one burned to the ground in 1613.) This by itself is very interesting, since we do not have much evidence for putting together a picture of what the theater that saw the openings of Shakespeare’s greatest plays really looked like. If Yates is right, Fludd’s description gives us a great deal more to draw upon.
But Yates goes on to explore an even more intriguing possibility. It is not implausible that the architects of the Globe were well acquainted with the ancient architectural works of Vitruvius. Moreover, they would have known them in the mystical garb in which those works had been draped by Giulio Camillo and the English Hermetic philosopher John Dee. If so, then those who designed the Globe would have been “skilled in the subtleties of cosmological proportion,” and they might well have dedicated these skills to their work. Perhaps they put together a theater that itself presented the frame of the universe to all those who entered, watched, and played within.
“All the world’s a stage,” indeed. And in this case, the stage is also all the world, framed now not by interstellar witches, but through the magical thinking of Renaissance philosophers. We need only the experience of theater-goers to confirm all this magic: in places such as the Globe, the right words, said in the right order, can open portals to distant galaxies, as well as to “everything that is otherwise hidden in the depths of the human mind.” That's magic.
In praise of footpaths
by Emrys Westacott
As an expatriate Brit who has lived in North America for many years, I have sometimes been asked what I miss most about the old country. There's plenty to miss, of course: draught bitter; prime minister's question time; red phone boxes; racist tabloid newspapers; Henderson's Yorkshire Relish; gray rainy afternoons, especially at the seaside in July. But my answer is always the same: I miss the footpaths.
I was reminded of this once again this summer when I made my biennial trip back to Blighty. For one week of the trip a small family group rented a house in Derbyshire (my home county) and spent most days hiking around various parts of the Peak District, the marvelously varied and beautiful national park that sits inside a great horseshoe of urban sprawl running south from metropolitan Manchester in the west, through the Potteries in Staffordshire towards Birmingham, east towards Derby and Nottingham, and then back up north towards Sheffield.
The weather wasn't always great–no surprise there: we are, after all, talking about England in July–but for hiking it was fine: not too hot, and with the occasional shower to freshen things up. But there are two things that make walking in the British countryside so enjoyable: the infinitely interesting landscape; and the great network of footpaths that allow you to walk from anywhere to anywhere by a dozen different routes. Plus the fact that if you plan things right you can end your walk at a tea shop where you can get a pot of tea with a scone, raspberry jam, and clotted cream. (OK, that's three things.) Or at a pub. (four)
Two thousand years ago most of Britain was covered with trees. Over time the land was deforested as people used wood for fuel and construction and opened up land for grazing cattle and sheep. As a result the rural landscape today in places like Derbyshire has an open character, a combination of fields, small woods, grassy hills, and heather-covered moorland. This means that the topography of the region is more revealed, and revealing, than in places where forest dominates the landscape: the rocks, cliffs, streams, gullies, and ground vegetation are not hidden behind or beneath a dense covering of trees.
Human beings have lived, loved, worked, prayed, fought, and died in these parts for a very long time, and the history of their doings is inscribed in the landscape. The dry stone walls that bound the fields represent millions of hours of back-breaking labour over many centuries. Derelict stone cottages and famous stately homes date back to medieval times. Village churches, some of them with Norman or even Saxon features, overlook graveyards where the oldest tombstones have had their inscriptions weathered into oblivion. There are paths that follow the course of Roman roads. And going even further back, there are Bronze Age stone circles and tumuli (burial mounds) created roughly 4,000 years ago.
Enhancing the interest of walking through such historically rich and naturally beautiful countryside are the wonderful large-scale Ordnance Survey maps. Two and a half inches to the mile, these show every road, track, path, river, stream, pond, church, post office, farmhouse, barn, and sheep pen. They show you whether an area is wooded, open, or marshy, and also the contours of the land, so you know exactly how steep any ascent or descent will be. They even show you the layout of all the dry stone walls, information that proves invaluable when you've lost the footpath and are trying to work out exactly where you are.
And it is the footpaths, more than anything, that render the countryside so accessible and give walking in Britain its unique flavor. Some of them are popular, clearly signposted, well-maintained trails that follow the course of old packhorse routes or disused railway lines. Others are almost invisible byways through fields or over hills, evidenced by little more than an easily overlooked mossy sign and a meandering line of trodden grass, barely discernible even to the experienced eye.
But the great thing about these footpaths, apart from their profusion, is that most of them are public rights of way. So while the land is for the most part privately owned, primarily by farmers, anyone and everyone enjoys access to it. You can't just go wherever you please. Walkers have to stay on the paths to avoid damaging crops, walls, or fences. But since the farmers have an obvious interest in not having hikers blundering around their farms causing accidental damage, they are generally pretty good at making sure the paths are clearly marked.
A more complete sort of access is granted to the high moorland areas found mainly to the north in the area known as the Dark Peak. A century ago there were regular confrontations between ramblers who wanted to enjoy walking across the moors and gamekeepers who were employed by the landowners to deny access. (The landowners used the moors mainly for grouse shooting.) In 1932 ramblers organized a mass trespass across Kinder Scout, a high moorland plateau between Manchester and Sheffield. It turned out to be a highly successful act of civil disobedience. Access to large areas of uncultivated land was negotiated, and the public's "right to roam" over these areas in England and Wales was extended and consolidated in the Countryside and Rights of Way Act 2000.
In Western New York, where I now live, the scenery is lovely and probably about as English-looking as anything you'll find in North America. The mixed forest is more extensive, and the Fall colours are more brilliant, but among the rolling hills there are plenty of small farms surrounded by cow pastures. The main obvious differences are the absence of sheep, the use of fences around fields rather than stone walls or hedges, and the character of the rural architecture, most buildings being made of wood rather than brick or stone.
But the biggest difference for me is one that is almost invisible yet makes all the difference. There are few footpaths. A few, yes, but not many. One can't walk from one village to another by six different off-road routes. For the most part, the only way to get from one place to another is along roads. So when locals go for a constitutional walk, they typically stick to the roads. And going hiking typically consists of driving to a specific location such as a state park and following a designated trail. These walks are very pleasant, and many take in spectacular natural features such as gorges and waterfalls. But they are limited in number.
Sadly, it's not easy to imagine this changing any time soon. The footpaths of England, and the rights of way they enshrine, are ancient. In countries like Finland and Norway, what is known as "everyman's right," the right to walk across privately owned land provided one does no harm, is similarly a right that people have enjoyed from time immemorial. But in a relatively young country like the United States, where the founding notion of freedom was intricately bound up with the notion of private property, establishing public access to private land is difficult. The prevalence of an individualistic ideology is also an obstacle. In many minds, one person's freedom to declare a piece of land they own off limits to everyone else takes precedence over the freedom of millions to enjoy access to that land and its beauties. Various organizations that seek to expand outdoor recreational opportunities work on expanding public access, but the process is complicated and arduous.
Of course, before the European settlers imported the notion of private land ownership, the problem of public access to land never arose. To restore that right of access would mean putting history into reverse. It's one of those instances where progress requires us to undo what passes for progress.
The Lunch Box
by Mathangi Krishnamurthy
On a plane ride to Mumbai last week, I bought oatmeal cookies. For a fleeting second, I thought about sharing them with my surly co-passenger, who had been looking straight ahead ever since occupying the middle seat right next to my windowed one. If I had a middle seat, I might be surly too. I thought the cookies would help. But then the thought remained just that, fleeting. In the sum total of a minute, I played in my head the awkwardness of first contact, the shaking of head by my equally awkward interlocutor, and then my consequent retreat into the "I told you so" shell. Having successfully pre-empted my unnecessary state of embarrassment in the world, I proceeded therefore to not offer him a cookie. And there in that one stroke, I became part of a world full of strangers shedding candy.
As children, my friends and I were taught to share food. Every morning, we set off from home groggy-eyed and heavy-footed with our variously colored backpacks stuffed with notebooks, pencils, and lunch bags with food, water, and sometimes, a lonely apple or banana. So armored, we set off to face the universe. By the time lunch-time came around, we were all in states of feverish excitement, trying to anticipate our own and others' lunch choices. Some of us were the steady kinds, bringing rice, vegetables, and dal. The others brought home and regional specificities: idlis and dosas, parathas, curd rice and lemon rice, gossamer-thin rotis, puris once crisp but now soggy with the long wait for lunchtime, chutneys of various persuasions (coconut and mint and tomato), and those objects of much desire, bread rolls stuffed with spicy potato curry. The trendier homes sent sandwiches. In an age where our collective imagination was colonized by a rural Enid Blyton-esque England of wafer-thin cucumber sandwiches and strawberry jam scones, this was definitely cosmopolitan. Small matter that I did not think jam was all that great. I nevertheless begged my hapless mother, who was up at the crack of dawn to knead dough for the wonderful potato parathas I carried, to instead make me every other thing the other children brought. She did no such thing. So I scoffed down their sandwiches, and others ate my idlis and parathas.
All classroom politics and judgements solidified during lunchtime. Who shared with whom, who did not, whose lunchbox was emptied the soonest, and whose was the humble steel tiffin as opposed to bright and matte-finished plastic ware. A few years ago, I espied the same steel tiffins, marked up by many dollars, at the Whole Foods mother ship in Austin, Texas, and guffawed. Sometimes, at lunchtime, we invited the teachers to come share food with us. A few always obliged. Sometimes I try and recall the delicious pleasure of having been able to bring about that coup: the breaking of borders between the classroom and the teachers' room. Many years later at our first job, my single friends and I lived far from home, and none of us were industrious or brisk enough to make lunches. So we relied on the munificence of colleagues' parents, who generously sent extra food our way. Morsel by greedy morsel, we discovered other regional specialties: flaky sugared flatbread and jaggeried mangoes, delightfully puffy rice, and curries full of peanut powder. In the midst of deadlines, screaming bosses, and capricious clients, there was always lunchtime. In a recent movie titled The Lunchbox, the camera spends large amounts of time focused on the protagonist Irrfan Khan savoring each layer of his mistakenly delivered lunchbox, and proceeding to eat it all. In the midst of a deadening job and a morbidly quiet life, the lunchbox comes into his life with reminders of connection, and love, and flavor.
My favorite lunch boxes have been those we packed for travel: train journeys, to be precise. The night before would emerge all our motley stainless steel boxes and thermos flasks. Washed and laid out on the dining table, they would gleam even as the family sat around making decisions about the food that would carry us through the next twenty-four hours (or twenty-six, or twenty-eight). One thermos of buttermilk, tart and bursting with cumin, coriander, and ginger; a box filled to the brim with small-cut potatoes roasted in pepper and salt; and finally one packed with idlis rolled in chili and coriander powder or gunpowder, held together with pungent Indian sesame oil. There was always the mandatory curd rice container with mango pickle.
Other small boxes held snacks, and what we called time-pass food; there was a lot of time to pass on the train. These were most often purloined from our snack cupboard of banana chips, potato chips, and biscuits. Excitedly the next morning, we'd all board the train with our own foods on offer even as we surveyed the compartment to look at our companions and their food bags. A twenty-four hour journey involved at least three meals, and every few hours, all the boxes came out and were circulated buffet-like.
As I write this, I also realize that I no longer ever make myself a lunchbox, for work or for travel. Relying on wayside stores, packaged food, canteens, overpriced airport stalls, digestive medicines, and perhaps even a capacity to allay hunger, I leave home footloose and fancy-free. I cannot, however, hold back an envious stare when I see a competent co-passenger pull out, Houdini-like, from the innards of a tiny carry-on bag, gleaming steel tiffins and zip-locked treats. Nobody ever offers to share though.
For many long years, I used to view the world through nostalgia and what-ifs and continued to be absent. Now I gaze fondly at the past and look at old photographs in a way reminiscent of Faiz and old love, as necessary loss. Now I practise presence. Like the untidy nest built by the little bird on the ceiling of our tiled roof, it is a messy prospect. It asks one to let go, and to embrace the new world and its habits, and march forward, and yet maintain memory. But I like sharing food, and neat bento-box-like lunch boxes, and magical travel treats. I do think I should try and make myself a lunch box today.
Can free speech survive the internet?
by Thomas R. Wells
The internet has made it easier than ever to speak to others. It has empowered individuals, allowing us to publish our opinions without convincing a publishing company of their commercial value; to find and share others' views on matters we concern ourselves with without the fuss of photocopying and mailing newspaper clippings; and to respond to those views without the limitations of a newspaper letter page. In this sense the internet has been a great boon to the freedom of speech.
Yet that very ease of communication has brought problems of its own that may actually limit the freedom part of free speech, the ability to speak our mind to those we wish without fear of reprisal.
The first problem is that what was once a difficult endeavour – to bring our words to the attention of others – is becoming difficult to avoid. An increasing amount of speech and its proxies, such as the expression of preferences, is subject to automatic publication to the world. If not by us (however careful we are with our privacy settings), then by the devices and apps of those we talk to. It is becoming hard to guarantee a private conversation.
That matters because the way one expresses oneself in conversation, to specific people, is not how one sets out one's thoughts to the world, when one is trying to reach and impress strangers with one's ideas. The old difference between speech and publication, and all the pains publication required, respected that distinction.
Speech is extemporary. It is often part of an ongoing relationship in which the parties know each other and have a common knowledge and context to relate to. It may be experimental in style and content, especially between people who know each other well, reflecting not your settled views but ideas you are curious about and phrasings you want to try out. There are often bad jokes and failed lines of reasoning and backtrackings, and this is normal and forgivable because everyone understands that conversation is dialectical, an attempt to make progress together. In persuading another it is normal to reach for the ad hominem approach, to adapt your arguments to the capacities, inclinations, and beliefs of those you are talking to.
Publication in contrast is – or was – a distinct and daunting undertaking, requiring much diligence and prudence in framing a particular expression of your ideas that may stand the test of the scrutiny of all sorts of readers without your being able to step in to explain what you meant.
The same form of words must do the job of communicating to everyone and hence it must be written for no one in particular. Furthermore, the conversation about it may take place outside your purview, and among readers inclined to rationalise their instinctive antipathy to your ideas without the goodwill of the typical conversation partner. As Plato has Socrates put it in the Phaedrus,
"And when [speeches] have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves."
When all speech becomes a publication, or at least it is increasingly difficult to guarantee that it will not, one's casual remarks will be set before the world and may be judged by anyone. The effect of this, I fear, will be what we have already seen in politicians (except Trump, of course) who over the last 20 years have become increasingly careful about what they say because any gaffe can and will be stripped of its context and used to humiliate and destroy them, perhaps years later. Hence the general complaint that politicians sound like robots and never venture spontaneous remarks or seem to engage fully with the people they are talking to. They don't dare.
As has been shown by recent scandals involving Nobel biologists and more ordinary folks like the poor woman who tweeted a bad joke about AIDS before getting on a plane to Africa, we are all of us now in the same boat as the politicians, one failed joke away from pariahdom and unemployment.
The problem is compounded by the global character of internet publication. Previously, publication usually came with an audience. If one was publishing in a magazine, for example, one would have some idea of the interests and views of its few thousands of readers and how to talk to them. Books were trickier, but at least one knew in which country they would be sold. Nowadays one cannot predict at all where one's words will end up. Even language doesn't seem to limit their reach. So a Danish newspaper publishing cartoons of Mohammed to make a local political point winds up being the subject of moral disgust and protest half a world away, bringing its whole country into disrepute and perhaps danger. So a respectable Dutch broadsheet newspaper draws the righteous indignation of Democrat America, and the editor's only defence is the outdated "The n-word is an English word and has a less offensive meaning in Dutch. We hadn't thought that it would be read in the US."
These days it is not enough to consider how your words will appear to the people you would like to read them. You must bear in mind that anyone at all might discover them, share them with like-minded souls via social media, and hold you answerable to their moral standards. Efficient search engines also play a role by making it easy to search for offensive references to yourself and what is dear to you that you might otherwise never have found out about. For example, professors or students who google their names may find awful and even hate-filled conversations about them on student chatrooms, like this one, and feel desolated and even fearful as a result. The increased capability for discovering insults means that you must expect that the very kind of people you would least like to have read your words are the very people most likely to find them. You may be liable to legal sanctions or a twitter shame mob.
The second problem is that the internet makes it much easier for individuals to use speech as a weapon, and not only the assholes looking to offend, by posting videos like the Innocence of Muslims or barraging women in public life with rape threats.
Although hardly anyone cares that you said something offensive, or something that a strained reading could find offensive – apparently in the Nobel biologist's case only one person in the audience chose to interpret his remarks literally – a few people will care, and social media allows them to come together to concentrate their moral outrage. The shame mobs that spontaneously form may only number in the tens of thousands but they nonetheless exercise enormous collective power at the cost of a few seconds of their attention and a couple of clicks. Humans are social animals after all. We aren't built to withstand a tsunami of personal abuse, which may become a news item in its own right that Google will forever associate with our name. Not to mention that many of those righteously indignant people will thoughtfully turn their attention to your employer and publicly demand to know why someone as evil as you hasn't been fired. The internet allows ordinary people to exercise 18th-century mob justice in a 21st-century way, destroying people's lives bloodlessly from their mobile phones thousands of miles away.
The internet has enhanced our free speech in an imbalanced way: greater ease of reaching others with our speech has come at the expense of our freedom to speak without fear of reprisal. Full-scale shame mobs are still rare, of course, but that is not all we have to fear. Employers googling you before job interviews may write you off on the basis of some ill-considered outburst you made on twitter years ago, the kind of ugly remark that might once have existed for only a few moments between friends at a bar after work, before being kindly forgotten by those who knew it wasn't the real you talking. Likewise the guy you asked out on a date or the parents of the children you teach may judge you by the single worst thing you ever said, in the mistaken belief that it represents who you are.
We are going to have to adapt ourselves psychologically and institutionally to our new powers of speech. There are several approaches we might take. None of them are very attractive, but perhaps some combination will be arrived at that isn't so bad.
First, we could learn to censor ourselves, and teach our children from a very early age never to say anything that we wouldn't be happy for the whole world to read in the length of a tweet. Certainly we should be more careful what we say, out of care for others as well as prudence. But the full internalisation of responsibility for everything we say would be the end of liberalism.
Speech is protected because it is intimately linked to thought – speaking is thinking together. Liberalism starts from respect for the autonomy of the individual to form their own opinions on the right and the good. If we fear to think aloud because we fear the wrath of some part of society then we are not free to form our own opinions anymore. The fact that it isn't the government that tyrannises over us is irrelevant.
Second, we could each try to develop the shamelessness of a Donald Trump, who treats internet infamy as a game he can win and seems to actually prosper from the outrage he inspires. A society of Trumps though is a rather depressing prospect. In any case, just because you're not ashamed of that silly twitter joke doesn't mean you won't lose your job.
Third, we could call on the government to save us from the mob, for example by restricting how search engines deliver the results of personal name searches; installing fire-breaks in social media networks that make the formation of flash mobs less likely; making it illegal to fire the target of a shame mob without cause and due process; regulating social media networks to make it easier to keep private conversations from becoming public publications; making it easier to sue twitterers for defamation and threatening messages, and Twitter for enabling it; and so on. This is censorship in the name of freedom of speech, which is problematic in principle, and, given the international character of the internet, would also be problematic in practice.
Fourth, society could reconfigure its thresholds for moral disgust and indignation. It is trivially easy to find something on the internet that will outrage you to your core. But perhaps we will eventually adapt to the lower level of privacy and hence the lower quality of people's thoughts that are being published to the world. Perhaps we will also come to recognise that existing in a constant state of outrage about what strangers on the other side of the world are doing isn't all that great a way to live and neither does it do anything to make our society better. Perhaps we will use our new powers of speech to talk about that instead.
Sunday, August 30, 2015
Restoring Henry Kissinger
Michael O'Donnell in Washington Monthly:
In 1940 the young Henry Kissinger, caught in a love quadrangle, drafted a letter to the object of his affections. Her name was Edith. He and his friends Oppus and Kurt admired her attractiveness and had feelings for her, the letter said. But a “solicitude for your welfare” is what prompted him to write—“to caution you against a too rash involvement into a friendship with any one of us.”
I want to caution you against Kurt because of his wickedness, his utter disregard of any moral standards, while he is pursuing his ambitions, and against a friendship with Oppus, because of his desire to dominate you ideologically and monopolize you physically. This does not mean that a friendship with Oppus is impossible, I would only advise you not to become too fascinated by him.
Kissinger disclaimed any selfish motive for writing, loftily quoted from Washington’s farewell address, and regretted with some bitterness Edith’s failure to read or comment on the two school book reports he had sent her. Would she please return them for his files?
It is unfair to judge a man’s character by a jealous letter that he drafted (and did not send) at age sixteen. Yet here, to a remarkable extent, is the future nuclear strategist, national security advisor, and secretary of state. The reference to Edith’s attractiveness bespeaks the charm and flattery for which Kissinger would become famous. Secrecy and deceit are present also: he went behind his friends’ backs and coyly advised against a relationship with “any one of us,” which of course really meant the other guys. By trashing his buddies in order to get a girl, Kissinger displayed ruthlessness. The letter is written in what Christopher Hitchens memorably described as Kissinger’s “dank obfuscatory prose,” which relies on clinical-sounding phrases like “dominate you ideologically.” And, of course, the letter betrays vanity. How could anyone fail to be dazzled by his book reports!
Flamed but Not Forgotten: On Jonathan Franzen’s ‘Purity’
Lydia Kiesling in The Millions:
There are a few digs at you, reader, in Purity, Jonathan Franzen’s big new novel. Here’s one buried in the musings of Andreas Wolf, the sociopathic leader of a data-dumping transparency project — one analogous to but at odds with WikiLeaks: “The more he existed as the Internet’s image of him, the less he felt like he existed as a flesh-and-blood person. The Internet meant death.” Have you read a take or a tweet excoriating Jonathan Franzen? You inhabit a world “governed…by fear: the fear of unpopularity and uncoolness, the fear of missing out, the fear of being flamed or forgotten.”
Ironically, the Internet — the thing with which Franzen’s opprobrium is most frequently associated — is also the vehicle by which his utterances become collectively memorable. The Internet is why I know, for example, that 20 years ago, Franzen expressed anxiety about cultural irrelevance in the type of tone-deaf revelation primed to annoy less-famous writers and destined to become characteristic: “I had already realized that the money, the hype, the limo ride to a Vogue shoot weren’t simply fringe benefits. They were the main prize, the consolation for no longer mattering to the culture.”
No one should be permanently lashed to his or her remarks of decades past, but Franzen, with his frequent public grumping, invites a certain amount of scrutiny. And despite the easy prey of Franzen’s Vogue shoots, that essay, “Perchance to Dream,” published in Harper’s in 1996, contains an artist’s statement that remains the tidiest, most cogent thesis on the project of Franzen’s writing: “It had always been a prejudice of mine that putting a novel’s characters in a dynamic social setting enriched the story that was being told; that the glory of the genre consisted in its spanning of the expanse between private experience and public context.”
Subatomic particles that appear to defy the Standard Model point to undiscovered forces
Hannah Osborne in Yahoo! News:
Subatomic particles have been found that appear to defy the Standard Model of particle physics. The team working at Cern's Large Hadron Collider have found evidence of leptons decaying at different rates, which could possibly point to some undiscovered forces.
Publishing their findings in the journal Physical Review Letters, the team from the University of Maryland had been searching for conditions and behaviours that do not fit with the Standard Model. The model explains most known behaviours and interactions of fundamental subatomic particles, but it is incomplete – for example it does not adequately explain gravity, dark matter and neutrino masses.
Researchers say the discovery of the non-conforming leptons could provide a big lead in the search for non-standard phenomena. The Standard Model concept of lepton universality assumes leptons are treated equally by fundamental forces.
They looked at B meson decays involving two types of leptons – the tau lepton and the muon, both of which are highly unstable and decay within just a fraction of a second. Decays producing the tau lepton and those producing the muon should occur at the same rate once mass differences are corrected for. But the researchers found small but significant deviations from the predicted rates of decay.
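To make the comparison concrete: tests of lepton universality of this kind are usually reported as a ratio of branching fractions, so that most experimental and theoretical uncertainties cancel. As a rough sketch (the specific decay channels below are the standard choice in such B meson analyses, and are my assumption rather than something the article spells out):

R(D^{*}) = \mathcal{B}(\bar{B}^{0} \to D^{*+} \tau^{-} \bar{\nu}_{\tau}) \,/\, \mathcal{B}(\bar{B}^{0} \to D^{*+} \mu^{-} \bar{\nu}_{\mu})

If the weak force treats all charged leptons identically, the Standard Model fixes this ratio at roughly 0.25 once the large tau–muon mass difference is folded in. A measured value sitting significantly away from that prediction is exactly the kind of deviation described above, and would hint at a new interaction that couples differently to the tau.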
America’s Self-Inflicted Wound in Syria
Frederic C. Hof in Foreign Policy:
On Aug. 16, Syrian regime aircraft bombed a vegetable market in the rebel-held Damascus suburb of Douma, slaughtering over 100 Syrian civilians and wounding some 300 more. Many of the victims were children; it was one of the deadliest airstrikes of a brutal war. This is far from the first regime-committed atrocity in a Damascus suburb: Exactly two years ago today, Bashar al-Assad’s forces launched a chemical weapons attack in Ghouta, which killed hundreds. In the case of the Douma attack, President Barack Obama’s administration reacted with its usual pantomime of outrage: strong verbal condemnation, condolences for the families of victims, and a plea that the international community “do more to enable a genuine political transition in Syria.”
A genuine political transition in Syria, however, is not right around the corner. Yet every airstrike by President Bashar al-Assad’s regime is fueling radicalization in the Syrian here and now. The only clear winner in the Douma abomination was the pseudo “caliph” of the so-called Islamic State, Abu Bakr al-Baghdadi, a hardened criminal who recruits followers courtesy of the Iranian-sponsored Assad regime’s atrocities and Western complacency. Iran and Assad know exactly what they are doing by bolstering this evil. The West, meanwhile, is complacently unresponsive.
Oliver Sacks, RIP
Oliver Sacks has died. As my friend John Ballard has said, "He taught us how to live and die gracefully." John also sent me this article by Sacks which appeared in the New York Times a couple of weeks ago:
In December 2014, I completed my memoir, “On the Move,” and gave the manuscript to my publisher, not dreaming that days later I would learn I had metastatic cancer, coming from the melanoma I had in my eye nine years earlier. I am glad I was able to complete my memoir without knowing this, and that I had been able, for the first time in my life, to make a full and frank declaration of my sexuality, facing the world openly, with no more guilty secrets locked up inside me.
In February, I felt I had to be equally open about my cancer — and facing death. I was, in fact, in the hospital when my essay on this, “My Own Life,” was published in this newspaper. In July I wrote another piece for the paper, “My Periodic Table,” in which the physical cosmos, and the elements I loved, took on lives of their own.
And now, weak, short of breath, my once-firm muscles melted away by cancer, I find my thoughts, increasingly, not on the supernatural or spiritual, but on what is meant by living a good and worthwhile life — achieving a sense of peace within oneself. I find my thoughts drifting to the Sabbath, the day of rest, the seventh day of the week, and perhaps the seventh day of one’s life as well, when one can feel that one’s work is done, and one may, in good conscience, rest.
Cinema! Cinema! Part 1 - La Nouvelle Vague
Siouxsie & The Banshees - Rhapsody
Doudou N’diaye Rose (1928 - 2015)
The Psychotropic Internet GIFs of Peekasso
Put the internet, vintage TV, and C-SPAN's funniest home videos into a blender, and what you pour out might look something like the GIF art of German-American artist Peekasso. A quick glance at his Tumblr melts eyes with an avalanche of strobing fluorescent colors, heavily Photoshopped cultural icons, and ideological statements that range from the subtle and thought-provoking, to the politically incorrect, over-the-top, and unabashedly honest.
Peekasso, whose given name is Peter Stemmler, immigrated to the United States in 1997, and started the successful illustration company Quickhoney with artist Nana Rausch three years later. In 2007, he began putting personal projects on the Peekasso Tumblr, filling it with stylized memes of Spock, Mr. T, and then-Senator Obama. In 2011 he began experimenting with GIFs, "out of boredom," he tells The Creators Project. Here's his very first one. His frenetic GIF art style has developed over the last five years, through hundreds of graphic experiments mixing corporate and political branding, pornography, and nostalgia into a miasma of inside jokes and discomfort that reflects the miasma of online culture. "I like to see myself changing," he says. "I don't mind my old work, but now I'm faster, more secure in my decisions, and more political."
Brief Candle in the Dark: My Life in Science by Richard Dawkins
Steven Shapin in The Guardian:
Richard Dawkins has had a wonderful life. He’s been happy in his scientific work on evolution, blessed (if that’s a permissible word) by smooth good looks and contented in his (third) marriage. He’s been given joy by his collaborators and colleagues and taken pleasure in poetry and music, even religious music. He’s collected bouquets of honorary degrees, including one from Valencia, which, he tells us, gave special delight because it came with a “tasselled lampshade” cap, and he has both an asteroid and a genus of fish named after him. Oxford college life has been sweet, and he’s been fulfilled by his role as public intellectual, defender of scientific reason, secular saint and hammer of the godly, switching from the zoology department in 1995 to a new endowed chair which allowed him to work full-time on “the public understanding of science”. His books – from The Selfish Gene (1976), River Out of Eden (1995) and The God Delusion (2006) to the first volume of his autobiography An Appetite for Wonder (2013) – have been successful, well-received, and, as Dawkins proudly notes, are all still in print. They have sold extraordinarily well – more than 3m copies of The God Delusion alone – making their author comfortably off as well as famous. According to the notions he coined, both selfish genes and memes want to make lots of copies of themselves, but there must be some genes or memes that haven’t been as successful as Dawkins himself.
Where once the humanists and philosophers were cocks of the cultural walk, now Dawkins can claim without argument that there are “deep philosophical questions that only science can answer”. There are no mysteries, just as-yet-unsolved scientific problems: “Life is just bytes and bytes and bytes of digital information.” The culture wars are over; science has won and Dawkins is confident that he has played a non-trivial role in that victory. Surveying the enormous change in the public prestige of science since CP Snow’s The Two Cultures (1959), he takes satisfaction that his books have been “among those that changed the cultural landscape”. Snow complained that, for some unfathomable reason, scientists were not counted as “intellectuals”. That has all changed. In 2013, readers of Prospect magazine voted Dawkins the world’s “top thinker”.
Saturday, August 29, 2015
Too Much Information
Elena Fagotto and Archon Fung in Boston Review:
Americans eat out more than ever before, and their waistlines are showing it. Restaurant foods pack more calories than most patrons imagine—a single entrée or shake can contain as many as 2,000 calories—contributing to the epidemic of obesity, which affects a third of the adult population. Will information help Americans to take better care of themselves? We will soon find out. Due to new regulations, by the end of 2015, calorie counts will appear on the menus and menu boards of large restaurant chains, grocery stores, and even movie theaters.
The calorie-disclosure rule is just one of the recent attempts at legislating transparency in the hope of changing behavior without resorting to more invasive and politically difficult regulatory approaches such as banning products or setting specific product standards. For instance, police departments in Seattle, Phoenix, and Albuquerque have deployed body cameras to reduce police violence, and in December of last year, the White House called for funding to purchase an additional 50,000. Faith in cameras seems well placed: after Rialto, California, adopted cameras, the use of force by police officers dropped by almost 60 percent and complaints declined by almost 90 percent. Transparency has also been used to inspire resource conservation. U.S. utility companies have found that by sending customers information about how their energy usage compares to their neighbors’, they can induce those customers to cut down. In another example, the incidence of food-borne illnesses decreased in Los Angeles after local laws began requiring restaurants to post cleanliness scores they received from hygiene inspections. And thanks to other disclosure requirements, you can learn about school performance, local water quality, crime levels on university campuses, and vehicle safety. The Supreme Court and many others have looked to disclosure as a bulwark against the corrosive effect of money on our democratic political institutions. The applications of transparency seem boundless, its promise to empower consumers and citizens and to discipline corporations and governments considerable.
But more information does not always make things better. Where there is a glut of information, it is often ignored. Worse still, it can be misused and cause harm.
The Movies of My Youth
Italo Calvino in The New York Review of Books:
I went to the cinema in the afternoon, secretly fleeing from home, or using study with a classmate as an excuse, because my parents left me very little freedom during the months when school was in session. The urge to hide inside the cinema as soon as it opened at two in the afternoon was the proof of true passion. Attending the first screening had a number of advantages: the half-empty theater (it was like I had it all to myself) would allow me to stretch out in the middle of the third row with my legs on the back of the seat in front of me; the hope of returning home without anyone finding out about my escape, in order to receive permission to go out once again later on (and maybe see another film); a light daze for the rest of the afternoon, detrimental to studying but advantageous for daydreaming. And in addition to these explanations that were unmentionable for various reasons, there was another more serious one: entering right when it opened guaranteed the rare privilege of seeing the movie from the beginning and not from a random moment toward the middle or the end, because that was what usually happened when I got to the cinema later in the afternoon or toward the evening.
Entering after the film had already started was a barbarous habit that Italian spectators made widespread, and it persists today. We can say that back then we already anticipated the most sophisticated of modern narrative techniques, interrupting the temporal thread of the story and transforming it into a puzzle to put back together piece by piece or to accept in the form of a fragmentary body. As a further consolation, I’ll say that attending the beginning of the film after knowing the ending provided additional satisfaction: discovering not the unraveling of mysteries and dramas, but their genesis; and a vague sense of foresight with respect to the characters. Vague: just as soothsayers’ visions must be, because the reconstruction of the broken plot wasn’t always easy, especially if it was a detective movie, where identifying the murderer first and the crime afterward left an even darker area of mystery in between. What’s more, sometimes a part was still missing between the beginning and the end, because suddenly, checking my watch, I’d realize I was running late; if I wanted to avoid my family’s wrath I had to leave before the scene that was playing when I entered came back on.
Kieran Healy over at his website:
Abstract: Seriously, fuck it.
As alleged virtues go, nuance is superficially attractive. Isn’t the mark of a good thinker the ability to see subtle differences in kind or gracefully shade the meaning of terms? Shouldn’t we cultivate the ability to insinuate overtones of meaning in our concepts? Further, isn’t nuance especially appropriate to the difficult problems we study? I am sure that, like mine, your research problems are complex, rich, and multi-faceted. (Why would you study them if they were simple, thin, and one-dimensional?) When faced with problems like that, a cultivated capacity for nuance might seem to reflect both the difficulty of the topic and the sophistication of the researcher approaching it. I am sure that, like me, you are a sophisticated thinker. When sophisticated people like us face this rich and complex world, how can nuance not be the wisest approach?
It would be foolish, not to say barely comprehensible, for me to try to argue against the idea of nuance in general. That would be like arguing against the idea of yellow, or the concept of ostriches. It does not make much sense, in any case, to think of nuance as something that has a distinctive role all of its own in theory, or as something that we can add to or take away from theory just as we please. That is a bit like the author whom Mary McCarthy described busily revising a short story in order to “put in the symbols” (Goodman 1978, 58). What I will call “Actually-Existing Nuance” in sociological theory refers to a common and specific phenomenon, one most everyone working in Sociology has witnessed, fallen victim to, or perpetrated at some time. It is the act of making—or the call to make—some bit of theory “richer” or “more sophisticated” by adding complexity to it, usually by way of some additional dimension, level, or aspect, but in the absence of any strong means of disciplining or specifying the relationship between the new elements and the existing ones. Theorists do this to themselves and demand it of others. It is typically a holding maneuver. It is what you do when faced with a question that you do not yet have a compelling or interesting answer to. Thinking up compelling or interesting ideas is quite difficult, and so often it is easier to embrace complexity than cut through it.
Kate Darling: Robots, Humans, and Artificial Intelligence
‘The End of Tsarist Russia,’ by Dominic Lieven
World War I was the greatest empire slayer of all time. Down went the Ottoman Empire, ruling from Bosnia to Basra. Hapsburg shrank into tiny Austria. Germany and Russia remained largely intact, but Wilhelm II ended up in exile, while the Romanovs were murdered by the Bolsheviks. Exit sultans and kaisers; enter authoritarians and totalitarians.
The irony can’t be topped. All four dynastic regimes went to war for the usual reasons: security, power and possession — as did democratic France, Britain and the United States. But beset by indomitable nationality and class conflicts, they also fought for sheer regime survival, following Henry IV’s counsel, in Shakespeare’s words, to “busy giddy minds with foreign quarrels.”
It was a momentous miscalculation that would transform 20th-century history. Had the old despots been gifted with foresight, they would have opted for peace über alles.
This is the takeoff point for Dominic Lieven’s book “The End of Tsarist Russia.” The tomes on the Great War fill a small library by now. Since history is written by the victors, the first batch fingered the German Reich as starring culprit; later works spread out along an explanatory spectrum that ranged from inevitability to contingency.
‘The Invention of Science: A New History of the Scientific Revolution’, by David Wootton
It is almost impossible to overstate the significance of the scientific revolution. As David Wootton’s masterly The Invention of Science shows, it was nothing less than the triumph of the future over the past. Before it, Aristotle had been the leading authority on nature and philosophers had sought above all to recover the lost culture of the ancients. Afterwards, the idea that new knowledge was possible had become axiomatic.
According to Wootton, who is anniversary professor of history at the University of York, modern science was invented between 1572, when the astronomer Tycho Brahe saw a new star in the sky (proof that the heavens could change), and 1704, when Isaac Newton published his book Opticks, which drew conclusions on the nature of light based on experiments. Everything changed within that span: even, Wootton contends, the very language used to understand the world. Indeed, one of the premises of The Invention of Science is that “a revolution in ideas requires a revolution in language”.
Take the word “discovery.” Wootton argues that when Christopher Columbus discovered America in 1492, he didn’t have a word to describe what he had done. The nearest Latin verbs were invenio (find out), which Columbus used, reperio (obtain), which was employed by Johannes Stradanus in the title of his book of engravings depicting the new discoveries, and exploro (explore), which Galileo used to report his sightings of Jupiter’s moons.