Responsibility Gaps: A Red Herring?

by Fabio Tollon

What should we do in cases where increasingly sophisticated and potentially autonomous AI-systems perform ‘actions’ that, under normal circumstances, would warrant the ascription of moral responsibility? That is, who (or what) is responsible when, for example, a self-driving car harms a pedestrian? An intuitive answer might be: Well, it is of course the company that created the car that should be held responsible! They built the car, trained the AI-system, and deployed it.

However, this answer is a bit hasty. The worry here is that the autonomous nature of certain AI-systems means that it would be unfair, unjust, or inappropriate to hold the company or any individual engineers or software developers responsible. To go back to the example of the self-driving car: it may be the case that due to the car’s ability to act outside of the control of the original developers, their responsibility would be ‘cancelled’, and it would be inappropriate to hold them responsible.

Moreover, it may be the case that the machine in question is not sufficiently autonomous or agential for it to be responsible itself. This is certainly true of all currently existing AI-systems and may be true far into the future. Thus, we have the emergence of a ‘responsibility gap’: Neither the machine nor the humans who developed it are responsible for some outcome.

In this article I want to offer some brief reflections on the ‘problem’ of responsibility gaps. Read more »

Building a Dyson sphere using ChatGPT

by Ashutosh Jogalekar

Artist’s rendering of a Dyson sphere (Image credit)

In 1960, physicist Freeman Dyson published a paper in the journal Science describing how a technologically advanced civilization would make its presence known. Dyson’s assumption was that whether an advanced civilization signals its intelligence or hides it from us, it would not be able to hide the one thing that’s essential for any civilization to grow – energy. Advanced civilizations would likely try to capture all the energy of their star to grow.

To do this, borrowing an idea from Olaf Stapledon, Dyson imagined the civilization taking apart a number of the planets and other material in its solar system to build a shell that would fully enclose its star, thus capturing far more of the star’s energy than it otherwise could. This energy-capturing sphere would radiate its enormous waste heat out in the infrared spectrum. So one way to find alien civilizations would be to look for signatures of this infrared radiation in space. Since then these giant spheres – later sometimes imagined as distributed panels rather than single continuous shells – that could be constructed by advanced civilizations to capture their star’s energy have become known as Dyson spheres. They have been featured in science fiction books and TV shows, including Star Trek.
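As a quick plausibility check (a back-of-the-envelope sketch of my own, not from Dyson's paper or the article; the solar luminosity and Stefan-Boltzmann constant are standard values, and the 2 AU radius is taken from the experiment described below), the waste-heat signature follows from a simple energy balance:

```python
import math

# Sketch: equilibrium temperature of a shell at 2 AU that absorbs the
# Sun's full output and re-radiates it as a blackbody from its outer surface.
L_SUN = 3.828e26   # solar luminosity, W
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11      # astronomical unit, m
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

r = 2 * AU
area = 4 * math.pi * r**2

# Energy balance: SIGMA * T^4 * area = L_SUN
T = (L_SUN / (SIGMA * area)) ** 0.25
peak_wavelength = WIEN_B / T  # Wien's displacement law

print(f"equilibrium temperature:  {T:.0f} K")                     # ~278 K
print(f"peak emission wavelength: {peak_wavelength*1e6:.0f} um")  # ~10 um
```

A shell in thermal equilibrium at roughly 278 K glows most brightly near 10 μm, squarely in the mid-infrared, which is exactly the signature Dyson proposed searching for.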

I asked the AI engine ChatGPT to build me a hypothetical 2-meter-thick Dyson sphere at a distance of 2 AU (~300 million kilometers). I wanted to see how efficiently ChatGPT harnesses information from the internet to give me specifics and how well its underlying large language model (LLM) understood what I was saying. Read more »
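For a sense of why Stapledon and Dyson had their civilization dismantle planets, here is my own back-of-the-envelope estimate (not ChatGPT's output; the constants are standard values) of the material budget for the sphere just described:

```python
import math

# Material budget for a hypothetical Dyson sphere:
# radius 2 AU (~300 million km), shell thickness 2 m.
AU = 1.496e11            # astronomical unit, m
radius = 2 * AU
thickness = 2.0          # m
EARTH_VOLUME = 1.083e21  # volume of the Earth, m^3

area = 4 * math.pi * radius**2   # surface area of a sphere
volume = area * thickness        # thin shell: volume ~ area * thickness

print(f"shell area:      {area:.2e} m^2")   # ~1.1e24 m^2
print(f"material volume: {volume:.2e} m^3") # ~2.2e24 m^3
print(f"Earth volumes:   {volume / EARTH_VOLUME:.0f}")  # ~2000
```

Even a shell just 2 meters thick at that radius requires on the order of two thousand Earths' worth of material, which is why the thought experiment involves taking whole planets apart.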

Hyperintelligence: Art, AI, and the Limits of Cognition

by Jochen Szangolies

Deep Blue, at the Computer History Museum in California. Image Credit: James the photographer, CC BY 2.0, via Wikimedia Commons

On May 11, 1997, chess computer Deep Blue dealt then-world chess champion Garry Kasparov a decisive defeat, marking the first time a computer system was able to defeat the top human chess player in a tournament setting. Shortly afterwards, with AI chess superiority firmly established, humanity abandoned the game of chess as having now become pointless. Nowadays, with chess engines on regular home PCs easily outsmarting the best humans to ever play the game, chess has been relegated to a mere historical curiosity and obscure benchmark for computational supremacy over feeble human minds.

Except, of course, that’s not what happened. Human interest in chess has not appreciably waned, despite humanity having had to cede the top spot to silicon-based number-crunchers (and the alleged introduction of novel backdoors to cheating). This echoes a pattern clearly visible throughout the history of technological development: faster modes of transportation—by car, or even on horseback—have not eliminated human competitive racing; the fact that great cranes effortlessly raise tonnes of weight does not keep us from competitively lifting mere hundreds of kilos; the invention of photography has not kept humans from drawing realistic likenesses.

Why, then, worry about AI art? What we value, it seems, is not performance as such, but specifically human performance. We are interested in humans racing or playing each other, even in the face of superior non-human agencies. Should we not expect the same pattern to continue: AI creates art equal to or exceeding that of its human progenitors, to nobody’s great interest? Read more »

Acting Machines

by Fabio Tollon

Fritzchens Fritz / Better Images of AI / GPU shot etched 1 / CC-BY 4.0

Machines can do lots of things. Robotic arms can help make our cars, autonomous cars can drive us around, and robotic vacuums can clean our floors. In all of these cases it seems natural to think that these machines are doing something. Of course, a ‘doing’ is a kind of happening: when something is done, usually something happens, namely, an event. Brushing my teeth, going for a walk, and turning on the light are all things that I do, and when I do them, something happens (events). We might think the same thing about robotic arms, autonomous vehicles, and robotic vacuum cleaners. All these systems seem to be doing something, which then leads to an event occurring. However, in the case of humans, we often think of what we do in terms of agency: when we perform an action, things are not just happening (in a passive sense). Rather, we are acting, we are exercising our agency, we are agents. Can machines be agents? Is there something like artificial agency? Well, as with most things in philosophy, it depends.

Agency, in its human form, is usually about our mental states. It therefore seems natural to think that in order for something or other to be an agent, it should at least in principle have something like mental states (in the form of, for example, beliefs and desires). More than this, in order for an action to be properly attributable to an agent we might insist that the action they perform be caused by their mental states. Thus, we might say that for an entity to be considered an agent it should be possible to explain their behaviour by referring to their mental states. Read more »

Clever Cogs: Ants, AI, And The Slippery Idea Of Intelligence

by Jochen Szangolies

Figure 1: The Porphyrian Tree. Detail of a fresco at the Kloster Schussenried. Image credit: modified from Franz Georg Hermann, Public domain, via Wikimedia Commons.

The arbor porphyriana is a scholastic system of classification in which each individual or species is categorized by means of a sequence of differentiations, going from the most general to the most specific. Based on the categories of Aristotle, it was introduced by the 3rd-century CE logician Porphyry and was a huge influence on the development of medieval scholastic logic. Using its system of differentiae, humans may be classified as ‘substance, corporeal, living, sentient, rational’. Here, the lattermost term is the most specific—the most characteristic of the species. Therefore, rationality—intelligence—is the mark of the human.

However, when we encounter ‘intelligence’ in the news these days, chances are that it is used not as a quintessentially human quality, but in the context of computation—reporting on the latest spectacle of artificial intelligence, with GPT-3 writing scholarly articles about itself or DALL·E 2 producing close-to-realistic images from verbal descriptions. While this sort of headline has become familiar lately, a new word has risen to prominence at the top of articles in the relevant publications: the otherwise innocuous modifier ‘general’. Gato, a model developed by DeepMind, is, we’re told, a ‘generalist’ agent, capable of performing more than 600 distinct tasks. Indeed, according to Nando de Freitas, team lead at DeepMind, ‘the game is over’, with merely the question of scale separating current models from truly general intelligence.

There are several interrelated issues emerging from this trend. A minor one is the devaluation of intelligence as the mark of the human: just as Diogenes’ plucked chicken deflates Plato’s ‘featherless biped’, tomorrow’s AI models might force us to rethink our self-image as ‘rational animals’. But then, arguably, Twitter already accomplishes that.

Slightly more worrying is a cognitive bias in which we take the lower branches of Porphyry’s tree to entail the higher ones. Read more »

Does AI Need Free Will to be held Responsible?

by Fabio Tollon

We have always been a technological species. From the use of basic tools to advanced new forms of social media, we are creatures who do not just live in the world but actively seek to change it. However, we now live in a time where many believe that modern technology, especially advances driven by artificial intelligence (AI), will come to challenge our responsibility practices. Digital nudges can remind you of your mother’s birthday, ToneCheck can make sure you only write nice emails to your boss, and your smart fridge can tell you when you’ve run out of milk. The point is that our lives have always been enmeshed with technology, but our current predicament seems categorically different from anything that has come before. The technologies at our disposal today are not merely tools to various ends, but rather come to bear on our characters by importantly influencing many of our morally laden decisions and actions.

One way in which this might happen is when sufficiently autonomous technology “acts” in such a way as to challenge our usual practices of ascribing responsibility. When an AI system performs an action that results in some event that has moral significance (and where we would normally deem it appropriate to attribute moral responsibility to human agents), it seems natural that people would still have emotional responses in these situations. This is especially true if the AI is perceived as having agential characteristics. If a self-driving car harms a human being, it would be quite natural for bystanders to feel anger at the cause of the harm. However, it seems incoherent to feel angry at a chunk of metal, no matter how autonomous it might be.

Thus, we seem to have two questions here: the first is whether our responses are fitting, given the situation. The second is an empirical question of whether in fact people will behave in this way when confronted with such autonomous systems. Naturally, as a philosopher, I will try not to speculate too much with respect to the second question, and thus what I say here is mostly concerned with the first. Read more »

Irrationality, Artificial Intelligence, and the Climate Crisis

by Fabio Tollon

Human beings are rather silly creatures. Some of us cheer billionaires into space while our planet burns. Some of us think vaccines cause autism, that the earth is flat, that anthropogenic climate change is not real, that COVID-19 is a hoax, and that diamonds have intrinsic value. Many of us believe things that are not fully justified, and we continue to believe these things even in the face of new evidence that goes against our position. This is to say, many people are woefully irrational. However, what makes this state of affairs perhaps even more depressing is that even if you think you are a reasonably well-informed person, you are still far from being fully rational. Decades of research in social psychology and behavioural economics have shown that not only are we horrific decision makers, we are also consistently horrific. This makes sense: we all have fairly similar ‘hardware’ (in the form of brains, guts, and butts) and thus it follows that there would be widely shared inconsistencies in our reasoning abilities.

This is all to say, in a very roundabout way, that we get things wrong. We elect the wrong leaders, we believe the wrong theories, and we act in the wrong ways. All of this becomes especially disastrous in the case of climate change. But what if there was a way to escape this tragic epistemic situation? What if, with the use of an AI-powered surveillance state, we could simply make it impossible for us to do the ‘wrong’ things? As Ivan Karamazov notes in the tale of The Grand Inquisitor (in The Brothers Karamazov by Dostoevsky), the Catholic Church should be praised because it has “vanquished freedom… to make men happy”. By doing so it has “satisfied the universal and everlasting craving of humanity – to find someone to worship”. Human beings are incapable of managing their own freedom. We crave someone else to tell us what to do, and, so the argument goes, it would be in our best interest to have an authority (such as the Catholic Church, as in the original story) with absolute power ruling over us. This, however, contrasts sharply with liberal-democratic norms. My goal is to show that we can address the issues raised by climate change without reinventing the liberal-democratic wheel. That is, we can avoid the kind of authoritarianism dreamed up by Ivan Karamazov. Read more »

How Can We Be Responsible For the Future of AI?

by Fabio Tollon 

Are we responsible for the future? In some very basic sense of responsibility we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not always enough to establish whether or not we have certain obligations towards the future. Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (us) will ultimately lead to future generations having to suffer. An important question to consider is whether we ought to bear some moral responsibility for future states of affairs (known as forward-looking, or prospective, responsibility). In the case of climate change, it does seem as though we have a moral obligation to do something, and that should we fail, we are on the hook. One significant reason for this is that we can foresee that our actions (or inactions) now will lead to certain desirable or undesirable consequences. When we try to apply this way of thinking about prospective responsibility to AI, however, we might run into some trouble.

AI-driven systems are often by their very nature unpredictable, meaning that engineers and designers cannot reliably foresee what might occur once the system is deployed. Consider the case of machine learning systems which discover novel correlations in data. In such cases, the programmers cannot predict what results the system will spit out. The entire purpose of using the system is so that it can uncover correlations that are in some cases impossible to see with only human cognitive powers. Thus, the threat seems to come from the fact that we lack a reliable way to anticipate the consequences of AI, which perhaps makes being responsible for it, in a forward-looking sense, impossible.
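To make this concrete, consider a toy sketch (my own illustration on synthetic data, not an example from any particular deployed system): the programmer fully specifies the correlation-mining procedure, yet which relationships it reports depends entirely on the data it happens to see.

```python
import numpy as np

# The programmer writes the procedure; the data decide the output.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 50))  # 1000 samples, 50 features
X[:, 7] = 0.8 * X[:, 3] + 0.2 * rng.normal(size=1000)  # a hidden link

corr = np.corrcoef(X, rowvar=False)  # 50x50 feature correlation matrix
np.fill_diagonal(corr, 0.0)          # ignore trivial self-correlations

# The strongest pair is discovered, not programmed in advance.
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest correlation: features {i} and {j} (r = {corr[i, j]:.2f})")
```

The mining step never mentions features 3 and 7; it surfaces whatever structure the data contain, which is precisely why such a system's outputs resist prediction in advance.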

Essentially, the innovative and experimental nature of AI research and development may undermine the relevant control required for reasonable ascriptions of forward-looking responsibility. However, as I hope to show, when we reflect on technological assessment more generally, we may come to see that just because we cannot predict future consequences does not necessarily mean there is a “gap” in forward-looking obligation. Read more »

The ethics of regulating AI: When too much may be bad

by Ashutosh Jogalekar

‘Areopagitica’ was a famous speech addressed by the poet John Milton to the English Parliament in 1644, arguing for the unlicensed printing of books. It is one of the most famous arguments in favor of freedom of expression. Milton was arguing against a parliamentary ordinance requiring authors to get a license for their works before they could be published. Writing at the height of the English Civil War, Milton was well aware of the power of words to inspire as well as incite. He said,

For books are not absolutely dead things, but do preserve as in a vial the purest efficacy and extraction of that living intellect that bred them. I know they are as lively, and as vigorously productive, as those fabulous Dragon’s teeth; and being sown up and down, may chance to spring up armed men…

What Milton was saying is not that books and words can never incite, but that it would be folly to restrict or ban them before they have been published. This argument against restraint prior to publication found its way into the United States Constitution and has been a pillar of freedom of expression and of the press ever since.

Why was Milton opposed to pre-publication restrictions on books? Not just because he saw it as a matter of personal liberty, but because he realized that restricting a book’s contents means restricting the very power of the human mind to come up with new ideas. He powerfully reminded Parliament,

Who kills a man kills a reasonable creature, God’s image; but he who destroys a good book, kills reason itself, kills the image of God, as it were, in the eye. Many a man lives a burden to the earth; but a good book is the precious lifeblood of a master spirit, embalmed and treasured up on purpose to a life beyond life.

Milton saw quite clearly that the problem with limiting publication is in significant part a problem with trying to figure out all the places a book can go. The same problem arises with science. Read more »

The Lobster and the Octopus: Thinking, Rigid and Fluid

by Jochen Szangolies

Fig. 1: The lobster exhibiting its signature move, grasping and cracking the shell of a mussel. Still taken from this video.

Consider the lobster. Rigidly separated from the environment by its shell, the lobster’s world is cleanly divided into ‘self’ and ‘other’, ‘subject’ and ‘object’. One may suspect that it can’t help but conceive of itself as separated from the world, looking at it through its bulbous eyes, probing it with antennae. The outside world impinges on its carapace, like waves breaking against the shore, leaving it to experience only the echo within.

Its signature move is grasping. With its pincers, it is perfectly equipped to take hold of the objects of the world, engage with them, manipulate them, take them apart. Hence, the world must appear to it as a series of discrete, well-separated individual elements—among which is that special object, its body, housing the nuclear ‘I’ within. The lobster embodies the primal scientific impulse of cracking open the world to see what it is made of, an impulse that has found its greatest expression in modern-day particle colliders. Consequently, its thought (we may imagine) must be supremely analytical—analysis in the original sense being nothing but the resolution of complex entities into simple constituents.

The lobster, then, is the epitome of the Cartesian, detached, rational self: an island of subjectivity among the waves, engaging with the outside by means of grasping, manipulating, taking apart—analyzing, and perhaps synthesizing the analyzed into new concepts, new creations. It is forever separated from the things themselves, only subject to their effects as they intrude upon its unyielding boundary. Read more »

An Electric Conversation with Hollis Robbins on the Black Sonnet Tradition, Progress, and AI, with Guest Appearances by Marcus Christian and GPT-3

by Bill Benzon

I was hanging out on Twitter the other day, discussing my previous 3QD piece (about Progress Studies) with Hollis Robbins, Dean of Arts and Humanities at Cal State Sonoma. We were breezing along at 280 characters per message unit when, Wham! right out of the blue the inspiration hit me: How about an interview?

Thus I have the pleasure of bringing another Johns Hopkins graduate into orbit around 3QD. Hollis graduated in ’83; Michael Liss, right around the corner, in ’77; and Abbas Raza, our editor, in ’85; I’m class of ’69. Hollis and I both studied with and were influenced by the late Dick Macksey, a humanist polymath at Hopkins with a fabulous rare book collection. I know Michael took a course with Macksey too; Abbas, alas, missed out, but he met Hugh Kenner, who was his girlfriend’s advisor.

Robbins has also been Director of the Africana Studies program at Hopkins and chaired the Department of Humanities at the Peabody Institute. Peabody was an independent school when I took trumpet lessons from Harold Rehrig back in the early 1970s. It started dating Hopkins in 1978 and they got hitched in 1985.

And – you see – another connection. Robbins’ father played trumpet in the jazz band at Rensselaer Polytechnic Institute in the 1950s. A quarter of a century later I was on the faculty there and ventured into the jazz band, which was student run.

It’s fate I call it, destiny, kismet. [Social networks, fool!]

Robbins has published this and that all over the place, including her own poetry, and she’s worked with Henry Louis “Skip” Gates, Jr. to give us The Annotated Uncle Tom’s Cabin (2006). Not only was Uncle Tom’s Cabin a best seller in its day (mid-19th century), but an enormous swath of popular culture rests on its foundations. If you haven’t yet done so, read it.

She’s here to talk about her most recent book, just out: Forms of Contention: Influence and the African American Sonnet Tradition. Read more »

Context Collapse: A Conversation with Ryan Ruby

by Andrea Scrima

Ryan Ruby is a novelist, translator, critic, and poet who lives, as I do, in Berlin. Back in the summer of 2018, I attended an event at TOP, an art space in Neukölln, where along with journalist Ben Mauk and translator Anne Posten, his colleagues at the Berlin Writers’ Workshop, he was reading from work in progress. Ryan read from a project he called Context Collapse, which, if I remember correctly, he described as a “poem containing the history of poetry.” But to my ears, it sounded more like an academic paper than a poem, with jargon imported from disciplines such as media theory, economics, and literary criticism. It even contained statistics, citations from previous scholarship, and explanatory footnotes, written in blank verse, which were printed out, shuffled up, and distributed to the audience. Throughout the reading, Ryan would hold up a number on a sheet of paper corresponding to the footnote in the text, and a voice from the audience would read it aloud, creating a spatialized, polyvocal sonic environment as well as, to be perfectly honest, a feeling of information overload. Later, I asked him to send me the excerpt, so I could delve deeper into what he had written at a slower pace than readings typically afford—and I’ve been looking forward to seeing the finished project ever since. And now that it’s done, I am publishing the first suite of excerpts from Context Collapse at Statorec, where I am editor-in-chief.

Andrea Scrima: Ryan, I wonder if it wouldn’t be a good idea to start with a little context. Tell us about the overall sweep of your poem, and how, since you mainly work in prose, you began writing it.

Ryan Ruby: Thank you for this very kind introduction, Andrea! That was a particularly memorable evening for me too, as my partner was nine months pregnant at the time, and I was worried that we’d have to rush to the hospital in the middle of the reading. But you remember quite well: a poem containing the history of poetry, with a tip of the hat to Ezra Pound, of course, who described The Cantos as “a poem containing history.” Read more »

We Have To Talk

by Thomas O’Dwyer

Henri Matisse created many paintings titled ‘The Conversation’. This, from 1912, is of the artist with his wife, Amélie. [Hermitage Museum, St. Petersburg, Russia].
Alice’s Adventures in Wonderland is not so much a book of fantastic adventures as a book of conversations (and pictures). It’s right there, in the first paragraph: “What is the use of a book,” thought Alice, “without pictures or conversations?” Lewis Carroll and his illustrator John Tenniel delivered just that, a magical masterpiece of conversations and images. A contemporary reviewer said it would “belong to all the generations to come until the language becomes obsolete.” Six generations later, the language shows no sign of obsolescence, but the same cannot be said of conversations if the great oracle at Google is correct. One million hits for “the death of conversation,” it proclaims, listing a gloomy parade of studies and essays stretching back many years.

“Every visit to California convinces me that the digital revolution is over, by which I mean it is won. Everyone is connected. The New York Times has declared the death of conversation,” Simon Jenkins grumbled in The Guardian seven years ago. Is it true, and if it is, who cares? That sounds like the start of an interesting discussion. Is daily conversation of any value, and if it fades away, who’s to say the time saved can’t be better used? Robert Frost thought that “half the world is people who have something to say and can’t, and the other half who have nothing to say and keep on saying it.” Read more »

Cerebral Imperialism

The present is where the future comes to die, or more accurately, where an infinite array of possible futures all collapse into one. We live in a present where artificial intelligence hasn't been invented, despite a quarter century of optimistic predictions. John Horgan in Scientific American suggests we're still a long way from developing it (although when it does come, it may well arrive as a sudden leap into existence, a sudden achievement of critical mass). However and whenever (or if ever) it arrives, it's an idea worth discussing today. But, a question: Does this line of research suffer from “cerebral imperialism”?

___________________________________

The idea of “cerebral imperialism” came up in an interview I did for the current issue of Tricycle, a Buddhist magazine, with transhumanist professor and writer James “J” Hughes. One exchange went like this:

Eskow: There seems to be a kind of cognitive imperialism among some Transhumanists that says the intellect alone is “self.” Doesn’t saying “mind” is who we are exclude elements like body, emotion, culture, and our environment? Buddhism and neuroscience both suggest that identity is a process in which many elements co-arise to create the individual experience on a moment-by-moment basis. The Transhumanists seem to say, “I am separate, like a data capsule that can be uploaded or moved here and there.”

Hughes: You’re right. A lot of our Transhumanist subculture comes out of computer science—male computer science—so a lot of them have that traditional “intelligence is everything” view. As soon as you start thinking about the ability to embed a couple of million trillion nanobots in your brain and back up your personality and memory onto a chip, or about advanced artificial intelligence deeply wedded with your own mind, or sharing your thoughts and dreams and feelings with other people, you begin to see the breakdown of the notion of discrete and continuous self.

An intriguing answer – one of many Hughes offers in the interview – but I was going somewhere else: toward the idea that cognition itself, that thing which we consider “mind,” is over-emphasized in our definition of self and therefore is projected onto our efforts to create something we call “artificial intelligence.”

Is the “society of mind” trying to colonize the societies of body and emotion?

Read more »