Analogia: A Conversation with George Dyson

by Ashutosh Jogalekar

George Dyson is a historian of science and technology who has written books about topics ranging from the building of a native kayak (“Baidarka”) to the building of a spaceship powered by nuclear bombs (“Project Orion”). He is the author of the bestselling books “Turing’s Cathedral” and “Darwin Among the Machines” which explore the multifaceted ramifications of intelligence, both natural and artificial. George is also the son of the late physicist, mathematician and writer Freeman Dyson, a friend whose wisdom and thinking we both miss.

George’s latest book is called “Analogia: The Emergence of Technology Beyond Programmable Human Control”. It is in part a fascinating and wonderfully eclectic foray into the history of diverse technological innovations leading to the promises and perils of AI, from the communications network that allowed the United States Army to gain control over the Apache Indians to the invention of the vacuum tube to the resurrection of analog computing. It is also a deeply personal exploration of George’s own background, in which he lived in a treehouse and gained mastery over the ancient art of Aleut baidarka building. I am very pleased to speak with George about these ruminations. I would highly recommend that readers listen to the entire conversation, but if you want to jump to snippets of specific topics, you can click on the timestamps below, after the video.

7:51 We talk about lost technological knowledge. George makes the point that it’s really the details that matter, and that through the gradual extinction of practitioners and practice we stand in real danger of losing knowledge that can elevate humanity. Whether it’s the art of building native kayaks or building nuclear bombs for peaceful purposes, we need ways to preserve the detailed knowledge behind our technologies.

12:49 Digital versus analog computing. The distinction is fuzzy: As George says, “You can have digital computers made out of wood and you can have analog computers made out of silicon.” We talk about how digital computing became so popular in part because it was so cheap and made so much money. Ironically, we are now witnessing the growth of giant analog network systems built on a digital substrate.

21:22 We talk about Leo Szilard, the pioneering, far-sighted physicist who was the first to think of a nuclear chain reaction, while waiting at a traffic light in London in 1933. Szilard wrote a book titled “The Voice of the Dolphins” which describes a group of dolphins trying to rescue humanity from its own ill-conceived inventions, an oddly appropriate metaphor for our own age. George talks about the formative influence of Trudy Szilard, Leo’s wife, who used to snatch him out of boring school lessons and take him to lunch, where she would have a pink martini and they would talk. Read more »

Are We Asking the Right Questions About Artificial Moral Agency?

by Fabio Tollon

Human beings are agents. I take it that this claim is uncontroversial. Agents are that class of entities capable of performing actions. A rock is not an agent; a dog might be. We are agents in the sense that we can perform actions, not out of necessity, but for reasons. These actions are to be distinguished from mere doings: animals, or perhaps even plants, may behave in this or that way by doing things, but strictly speaking, we do not say that they act.

It is often argued that action should be cashed out in intentional terms. Our beliefs, our desires, and our ability to reason about these are all seemingly essential properties that we might cite when attempting to figure out what makes our kind of agency (and the actions that follow from it) distinct from the rest of the natural world. For a state to be intentional in this sense, it should be about or directed towards something other than itself. For an agent to be a moral agent, it must be able to do wrong, and perhaps be morally responsible for its actions (I will not elaborate on the exact relationship between being a moral agent and moral responsibility, but there is considerable nuance in how exactly these concepts relate to each other).

In the debate surrounding the potential of Artificial Moral Agency (AMA), the “Standard View” presented above is often a point of contention. The ubiquity of artificial systems in our lives can often lead us to believe that these systems are merely passive instruments. However, this is not necessarily the case. It is becoming increasingly clear that intuitively “passive” systems, such as recommender algorithms (or even email filter bots), are very responsive to inputs (often by design). Specifically, such systems respond to certain inputs (user search history, etc.) in order to produce an output (a recommendation, etc.). The question that emerges is whether such kinds of “outputs” might be conceived of as “actions”. Moreover, what if such outputs have moral consequences? Might these artificial systems be considered moral agents? This is not necessarily to claim that recommender systems such as YouTube’s are in fact (moral) agents, but rather to think through whether this might be possible (now or in the future). Read more »

Are we being manipulated by artificially intelligent software agents?

by Michael Klenk

Someone else gets more quality time with your spouse, your kids, and your friends than you do. Like most people, you probably enjoy just about an hour, while your new rivals are taking a whopping 2 hours and 15 minutes each day. But save your jealousy. Your rivals are tremendously charming, and you have probably fallen for them as well.

I am talking about intelligent software agents, a fancy name for something everyone is familiar with: the algorithms that curate your Facebook newsfeed, that recommend the next Netflix film to watch, and that complete your search query on Google or Bing.

Your relationships aren’t any of my business. But I want to warn you. I am concerned that you, together with roughly 3 billion other social media users, are being manipulated by intelligent software agents online.

Here’s how. The intelligent software agents that you interact with online are ‘intelligent agents’ in the sense that they try to predict your behaviour, taking into account what you did in your online past (e.g. what kind of movies you usually watch), and then they structure your options for online behaviour. For example, they offer you a selection of movies to watch next.
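To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python; the function names, the genre-overlap scoring rule, and the example catalogue are my own assumptions, not a description of any real platform’s recommender.

```python
# Toy "intelligent software agent": it predicts engagement from past behaviour
# alone and then structures the user's options. Entirely hypothetical.
from collections import Counter

def recommend(watch_history, catalogue, k=3):
    """Rank candidate items by how closely their genres match what the
    user has already watched, and return the top k as the offered options."""
    genre_counts = Counter(genre for item in watch_history for genre in item["genres"])

    def predicted_engagement(item):
        # Mere behaviour, no reasons: the score is just overlap with past genres.
        return sum(genre_counts[genre] for genre in item["genres"])

    return sorted(catalogue, key=predicted_engagement, reverse=True)[:k]

history = [{"title": "Heist Movie", "genres": ["thriller", "crime"]},
           {"title": "Cop Drama", "genres": ["crime", "drama"]}]
catalogue = [{"title": "Another Heist", "genres": ["thriller", "crime"]},
             {"title": "Nature Documentary", "genres": ["documentary"]},
             {"title": "Courtroom Thriller", "genres": ["thriller", "drama"]}]

print([item["title"] for item in recommend(history, catalogue)])
# -> ['Another Heist', 'Courtroom Thriller', 'Nature Documentary']
```

Nothing in this sketch ever asks why you watched what you watched; it only extrapolates from the fact that you did, which is exactly the limitation discussed next.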

However, they do not care much for your reasons for action. How could they? They analyse and learn from your past behaviour, and mere behaviour does not reveal reasons. So, they likely do not understand what your reasons are and, consequently, cannot care about them.

Instead, they are concerned with maximising engagement, a specific type of behaviour. Intelligent software agents want you to keep interacting with them: to watch another movie, to read another news item, to check another status update. The increase in the time we spend online, especially on social media, suggests that they are getting quite good at this. Read more »

Artificial Stupidity

by Ali Minai

"My colleagues, they study artificial intelligence; me, I study natural stupidity." —Amos Tversky, (quoted in “The Undoing Project” by Michael Lewis).

Not only is this quote by Tversky amusing, it also offers profound insight into the nature of intelligence – real and artificial. Most of us working on artificial intelligence (AI) take it for granted that the goal is to build machines that can reason better, integrate more data, and make more rational decisions. What the work of Daniel Kahneman and Amos Tversky shows is that this is not how people (and other animals) function. If the goal in artificial intelligence is to replicate human capabilities, it may be impossible to build intelligent machines without "natural stupidity". Unfortunately, this is something that the burgeoning field of AI has almost completely lost sight of, with the result that AI is in danger of repeating, in the matter of building intelligent machines, the same mistakes that classical economists made in their understanding of human behavior. If this does not change, homo artificialis may well end up being about as realistic as homo economicus.

The work of Tversky and Kahneman focused on showing systematically that much of intelligence is not rational. People don’t make all decisions and inferences by mathematically or logically correct calculation. Rather, decisions are made based on rules of thumb – or heuristics – driven not by analysis but by values grounded in instinct, intuition and emotion: kludgy shortcuts that are often “wrong” or sub-optimal, but usually “good enough”. The question is why this should be the case, and whether it is a “bug” or a “feature”. As with everything else about living systems, Dobzhansky’s brilliant insight provides the answer: this too makes sense only in the light of evolution.
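To see what a “kludgy but good enough” shortcut looks like next to a fully “rational” calculation, here is a small, hypothetical Python sketch; the payoff numbers and the aspiration threshold are invented for illustration, and the heuristic is Herbert Simon-style satisficing rather than anything specific from Kahneman and Tversky’s experiments.

```python
# Hypothetical contrast: exhaustive "rational" choice vs. a satisficing heuristic.

options = {f"option_{i}": value
           for i, value in enumerate([3, 7, 2, 9, 4, 8, 6, 1, 5, 10])}  # invented payoffs

def exhaustive_choice(options):
    """Evaluate every option and return the mathematical optimum."""
    evaluations = len(options)                    # cost: inspects everything
    return max(options, key=options.get), evaluations

def satisficing_choice(options, aspiration=7):
    """Heuristic: take the first option that is 'good enough' (satisficing)."""
    for evaluations, (name, value) in enumerate(options.items(), start=1):
        if value >= aspiration:
            return name, evaluations              # often sub-optimal, usually adequate
    return max(options, key=options.get), len(options)

print(exhaustive_choice(options))    # ('option_9', 10): best answer, maximum effort
print(satisficing_choice(options))   # ('option_1', 2): "good enough", a fraction of the effort
```

The second procedure is “wrong” by the optimiser’s standard but far cheaper, which is the trade-off the essay goes on to explain in evolutionary terms.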

The field of AI began with the conceit that, ultimately, everything is computation, and that reproducing intelligence – even life itself – was only a matter of finding the “correct” algorithms. As six decades of relative failure have demonstrated, this hypothesis may be true in an abstract formal sense, but it is insufficient to support a practical path to truly general AI. To paraphrase Feynman, Nature’s imagination has turned out to be much greater than that of professors and their graduate students. The antidote to this algorithm-centered view of AI comes from the notion of embodiment, which sees mental phenomena – including intelligence and behavior – as emerging from the physical structures and processes of the animal, much as rotation emerges from a pinwheel when it faces a breeze. From this viewpoint, the algorithms of intelligence are better seen, not as abstract procedures, but as concrete dynamical responses inherent in the way the structures of the organism – from the level of muscles and joints down to molecules – interact with the environment in which they are embedded.

Read more »

Fearing Artificial Intelligence

by Ali Minai

Artificial Intelligence is on everyone's mind. The message from a whole panel of luminaries – Stephen Hawking, Elon Musk, Bill Gates, Apple co-founder Steve Wozniak, Lord Martin Rees, Astronomer Royal of Britain and former President of the Royal Society, and many others – is clear: Be afraid! Be very afraid! To a public already immersed in the culture of Star Wars, Terminator, the Matrix and the Marvel universe, this message might sound less like an expression of possible scientific concern and more like a warning of looming apocalypse. It plays into every stereotype of the mad scientist, the evil corporation, the surveillance state, drone armies, robot overlords and world-controlling computers a la Skynet. Who knows what “they” have been cooking up in their labs? Asimov's three laws of robotics are being discussed in the august pages of Nature, which has also recently published a multi-piece report on machine intelligence. In the same issue, four eminent experts discuss the ethics of AI. Some of this is clearly being driven by reports such as the latest one from Google's DeepMind, claiming that their DQN system has achieved “human-level intelligence”, or that a chatbot called Eugene had “passed the Turing Test”. Another legitimate source of anxiety is the imminent possibility of lethal autonomous weapon systems (LAWS) that will make life-and-death decisions without human intervention. This has led recently to the circulation of an open letter expressing concern about such weapons, signed by hundreds of scientists, engineers and innovators, including Musk, Hawking and Gates. Why is this happening now? What are the factors driving this rather sudden outbreak of anxiety?

Looking at the critics' own pronouncements, there seem to be two distinct levels of concern. The first arises from rapid recent progress in the automation of intelligent tasks, including many involving life-or-death decisions. This issue can be divided further into two sub-problems: The socioeconomic concern that computers will take away all the jobs that humans do, including the ones that require intelligence; and the moral dilemma posed by intelligent machines making life-or-death decisions without human involvement or accountability. These are concerns that must be faced in the relatively near term – over the next decade or two.

The second level of concern that features prominently in the pronouncements of Hawking, Musk, Wozniak, Rees and others is the existential risk that truly intelligent machines will take over the world and destroy or enslave humanity. This threat, for all its dark fascination, is still a distant one, though perhaps not as distant as we might like.

In this article, I will consider these two cases separately.

Read more »

Cerebral Imperialism

The present is where the future comes to die, or more accurately, where an infinite array of possible futures all collapse into one. We live in a present where artificial intelligence hasn't been invented, despite a quarter century of optimistic predictions. John Horgan in Scientific American suggests we're still a long way from developing it (although when it does come, it may well come as a sudden leap into existence, a sudden achievement of critical mass). However and whenever (or if ever) it arrives, it's an idea worth discussing today. But a question: Does this line of research suffer from “cerebral imperialism”?

___________________________________

The idea of “cerebral imperialism” came up in an interview I did for the current issue of Tricycle, a Buddhist magazine, with transhumanist professor and writer James “J” Hughes. One exchange went like this:

Eskow: There seems to be a kind of cognitive imperialism among some Transhumanists that says the intellect alone is “self.” Doesn’t saying “mind” is who we are exclude elements like body, emotion, culture, and our environment? Buddhism and neuroscience both suggest that identity is a process in which many elements co-arise to create the individual experience on a moment-by-moment basis. The Transhumanists seem to say, “I am separate, like a data capsule that can be uploaded or moved here and there.”

Hughes: You’re right. A lot of our Transhumanist subculture comes out of computer science—male computer science—so a lot of them have that traditional “intelligence is everything” view. As soon as you start thinking about the ability to embed a couple of million trillion nanobots in your brain and back up your personality and memory onto a chip, or about advanced artificial intelligence deeply wedded with your own mind, or sharing your thoughts and dreams and feelings with other people, you begin to see the breakdown of the notion of discrete and continuous self.

An intriguing answer – one of many Hughes offers in the interview – but I was going somewhere else: toward the idea that cognition itself, that thing which we consider “mind,” is over-emphasized in our definition of self and therefore is projected onto our efforts to create something we call “artificial intelligence.”

Is the “society of mind” trying to colonize the societies of body and emotion?

Read more »