TALKING TO NICK BOSTROM

Andy Fitch at the Los Angeles Review of Books:

ANDY FITCH: As we begin to outline Superintelligence’s broader arguments, could you also discuss its dexterous efforts at combining a call to public alarm and a proactive, context-shaping, transdisciplinary (philosophical, scientific, policy-oriented) blueprint for calm, clear, perspicacious decision-making at the highest levels? What types of anticipated and/or desired responses, from which types of readers, shaped your rhetorical calculus for this book?

NICK BOSTROM: I guess the answer is somewhat complex. There was a several-fold objective. One objective was to bring more attention to bear on the idea that if AI research were to succeed in its original ambition, this would be arguably the most important event in all of human history, and could be associated with an existential risk that should be taken seriously.

Another goal was to try to make some progress on this problem, such that after this progress had been made, people could see more easily specific research projects to pursue. It’s one thing to think, “If machines become superintelligent, they could be very powerful, they could be risky.” But where do you go from there? How do you actually start to make progress on the control problem? How could you produce academic research on this topic? So to begin to break down this big problem into smaller problems, to develop the concepts that you need in order to start thinking about this, to do some of that intellectual groundwork was the second objective.

The third objective was just to fill in the picture in general for people who want to have more realistic views about what the future of humanity might look like, so that we can, perhaps, prioritize more wisely the scarce amount of research and attention that focuses on securing our long-term global future.

More here.