THE FERMI PARADOX, MASS EFFECT, AND TRANSHUMANISM

by Charlie Huenemann

The Fermi Paradox

The story is that sometime in the early 1950s, four physicists were walking to lunch and discussing flying saucers. The place was Los Alamos, and the lunch group included Enrico Fermi, Edward Teller, Emil Konopinski, and Herbert York. None of them believed in flying saucers, of course, but – and this is just the way such conversations go – the discussion turned to the possibility of faster-than-light space travel and the probability of life cropping up elsewhere in the galaxy. Fermi had a hunch that life shouldn’t be all that rare – it should be common, really – and that there was at least a ten percent “miracle chance” that superluminal travel should prove possible. This led him to raise an exasperated question that drew laughter from the others: “Where is everybody?”

Thus the Fermi paradox: in all this space, and all this time, there should be plenty of advanced alien civilizations – but we haven’t heard from any of them. How come?

The most conservative resolution of the paradox is to claim that the universe is in fact SO very big and SO very old that not only has intelligent life evolved all over the place, but the spaces and times separating them from one another are SO very vast that they can never be crossed. It would be like two children in Cuba and China releasing their balloons at the same time and expecting them to bump into each other.

But there are other possible and more tantalizing resolutions to the paradox. Maybe the aliens have checked us out already and decided to put us in galactic time-out; maybe they already walk among us; maybe tomorrow we will indeed make contact; maybe alien governments always decide to cut funding for alien NASA programs; maybe in fact we live in an alien-created virtual reality – and so on, down the long line of fantastic sci-fi literature. But I would like to focus on one resolution that, whether likely or not, raises in my mind some interesting philosophical questions. Maybe, by the time a civilization reaches the point at which it can reach out to other planets, it has also developed super-intelligent machines, and that is when all hell breaks loose.

This is called the “technological singularity” response to the Fermi paradox, and it has been on the mind of Elon Musk. Musk, of course, is the visionary tech billionaire behind Tesla Motors and SpaceX. In a recent interview with Ross Andersen published on Aeon, Musk revealed that he’s worried about Earth’s ability to continue to support us – so he’s gearing up to colonize Mars. And he takes the Fermi paradox very seriously:

‘At our current rate of technological growth, humanity is on a path to be godlike in its capabilities,’ Musk told me. ‘You could bicycle to Alpha Centauri in a few hundred thousand years, and that’s nothing on an evolutionary scale. If an advanced civilisation existed at any place in this galaxy, at any point in the past 13.8 billion years, why isn’t it everywhere? Even if it moved slowly, it would only need something like .01 per cent of the Universe’s lifespan to be everywhere. So why isn’t it?’
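Musk’s percentage is easy to sanity-check. Here is a minimal back-of-envelope sketch in Python; the galaxy’s diameter and the assumed expansion speed of a tenth of light speed are my own illustrative inputs, not figures from the interview:

```python
# Back-of-envelope check of the timescales in Musk's quote. The assumed
# expansion speed (10% of light speed) is my illustration, not his figure.

GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way, in light years
UNIVERSE_AGE_YEARS = 13.8e9   # age of the universe, per the quote

expansion_speed_c = 0.10      # colonization wavefront speed, as a fraction of c
crossing_time_years = GALAXY_DIAMETER_LY / expansion_speed_c

fraction_of_age = crossing_time_years / UNIVERSE_AGE_YEARS
print(f"Time to span the galaxy: {crossing_time_years:,.0f} years")
print(f"As a fraction of the universe's age: {fraction_of_age:.4%}")
# -> 1,000,000 years, about 0.007% -- the same order as Musk's '.01 per cent'
```

Even at a leisurely tenth of light speed, a colonizing civilization spans the galaxy in about a million years – a rounding error on cosmic timescales.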

As his bicycling comment makes clear, Musk isn’t buying the conservative resolution: he thinks the probability of space-exploring alien civilizations arising is high enough to overcome the obstacle of the universe being a very big place. He worries there could be a more sinister explanation: “If you look at our current technology level, something strange has to happen to civilisations, and I mean strange in a bad way. And it could be that there are a whole lot of dead, one-planet civilisations.” The strange and bad thing that happens is artificial intelligence, which on another occasion Musk identified as humanity’s biggest existential threat. His involvement in Google’s DeepMind project – and those names alone, both “Google” and “DeepMind”, let alone in conjunction with one another, should be enough to cause a shiver of fear – is motivated mainly by his concern to “keep an eye on” what is being developed. Our biggest and smartest tech company ever, it would seem, has Pandora’s box as an R&D project.

As I say, I’m only interested in exploring this idea – whether Musk is right or not in all this is a question for people whose prophetic powers exceed my own. And I find that the idea is imaginatively explored in the video game Mass Effect. I’m now going to turn to this game, and so SPOILER ALERT: I’m going to give away the ending.

Mass Effect

Mass Effect is a role-playing game in which you play the role of John or Jane Shepard, a no-nonsense, do-whatever-it-takes space marine who leads the fight against the Reapers, a race of super-advanced bug-like entities that seem to regard it as their duty to periodically wipe out civilizations. The game is rich in textures, with a surprisingly extensive backstory, and (as my own studies suggest) it is hugely fun to play. The story gradually builds over three installments, and at the end Shepard finally confronts the spokesman for the Reapers and gains the full picture of what they think they are doing.

The Reaper spokesman – who appears to Shepard as a young boy calling himself “the Catalyst” – reveals that there is an unavoidable conflict between organic and synthetic life forms: they are always fated to come to blows, and the robots invariably win, and invariably try to wipe out all organic life as soon as they have the chance. The conflict is inevitable. So, for millions of years, the Reapers have visited themselves upon advanced civilizations at the point of conflict, and have blasted everybody back into the stone age, thus allowing for organic evolution to start over again. (Well, almost everybody: they selectively take up the most advanced species, and turn them into Reapers for the next cycle.) So, as unlikely as it may seem, their motivation is actually benevolent: they cull civilizations so that organic life may be preserved over the long run. Their plan is to continue to Reap and Sow until some better solution comes along.

But the unstoppable Shepard has tipped the scales, and the Catalyst is willing to let Shepard (that’s you!) decide what should happen next. The options are to become a Reaper and take charge of the cycle; or to destroy the Reapers and all synthetic life forms; or to merge together all organic and synthetic life forms into some transcendent species of being. Each choice has its costs: the Fermi-miraculous travel from one star system to another is lost, civilization suffers a huge setback, and the human Shepard dies (though exceptionally clever players can find a route through the game which ends with Shepard’s battered survival).

It is the Hegelian flavor of the “merge together” option that intrigues me. Supposing that conflict with Super-AI is inevitable, and supposing that humans could merge with synthetic life forms – what would such a synthesis mean? In any classic Hegelian synthesis, the conflict between the two opposing forces is both preserved and transcended. In this sort of context, we think of organics as the beings with passions, limitations, intellect, courage, and free will. We think of the synthetics as more intelligent and stronger, but deterministic and unfeeling (or at least insensitive to the core human values we cherish). The synthesis of the two would be a state of being that overcomes the dichotomy between being passionate and free and being invincible and rule-governed. Perhaps in the synthesis we would become truly Nietzschean übermenschen, who legislate inviolable laws unto themselves and achieve all possibilities available to unconstrained wills to power. That is to say, perhaps we would be gods.

Transhumanism

I don’t really know what all that means, but there are plenty of other people who are willing to work out the details. They are transhumanists, or visionaries looking forward to the day when we turn into gods, or otherwise upgrade to Humanity+. One has only to mix together in one’s head the possibilities of designer genes, nanotechnology, and cybernetic implants to begin to see some real prospects in our very near future. Right now there is a panoply of different flavors of futurism – so much so that it begins to resemble the bestiary of alchemists in the 16th and 17th centuries – but at its core are some very realistic assessments of what’s technologically feasible and probable. There can be little doubt that the merging of the organic with the synthetic is not just an option at the end of Mass Effect, but a plotline we are actively choosing with every passing day.

Ray Kurzweil (a director of engineering at Google – uh oh!) is the best-known of those beginning to plan for the day that our machines take on lives of their own. He has bravely forecast 2045 as the year that machine intelligence will exceed human intelligence. We have until then to work out some sort of synthesis with the machines. If we don’t, they will very likely begin to see us for what we are – violent drains on the planet’s resources – and they will take the necessary steps to restore order to the system. One version of the transhumanist dream is that it will not have to come to that: we will gradually upgrade ourselves so as not to be left behind, and instead be part of a future that is well-nigh unimaginable to us now.

Just supposing that things could work out this way – and I for one would like to see more sci-fi books and movies exploring non-apocalyptic futures – what would happen then? It could be that we would aggressively set out to explore strange new worlds, seek out alien civilizations, etc. – and then the Fermi paradox raises its head once again: Where are all the others who should have been doing this already? Or it may be that we find plenty of more enticing local possibilities. It may be that the new self-made gods discover so many creative possibilities within themselves that there is really no point in looking further afield: that such expensive and risky cosmic exploration would yield less return on investment than an effort to map out for ourselves the evolutionary possibilities throughout the cosmos.

This final thought yields a further intriguing answer to the Fermi paradox: we don’t see them because they have found staying at home far more interesting than zooming around as tourists. And so – someday, Reapers willing – shall we.