Attributed to Enrico Fermi as a back-of-the-envelope exercise in astrobiological
philosophy, Fermi’s paradox boils down to a simple question: where is everybody?
In other words, if life is a truly common phenomenon in the universe, then
the probability of a civilization solving the problem of interstellar travel
should be pretty high, and the effects of such a civilization on the galaxy
should be extremely evident to an observer (think entire stars being instantly
harvested for power).
However, the search for extraterrestrial intelligence (SETI) has so far been unsuccessful; hence, where is everyone?
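To make the intuition concrete, here is a toy Drake-style back-of-the-envelope estimate; every parameter value below is an assumption I picked purely for illustration, not a measurement.

```python
# Toy Drake-style estimate of detectable civilizations in the Milky Way.
# Every number here is an illustrative assumption, not a measured quantity.

R_star = 1.5   # average star formation rate in the galaxy (stars/year), assumed
f_p    = 1.0   # fraction of stars that host planets, assumed
n_e    = 0.2   # potentially habitable planets per star with planets, assumed
f_l    = 0.1   # fraction of habitable planets where life actually appears, assumed
f_i    = 0.01  # fraction of those that develop intelligent life, assumed
f_c    = 0.1   # fraction of those that become detectable (radio, megastructures), assumed
L      = 1e6   # average detectable lifetime of such a civilization (years), assumed

# Classic Drake product: expected number of detectable civilizations right now.
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Expected detectable civilizations: {N:.0f}")  # ~30 with these inputs
```

Play with those fractions and the result swings by orders of magnitude, which is precisely why the observed silence is informative: it suggests that at least one of the factors is much smaller than our optimism would like.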
A quick search on Scholar
will give you literally thousands of reasons not to panic (or maybe to do the
opposite, depending on how much you like aliens), along with many logical
explanations of how the presence of ETI could go unnoticed, leaving us in our
quiet and lonely neighborhood.
Having sorted that out, we can safely go back to our Twitter feeds to discuss
serious business
and Elon Musk’s latest views on AI.
We can also go back to our comfy evening reads, which in my case mean Hofstadter’s
GEB (for the sixth time or so) and Nick Bostrom’s Superintelligence,
while listening to the sound of the falling rain.
And then, when the words are stirring inside your head, and the rain has
flushed the world clean, and the only sound you can hear is the quiet whir
of the GPU fan, while it’s training that 10-layer net you’ve recently been
working on; only then might you find yourself asking the question: where the hell is superintelligence?
That’s a reasonable question, isn’t it? Just look at all the informed opinions
on the subject from people better and wiser than me, cited above.
They may disagree on its nature, but no one (no one) disagrees that an AGI would
have a nearly unbounded growth rate.
Go back to the second sentence of this post, and replace “interstellar travel”
with “artificial intelligence” to get an idea of what that might look like.
And we’re not talking about a simple Kardashev-scale boost; a superintelligence would
be well aware of its physical limitations, so we would likely be looking at an
end-of-the-galaxy scenario, with all available matter being converted to
computing infrastructure, à la Charles Stross.
A phenomenon so powerful that its mere existence would change the scale of reference for every other phenomenon in the universe, something so good at self-improvement that it would be limited only by physical laws.
If the probability of life in the universe is high, then so is the probability
of a civilization developing a superintelligence, with all its extremely evident
effects.
So where is it?
So far I have only talked about a catastrophic scenario following the creation of a superintelligence, but we have to consider the positive and neutral outcomes before denying the existence of a super AI.
If the effects of a superintelligence on its universe were not so devastating,
and we look instead at the brightest end of the spectrum of possibilities, think about the
advances that such an entity would bring to its creators. At that point, all the
technologically motivated solutions to the Fermi paradox would no longer apply, leaving
us with a whole lot less formal analysis and a whole lot more speculation on
alien sociology and superintelligent motives.
What reason could a civilization with a superintelligence in its toolbox have
not to reach beyond its planet?
Moreover, we still couldn’t exclude a catastrophic end of the galaxy, if the computational necessities of the AI required it to favor an absolute good (its existence) over a relative one (our existence). Therefore, if we allow for a truly good superintelligence to exist somewhere in the universe right now, we have to imagine natural, moral, or logical impediments that prevent it from communicating outwards and spreading infinite progress.
From another perspective, even if talking about likelihood in this context
is a serious gnoseological gamble, it seems that the neutral scenarios would
likely be the least noticeable: a superintelligence is there, but it has no drive
to manifest itself.
That could be due either to a lack of necessity (it wouldn’t need energy, or matter,
or information) or to a lack of advantage (it wouldn’t want to reveal its presence,
to avoid danger or the pointless expenditure of limited resources), and it
would be a fairly easy and rational explanation of the superintelligence paradox.
A system in such a perfect equilibrium would probably exist in a super-state,
free from the unforgiving grip of entropy and eternally undetected (short of
using another superintelligence, at which point the paradox would already be
solved).
I will stop here with the examples, because it is beyond my capabilities to summarize all possible scenarios, especially when we consider that the Fermi paradox has inspired fifty years of debate. And if we think about this debate, we see that it extends back through the ages, and that “alien civilization”, “AI”, or “God” have been used interchangeably without changing the essence of the discourse: why, if they exist, are they not manifest?
As hard as we try to rationalize this question, we know perfectly well that there are only two possible outcomes: we are looking for a black swan that we will either find or keep looking for. At the same time, we’ve learned very well how to coexist with this uncertainty, because we only need the possibility of a forced ignorance in order to accept an undesired one.
And so, as men of rational intellect, we can be crushed by the lack of knowledge, or be incessantly driven by it, knowing that every second spent in the search is a second that could only have been wasted otherwise. And those who are crushed may turn to faith, and find peace in knowing that their happiness resides with the black swan, where it can’t be touched by mortal sight.
In the end, while telling us a lot about the human condition, this thought leaves
us back in our quiet neighborhood of probable but unverifiable truths.
However, when considering the practically negligible amount of time that constitutes our
lives, a question may come to mind: is it closer to human nature to
think of a God, or to seek one out?