Thursday, March 31, 2011

The Singularity and the Fermi Paradox

The idea that there will be a technological singularity relies on the development of self-replicating technology that can improve both its replication and its anticipation of the future (i.e., it is self-improving). Those who argue for a singularity seem to think this is highly likely to happen, assuming humans continue to improve technology.

Therefore, if you believe that technology-using intelligence can evolve elsewhere in the universe, you should also believe that singularities have very probably already occurred elsewhere in the universe, barring an argument that the singularity is somehow predicated on provincial aspects of human technology.

If that is the case, it is very likely that any evidence of non-entropy-driven replicators (i.e., "life") from outside the solar system will come from an alien singularity, rather than from the kludgey "naturally" evolved aliens themselves.

This argument parallels Bostrom's simulation argument. A general form for arguments of this sort is:

A) If the concept of revolutionary technology/event X is coherent,

B) And if humans are not the first technology-using intelligence to evolve,

C) then X has probably already occurred,

D) and also the universe as we already experience it is likely to exhibit characteristics determined by X.


It's worth asking how a very post-singularity star system would look from 50 LY away. Of course by asking about star systems, I'm engaging in matter chauvinism, because I assume matter is required for doing things like computation. Perhaps there are better substrates where we should be looking.

For those who think a human singularity is inevitable but agree that we have not seen evidence of alien singularities: if the assumptions above are valid, we should start rephrasing solutions to the Fermi paradox in terms of the singularity:

1) Singularity always equals cancer: when systems of self-organizing matter can move in giant steps rather than tiny incremental ones, their bad rules or inefficiencies matter much more, so they behave unsustainably and destroy themselves (in LessWrong parlance, they become paperclip maximizers).
This is just a singulatarian instantiation of Fermi's concern that technological civilizations would destroy themselves, making Drake's L term a major attrition factor.
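To see why the L term dominates here, consider the Drake equation, N = R* · fp · ne · fl · fi · fc · L, which estimates the number of currently detectable civilizations in the galaxy. A minimal sketch, with all parameter values being arbitrary placeholders of my own choosing (not estimates from the post), shows that N scales linearly with civilization lifetime L, so a "singularity as cancer" that collapses L would by itself account for the silence:

```python
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """Number of currently detectable civilizations per the Drake equation.

    r_star: star formation rate (stars/year)
    f_p, n_e, f_l, f_i, f_c: the usual fractional terms
    lifetime_years: L, how long a civilization remains detectable
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

# Identical placeholder assumptions except for L: a long-lived
# civilization versus one destroyed shortly after its singularity.
common = dict(r_star=2.0, f_p=0.5, n_e=1.0, f_l=0.5, f_i=0.1, f_c=0.1)

n_long = drake(**common, lifetime_years=1_000_000)  # L = 1 Myr
n_short = drake(**common, lifetime_years=100)       # L = 100 yr

print(n_long, n_short)
```

Since every other term is shared, the ratio of the two outcomes is just the ratio of lifetimes: shortening L by four orders of magnitude empties the galaxy of detectable civilizations by the same factor.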

2) The signs are all around us but we don't recognize them (or we just haven't looked hard enough). We're not so bright. Do we know what a singularity would look like 25 million years after it happened? Don't discount this one; it's my explanation of why we haven't found anything yet.

3) We're in a backwater. If we look far enough away, or wait long enough, we'll see them.

4) Singularities conceal themselves; the ones that don't are destroyed.
