Sunday, August 30, 2020

New Approaches on What the Fermi Paradox Means for the Future of Humanity

I was lucky to attend a video lecture by James Miller, an economist at Smith College, facilitated by Joshua Fox. Thanks for having this event! I contacted James to let him know I would be posting this and to let him proofread my recapitulation of his argument so as to avoid mis-paraphrasing him; my thanks to him for taking the time to correct me on several points. Of course, any remaining errors are mine.

Much of this is familiar terrain for those of us who spend our time considering X-risk and the Fermi paradox. Miller's thesis is that we are at a critically important point in human history: a window in which we believe we can start colonizing the galaxy in the near future (the year 2614 at the earliest, by this calculation), but in which we are also smart enough to destroy ourselves. Since it is not obvious that the galaxy has already been colonized by other civilizations, there may be a Great Filter stopping this from happening. Miller uses the analogy of a person about to climb a mountain who believes that everyone else who has attempted the climb has died in the process.

Several challenges were discussed by attendees. (If you attended the lecture and want to claim credit for your question, please comment below, thanks.)
  1. It's too early to say there are no civilizations; it may not be so easy to detect them or rule them out. We're still discovering metazoans in Manhattan, so it seems a little early to rule out von Neumann probes on low-gravity bodies in the solar system. We've barely begun to catalog the fauna of our own ocean floors. We could not detect a twin Earth emitting the same radio energy (the C-index), even if it were orbiting Alpha Centauri. Miller points out that even if there were only a few civilizations in the Milky Way preceding us, "the galaxy is older than it is big," and these earlier civilizations could have colonized it already (a back-of-envelope sketch of this point follows the list).

  2. He made the point that traits which prove advantageous in the course of evolving on a single planet might confer no such advantages for galactic colonization. Very true; I would argue that we are much more likely to find alien artifacts than the aliens themselves, as all of us meat-creatures might be stuck on our planets while our machines colonize the galaxy. To that end (my point), it's entirely plausible that the Solar System is littered with space probes that we haven't found yet, or that we have found and just didn't know what we were looking at.

  3. I would therefore extend Miller's analogy like this. Only in the process of climbing the mountain does our climber develop wilderness skills and begin to see things that resemble his own boot tracks, etc., and finally, as he approaches the summit, he realizes that lots of people have climbed it, come down the other side, and that their descendants have built large villages which, due to his previous ignorance, he has not been able to locate. (Or maybe just some of their livestock, trained birds-of-prey, etc. have made it.)

  4. Active attempts to bring ourselves to the attention of aliens (METI) have occurred and been roundly criticized. Miller notes that the risk of extinction from aliens over the next few centuries is lower than that from, e.g., bio-terrorism or an intelligence singularity. True; but we still may be making life more difficult for our descendants. Related to this, he proposes an ingenious experiment: for a month, we shout our heads off electromagnetically and then watch for any strange activity. While I agree it's unlikely we'll get invaded next week, I still think the risk:benefit does not work out; there are just too many unknowns, and we may be screwing our distant descendants. Miller suggested that enforcing a moratorium on METI-like activities is probably impossible.

  5. He argues that technological singularities of the paperclip-maximizer variety are unlikely to be a major contributor to the Great Filter, because we would be able to see the boundary of such an expansion as it grew (unless it was expanding at light speed; see the warning-time sketch after this list). My concern is that, while an AGI might be much smarter than its creators, it is still not omniscient, and the impact of its actions could in principle still outstrip its ability to predict that impact. This is the story behind the rise of human intelligence and the sixth great extinction that we're living through, and it has happened in pulses of endogenous extinction throughout Earth's history (the rise of superpredators every fifty million years or so, the Oxygen Catastrophe). The lesson of evolution here on Earth is that the smarter things are, the faster their behavioral plasticity "catches up with them" in exactly these sorts of disasters; to suppose that alien paperclip maximizers are immune to this problem is to argue that a qualitative change in ecological dynamics has occurred.

  6. There were two (possibly underappreciated) related questions: one about civilization perhaps being bad at sustaining civilization (witness declining birth rates in the developed world), and another that intelligences might prefer virtual reality - involution - to expanding into space. Miller points out the passive version of the "baseball bat" problem: you can live in heaven, but if a bad guy comes along and bashes your server, and you with it, as you sleep in your VR pod, that's the end of it. (Related: dynamic complex systems like minds tend, in principle, to drift toward delusion and suffer inherent cyclic crises.) It's a thesis for someone in psychology or a related field to determine whether there is causation or mere correlation between the increasingly encompassing virtual-reality-like entertainments available in the developed world and declining birth rates.

  7. One questioner asked about the distinction between intelligence and civilization - humans have had a "civilization" only since agriculture. This was a really original line of thought. There could, therefore, be many alien intelligences but few or no civilizations. One solution for humans avoiding the Great Filter would be to abandon civilization and go back to hunting and gathering - not directly suggested, but this is the only implication of such an argument I could think of. The extreme number of assumptions built into discussions of alien civilizations should always be pointed out: civilization is something that collections of human nervous systems do, and it is not clear that it is a necessary consequence of intelligence. (As a physician I ask: do we assume the aliens will have EKG waveforms and liver enzymes similar to ours? No, because that's ridiculous. So why do we assume that the even more complex activity of another organ - an activity we don't even share with other animals on this planet - is automatically going to be meaningfully similar?)
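
To put numbers on "the galaxy is older than it is big" (point 1 above), here is a minimal back-of-envelope sketch in Python. The diameter and age figures are the usual rough values (~100,000 light years, ~13 billion years), and the probe speeds are arbitrary illustrative choices, not anything from the lecture:

```python
# Rough sketch: how many end-to-end crossings of the Milky Way fit into
# its age, at various (slow) probe speeds? All figures are approximate.

GALAXY_DIAMETER_LY = 100_000  # approximate diameter, light years
GALAXY_AGE_YR = 13e9          # approximate age, years

for speed_c in (0.001, 0.01, 0.1):  # probe speed as a fraction of c
    crossing_yr = GALAXY_DIAMETER_LY / speed_c
    crossings = GALAXY_AGE_YR / crossing_yr
    print(f"at {speed_c}c: one crossing takes {crossing_yr:.0e} yr; "
          f"~{crossings:,.0f} crossings fit in the galaxy's age")
```

Even at a thousandth of light speed, there has been time for over a hundred crossings; an early civilization had no shortage of time.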
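
And to quantify the visibility point in 5 above: light from an expansion front's origin outruns the front itself, so the warning an observer gets is the gap between the two arrival times. A minimal sketch, using an arbitrary example distance of 1,000 light years:

```python
# Warning time = (front travel time) - (light travel time). As the
# front's speed approaches c, the warning shrinks to zero - hence the
# caveat "unless it was expanding at light speed."

def warning_time_yr(distance_ly, front_speed_c):
    """Years between first seeing the front's origin and its arrival here."""
    return distance_ly / front_speed_c - distance_ly  # light covers 1 ly/yr

for v in (0.1, 0.5, 0.9, 0.99):
    print(f"front at {v}c, origin 1,000 ly away: "
          f"{warning_time_yr(1000, v):,.0f} years of warning")
```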


There's also a psychological point to be made about "big picture" arguments (the singularity, the Fermi paradox, the simulation argument, etc.). They have a tendency to converge on either prophetic, religion-like conclusions (e.g., the singularity as the rapture for nerds) or Lovecraft (the estivation hypothesis, which was mentioned in a question and got me thinking about this). When we talk about these things, there are many, many unknowns. In such discussions, I think the resulting arguments tend to resemble the internal contours of the human mind more than any future events in the actual external world; hence their regression to religion-like conclusions. This does not mean such an argument must be incorrect, but it should make us suspicious when a big-picture argument hews too close to our "ontological test pattern."

Consider, in contrast, cosmologists' models of the distant future of the universe, which concern physical objects we can now observe and characterize, using rigorous mathematical rules. These models often seem boring, meaningless, difficult to understand, and unsatisfying. This is exactly how we should expect most models to seem when they concern things outside our own and our ancestors' experience, or beyond the scales of time and space we are accustomed to and built to perceive; the further outside that experience, the more so. This occurred to me when we were discussing the estivation hypothesis, though overall Miller's arguments do not set off many alarm bells for this quick-and-very-dirty heuristic.

Lazerhawk - Redline, 2013

Origin of Life in RNA Computing: Independent Suggestion of Organic von Neumann Probes


Previously I advanced the idea that, if intelligence has arisen elsewhere in the galaxy, it is likely to have colonized the galaxy in some form, and therefore we are more likely to find their artifacts here in our solar system than to hear or understand their EM signals. Specifically, I argued that von Neumann probes are more likely to be entities of organic chemistry found on low-gravity bodies, and that, since natural selection is a universal law, such entities - even if dispatched to gather information - would eventually be selected for fecundity; that is, they would inevitably become cancerous. If the water that seeded the early Earth contained such entities, whether or not they were intact, the tumor detritus of these cancerous von Neumann probes would have provided the template for life on ancient Earth.
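
As a toy illustration of the selection step in that argument (this is not a model from the original post, and the parameters are arbitrary), the sketch below lets probes replicate with heritable, mutating fecundity under a resource cap. The mean replication rate tends to drift upward regardless of the probes' nominal mission, which is the sense in which "cancer" is the attractor:

```python
import random

# Toy selection model: probes inherit a replication rate with small
# mutations; a carrying capacity culls the population at random each
# generation. Faster-replicating lineages are overrepresented among
# offspring, so the mean rate climbs - selection for fecundity.

def simulate(generations=30, carrying_capacity=1000, seed=0):
    random.seed(seed)
    rates = [1.0] * 100  # each probe's expected offspring per generation
    for _ in range(generations):
        offspring = []
        for r in rates:
            kids = int(r) + (random.random() < r % 1)  # expectation = r
            for _ in range(kids):
                offspring.append(max(0.0, r + random.gauss(0, 0.1)))
        if not offspring:
            return 0.0  # the lineage went extinct
        random.shuffle(offspring)           # cull at random, not by rate
        rates = offspring[:carrying_capacity]
    return sum(rates) / len(rates)

print(f"mean replication rate after 30 generations: {simulate():.2f}")
```

Note that the cull is deliberately blind: no one rewards fecundity directly, yet the mean rate still rises, because faster replicators simply contribute more of the next generation.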

We have not done nearly enough solar system exploration, nor elaborated an abstract theory of how to recognize life or its artifacts, to be able to say we have absence of evidence. Indeed, we find nucleobases on asteroids, though so far we have no evidence that they originated from processes beyond the natural ones we are aware of.

In a new paper, Hessameddin Akhlaghpour observes that while the RNA information-processing behavior of life on Earth is not Turing complete, with some additional (not implausible) molecular machinery it would be. He then argues that life originated with such a molecular machine, and that we have not yet found it. (H/T Marginal Revolution)
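
For flavor (this is not Akhlaghpour's construction, which involves RNA machinery; it's just a reminder of how little mechanism universality requires), here is a 2-tag system, a string-rewriting formalism proved Turing complete by Cocke and Minsky in 1964. Each step reads the first symbol, appends that symbol's production to the end of the word, and deletes the first two symbols. The example rules are De Mol's well-known tag system, which emulates the (shortcut) Collatz map on unary inputs:

```python
# Minimal 2-tag system interpreter: look up the production for the first
# symbol, append it to the word, then delete the first two symbols.
# Halt when the word gets shorter than two symbols.

def run_tag_system(word, productions, max_steps=30):
    trace = [word]
    while len(word) >= 2 and len(trace) <= max_steps:
        word = word[2:] + productions[word[0]]
        trace.append(word)
    return trace

# De Mol's Collatz tag system: a -> bc, b -> a, c -> aaa.
# Starting from "aaa" (n = 3 in unary) it walks 3 -> 5 -> 8 -> 4 -> 2 -> 1.
productions = {"a": "bc", "b": "a", "c": "aaa"}
for step in run_tag_system("aaa", productions):
    print(step)
```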

Akhlaghpour H.  A Theory of Natural Universal Computation Through RNA.  arXiv:2008.08814