Saturday, March 9, 2019

STOP METI

tl;dr if you worry about the singularity or bio-terrorism as an existential threat, you should support the effort to stop attempts to announce our presence to aliens. This should be a priority for the rationalist and effective altruism communities.

Many people are familiar with the SETI project, the Search for Extraterrestrial Intelligence. METI stands for Messaging Extraterrestrial Intelligence. Stephen Hawking, Elon Musk, and Freeman Dyson have all argued that this is incredibly stupid and dangerous.

Of the 13 (known) attempts to deliberately signal another star so far, the earliest that any response could be received is 2036. The earliest that any response could be received from a sun-like star with planets in the habitable zone is 2085. Additional attempts are almost certain to be made in the next few years.
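These dates follow from simple round-trip light-travel time: a reply cannot arrive earlier than the transmission year plus twice the target's distance in light-years (and that assumes an instantaneous response). A minimal sketch of the arithmetic, where the example year and distance are illustrative assumptions rather than the parameters of any actual METI transmission:

```python
def earliest_reply_year(send_year, distance_ly):
    """Earliest calendar year a reply could arrive, assuming the
    recipients respond instantly and both signals travel at light
    speed: round trip = 2 x distance in light-years."""
    return send_year + 2 * distance_ly

# Hypothetical example: a message sent in 2017 to a star
# 12 light-years away could not be answered before 2041.
print(earliest_reply_year(2017, 12))  # -> 2041
```

The takeaway is that any target more than a few dozen light-years away pushes the earliest possible reply beyond the lifetime of anyone alive today, which is why the "they won't come for a long time" argument below is about our descendants, not us.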

Most scientists recognize how important - in fact, world-shattering - contact with aliens would be, and there's actually a protocol for what scientists should do if a message is received. But these transmission attempts are being made by individuals or small groups, with no oversight, often with really stupid justifications. One was an art project; another invited children to compose the message.

The argument against deliberate messaging is that even here on Earth, contact between members of the same species with differing technology was catastrophic not just for the humans on the less advanced side, but for entire ecosystems. A visit from aliens with technology advanced enough to detect or reach us would therefore likely be devastating, even if they had no malicious intentions. And once we're detected, we can never be un-detected.

The arguments for METI are laughable, and best thought of in terms of a Native American on the shores of the Atlantic proposing to build signal fires to bring the Europeans over even sooner. Their best arguments are:
  • Aliens might already have noticed us anyway. (So why make it more likely?)
  • It's extremely unlikely anyone will get the message. (So why do it at all?)
  • They won't come for a long time. (If we discovered a form of energy that would start poisoning our descendants in 10,000 years, would we use it?)
  • If all the other aliens are remaining silent, then by doing the same, we're part of the problem, and we're hypocrites for trying to detect others. (Given the risk-to-benefit ratio, that's a trade most of us would be comfortable making.)
  • Aliens who can respond or come here will necessarily be moral beings and won't hurt us. (This one is really absurd: it not only makes assumptions about the intentions of the aliens, it sounds very much like a religious conviction.)

You can see a more thorough treatment of these arguments by a SETI expert in this paper, whose abstract finishes with these sentences: "Arguments in favor of METI are reviewed. It is concluded that METI is unwise, unscientific, potentially catastrophic, and unethical."

You can see some of these arguments being made with a straight face by METI's founder Doug Vakoch in this article. Note that METI is a splinter of SETI: most of the scientists involved in SETI opposed active communication attempts, so the messaging advocates struck out on their own.

Going forward, I'm going to do my best to raise awareness in the rationalist community and have this prioritized as an existential threat alongside AI. My plan is to contact the people at SETI to see what they're already doing and how others can contribute. The best approach for now seems to be stopping transmissions by blocking individual projects, but ideally there would be a law against this, along with norms that socially punish defectors. To end on an optimistic note: because there is a chokepoint (money, and limited time on transmitters that are mostly controlled by universities), this problem is actually much more tractable than avoiding a "bad hard takeoff" of general AI.
