Imagine that, due to some breakthrough in computer science, we can determine with certainty that for a certain architecture, some designs will "work" - that is, when activated, they become a successful recursively self-improving general AI. Say the odds that any given design is one of these are 1 in 29 million. Now assume that 13 designs have already been tried. A tiny fraction of the possibilities, to be sure; and yet in no case was there any discussion before the research team turned each one on - no input from other researchers, no public comment, and certainly no attention from policymakers of any sort.
The catch is that this architecture is highly iterative, and it has to run for a long time before you find out whether it's going to "wake up" - consequently, the researchers load the software and hardware into satellites, and the earliest any of them could wake up is 2036. (For our purposes, assume that once launched, these satellites are out of reach - like Elon Musk's car.)
I assume the AI safety community would have something to say about this: about people unilaterally turning on instances of this architecture and placing them out of reach. Unlikely though it is, any one of those could wake up, and we could find the Solar System transformed overnight - not necessarily to humanity's benefit.
Why such an esoteric thought experiment? Because it's not really a thought experiment. There have already been 13 active attempts (that we know about) to signal nearby star systems. Given the speed of light, the earliest we could hear back (or meet someone, or something) from any of them is 2036. And much like the satellite-launched AIs, once you send the message, you can't delete it from their inbox. The 1-in-29-million figure comes from the original Drake equation estimate of 3,500 civilizations in the galaxy, spread across one hundred billion star systems: 3,500 divided by 100 billion works out to about 1 in 29 million per targeted system. Note that this assumes every species is confined to one solar system; if they have interstellar travel (and can follow a signal back to its source), the odds of contact are considerably better than 1 in 29 million.
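If you want to check the arithmetic, here's a minimal sketch in Python. It uses only the numbers cited above; treating the 13 signals as independent, randomly targeted attempts is my simplifying assumption:

```python
# Rough odds behind the "1 in 29 million" figure, using the post's numbers:
# ~3,500 civilizations (the original Drake equation estimate) spread over
# ~100 billion star systems in the galaxy.

civilizations = 3_500
star_systems = 100_000_000_000

# Chance that any one targeted star system hosts a civilization,
# assuming each species stays confined to its home system.
p_per_signal = civilizations / star_systems
print(f"Per signal: 1 in {1 / p_per_signal:,.0f}")  # 1 in 28,571,429

# Chance that at least one of the 13 known METI attempts reaches someone
# (simplifying assumption: 13 independent, randomly chosen targets).
attempts = 13
p_any = 1 - (1 - p_per_signal) ** attempts
print(f"Across {attempts} signals: about 1 in {1 / p_any:,.0f}")  # ~1 in 2.2 million
```

Tiny odds per signal, but note that they scale roughly linearly with every additional attempt.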
AIs are at least designed by humans, with possible ethical constraints. Aliens able to visit our solar system would not in any way have our interests at heart. If you're in the rationalist community and you're concerned about a technological singularity, you should be very concerned about wildcat attempts to reveal our presence to other solar systems - attempts that carry existential-level risk. This is called METI (Messaging Extraterrestrial Intelligence), as opposed to SETI, and you can read more about stopping it here.