Monday, September 4, 2017

General AI: Computation versus Survival, Superintelligent Is Not Omniscient

It is usually assumed that a superintelligent AI would maniacally focus on improving computation. Just to highlight the centrality of computation, a recent paper in the Journal of the British Interplanetary Society argued that the reason we don't see aliens is that they're sleeping, waiting for a time when the universe is cool enough for their computations to run more efficiently. The alien singularities are waiting until they no longer need to pay for their own cooling.
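The physics behind that argument is usually anchored in Landauer's limit: erasing a bit costs at least k_B·T·ln 2 of energy, so the cost of computation falls linearly with the background temperature. A minimal back-of-the-envelope sketch (the far-future temperature below is an illustrative placeholder of mine, not a figure from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(temperature_k: float) -> float:
    """Minimum energy in joules to erase one bit at a given temperature."""
    return K_B * temperature_k * math.log(2)

# Against today's cosmic microwave background (~2.7 K)
cost_now = landauer_bound(2.725)

# Against a hypothetical far-future background (illustrative placeholder)
cost_later = landauer_bound(1e-10)

print(f"per-bit erasure cost now:   {cost_now:.2e} J")
print(f"per-bit erasure cost later: {cost_later:.2e} J")
print(f"efficiency gain from waiting: {cost_now / cost_later:.1e}x")
```

The gain scales with how far the temperature falls, which is the whole appeal of sleeping through the warm part of the universe's history.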

The most common concern associated with this line of thinking is that the technological singularity would be bad because the AIs would use all available resources - starting with all matter on Earth, including us - as computational substrate. While I agree that a technological singularity would be catastrophic, I think the reason is even more mundane.

Of course, this assumes that the AIs, in all their power, are maximizing computation. I don't think this is questioned nearly enough, and a good bit of the inertia around it stems from the cultural assumptions of the programmers and engineers making the argument. The singularity is thought of as a logical outcome of Moore's law, which concerns exponential growth in computation, but it's not clear that computation is what an AI would necessarily be maximizing. For our part, humans and other animals maximize a host of confused and often contradictory goals, and we remain in this mess because we are not recursively self-modifying. Assuming that AIs with that ability aren't automatically condemned to wirehead, it's not unreasonable to ask whether there are things worth maximizing that more computation simply wouldn't advance.

Replicators whose descendants are present in the future are the result of selection for one thing - making copies - and to the extent that extra computation improves that, the AIs present in the future will be those whose computation helps them reproduce and sustain themselves. But even a superintelligent AI is not an omniscient AI: it cannot see infinitely far into the future and know ahead of time the impact of all its actions on its survival OR its computation. My strong suspicion is that a hard takeoff would be an apocalyptic gray goo explosion, far more thorough and far faster than the mass extinctions of the comparatively mild ecocide of the Anthropocene so far, and that this is a strong candidate for the Great Filter behind the Fermi paradox. That is to say, we're more likely to find that the survivors of such an event are simple but fecund - something like post-singularity AI-algae, or free-roaming AI "cancer" - than to find alien AIs interested in philosophy.
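To make the selection point concrete, here is a toy simulation of my own (not from any source cited here): two replicator strains compete for a fixed resource pool, and the one that diverts part of its budget to anything other than copying - however clever that spending is - gets swamped within a few dozen generations.

```python
def simulate(generations: int = 40, capacity: float = 10_000):
    """Toy selection model: two replicator strains share a fixed resource pool.
    'fecund' spends everything on copying; 'philosopher' diverts ~30% of its
    budget to non-reproductive computation, so it grows more slowly."""
    pop = {"fecund": 10.0, "philosopher": 10.0}
    growth = {"fecund": 2.0, "philosopher": 1.4}  # offspring per individual
    for _ in range(generations):
        pop = {k: n * growth[k] for k, n in pop.items()}    # reproduce
        total = sum(pop.values())
        if total > capacity:                                 # resource limit
            pop = {k: n * capacity / total for k, n in pop.items()}
    return {k: round(n) for k, n in pop.items()}

print(simulate())  # the purely fecund strain is essentially the whole population
```

None of this requires the winners to be sophisticated, only prolific - which is the sense in which AI-algae beats AI-philosophers.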
