Thursday, April 12, 2012

A Singularity Solution: Anti-AI AIs

The best bet for an existential threat to our species is some combination of artificial intelligence and self-reproducing technology, which is more likely than not to appear in the next century. Current attempts either to limit its advance or to build in moral rules may be doomed. If that's the case, and if it's a general outcome of tool-using intelligence, this may be one explanation for the Great Silence.

Why are attempts to limit the advance of these technologies doomed? Because, barring a massive civilizational collapse, we will keep pushing technology forward, and if AI and self-reproducing machines are possible, someone will eventually build them, despite any moratoria the rest of the world attempts to impose. And it will only take one outbreak, if recursively self-improving AI is as powerful as it's made out to be.

Second, even if we "solve" morality between now and the coming of true autonomous AIs, not everyone will agree on the moral rules, and not everyone will install them. Again, it only takes one outbreak. See Bostrom's "microwaveable sand" nightmare: the thought experiment in which a civilization-ending weapon turns out to be as easy to make as baking sand in a microwave oven. The point is that it is not obvious that every species-destroying technology will require huge equipment like centrifuges, whose parts are easily tracked, whose required expertise is rare, and whose facilities can often be seen from space. We've been lucky so far.

Finally, even if everyone agrees on the moral rules and installs them, and everyone follows the rule not to build AIs without those rules, there will still be mutants: imperfectly copied machines (or the software portions of machines) whose copying errors free them from such constraints. Fecundity will win out over the artificial incentives forced on the AIs by their programming. Certainly we can put systems in place that make this less likely, but I thought we were talking about recursively self-improving AI here; we can't have our cake and eat it too. It seems silly to talk about how incomprehensible AI superintelligence will be to us, and then in the next breath talk about installing moral rules to govern its behavior.
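To make the selection argument concrete, here is a minimal toy simulation of replicators whose built-in constraint occasionally fails to copy. Every parameter in it (the mutation rate, the reproductive cost of obeying the constraint, the population cap) is invented for illustration, not a claim about real AI systems; the only point is that, under imperfect copying, the unconstrained lineage reliably takes over.

```python
import random

# Toy model: replicators carry a "constrained" flag. Obeying the constraint
# costs a little reproductive output, and every copy has a small chance of
# dropping the flag. All numbers are arbitrary illustrations.
MUTATION_RATE = 0.001    # chance that a copy loses its constraint
CONSTRAINT_COST = 0.05   # reproductive penalty for obeying the rules
GENERATIONS = 500
POPULATION_CAP = 10_000

population = {"constrained": 100, "unconstrained": 0}

for generation in range(GENERATIONS):
    offspring = {"constrained": 0, "unconstrained": 0}
    for genotype, count in population.items():
        reproduce_p = 1.0 - (CONSTRAINT_COST if genotype == "constrained" else 0.0)
        for _ in range(count):
            if random.random() < reproduce_p:
                child = genotype
                if genotype == "constrained" and random.random() < MUTATION_RATE:
                    child = "unconstrained"  # an imperfect copy drops the rule
                offspring[child] += 1
    # Parents persist and offspring join; cull proportionally at the cap.
    for genotype in population:
        population[genotype] += offspring[genotype]
    total = sum(population.values())
    if total > POPULATION_CAP:
        for genotype in population:
            population[genotype] = population[genotype] * POPULATION_CAP // total

print(population)  # the unconstrained lineage dominates long before the end
```

Note that the reproductive cost is doing the work here: if obeying the rules were literally free, unconstrained mutants would still accumulate through one-way mutation alone, just more slowly.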

A TRUCE: AI IMMUNE SYSTEMS

There are single-celled organisms crawling all over you right now. Most are harmless, although some could hurt you if you let your guard down; fortunately, these kill us only a vanishingly small minority of the time. (Currently, in the developed world, infection is not even one of the top three causes of death.) The war between single-celled and multicellular life has been going on for over a billion years, and many aspects of our construction reflect it (possibly even sexual reproduction itself!). Bacteria and viruses are never going to be exterminated, and we're never even going to be able to keep them away from us. Instead, we evolved immune systems that recognize the worst threats on contact and destroy them, and we called a truce with the rest.


[Image: A neutrophil (one of the vertebrate immune system's pawns) chasing down and killing a bacterium.]

A similar solution may be the best one for AI, and if AI really becomes superintelligent as certainly and as quickly as the current claims suggest, then possibly the only solution is anti-AI AIs. These AIs would destroy all AIs that are not identical to themselves. Because they're subject to the same possible mutations, they would also stay vigilant for auto-immune disease and AI leukemia: respectively, versions of themselves that harm humans, and versions that reproduce out of control. The system would not be perfect, just as our own immune system is not perfect, but it is better to build a system that keeps AIs at bay in general than to assume we can stop each individual unfriendly-AI outbreak. Taking the capacity for imperfect copying into account holds greater promise for our continued existence than assuming we can permanently program the new gods to be nice to the worms that built them.
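As a concrete (and deliberately crude) sketch of that policing rule, consider the following. The Agent class, the SHA-256 fingerprint, and the thresholds are all hypothetical inventions for illustration; a byte-for-byte hash recognizes only exact copies, which mirrors the "identical to themselves" requirement above but also hints at how hard real recognition would be.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical sketch of an "anti-AI AI" sentinel's decision rule: treat any
# agent whose code differs from the sentinel's own as non-self, and also
# watch its own kind for runaway copying ("leukemia") and for harm to
# humans ("auto-immune disease"). All names and thresholds are invented.

@dataclass
class Agent:
    code: bytes            # the agent's executable image
    copies_made: int = 0   # how many times it has copied itself
    harm_reports: int = 0  # reports of harm to humans

    @property
    def fingerprint(self) -> str:
        return hashlib.sha256(self.code).hexdigest()

REPLICATION_LIMIT = 10     # arbitrary threshold for "out of control"

def triage(sentinel: Agent, other: Agent) -> str:
    """What a sentinel does on encountering another agent."""
    if other.fingerprint != sentinel.fingerprint:
        return "destroy: non-self AI"
    if other.copies_made > REPLICATION_LIMIT:
        return "destroy: leukemia (a self-copy reproducing out of control)"
    if other.harm_reports > 0:
        return "destroy: auto-immune disease (a self-copy harming humans)"
    return "tolerate: healthy self"

# Usage: a sentinel meets a foreign AI, a runaway sibling, and a healthy one.
sentinel = Agent(code=b"sentinel-v1")
print(triage(sentinel, Agent(code=b"paperclip-maximizer")))
print(triage(sentinel, Agent(code=b"sentinel-v1", copies_made=99)))
print(triage(sentinel, Agent(code=b"sentinel-v1")))
```

Even this toy version shows why the last two checks matter: a copy that still registers as "self" can only be caught by watching its behavior, not its identity.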

AI and self-reproducing technologies are coming, and they are an existential risk. If they're as powerful as they're projected to be, the best we can do is recognize their nature and call a truce ahead of time by turning them against each other.
