Saturday, March 31, 2012

Singularity Solutions

Assuming that recursively self-improving machines of superhuman intelligence develop, and that they matter - i.e. that this intelligence will make them capable of doing things on larger/faster/more incomprehensible scales relative to humans - we are very possibly looking at an extinction-level event for humans and all living things. A possible universal tendency for tool-using intelligences to produce technological singularities is one explanation for the Fermi paradox.

The Singularity Institute takes this threat quite seriously, assuming it can (or will) occur in the next century, and is trying to solve the problem of creating Friendly AI. To do this, it would seem they have to be able to systematize morality in order to program said Friendly AIs; an ambitious project, considering people have been trying to do that for centuries. That is nonetheless what they're attempting.


The Solar System in a century or so, according to one projection. (A Matrioshka brain.)


You should read some of their papers on this (here's a good one). My thoughts are inexpert to say the least. Nonetheless, here are some possible one-liner solutions or outcomes:

1. There is no solution; morality by its nature is not systematizable.

2. There is no solution; at least not one that we can understand (cognitive closure of meta-morality).

3. There is no solution; human morality is about coexisting with agents of roughly equal intelligence and power, and it cannot be applied to any agent of vastly greater power. Technically there may still be a moral optimum here, much like there is a moral optimum to how humans treat captive mice. But this optimum may be (and in fact is likely to be) much worse than the optimum if there were no humans at all. (This could be re-phrased as "learn how to survive as parasites, pets, or pests to the AIs.")

4. There is no solution with current architecture; the solution is to enhance US. This is what uploading enthusiasts seem to want (make yourself into an AI), although a) you need to be very certain of your theory of consciousness to do this - if I could upload you right now, would you do it? If not, you're not certain - and b) what Vinge said to me when I asked him about this (and has probably said elsewhere) is that as scary as machine superintelligence is, humans might be the last thing we want becoming superintelligent.

5. Trap the AIs in virtual worlds where they're distracted, essentially doing whatever virtual masturbatory activities AIs like to do. This has been addressed before; it only has to fail once, and everyone has to cooperate with their own AIs for it to work. (Not to mention that to be safe, no information could pass in or out, in which case what's the point?)

6. Build into the AIs a desperate need not to change the world in any way that could only be explained by their presence. Of course this exacerbates the epistemological problem of the singularity - if we can't in principle understand what's happening, can we even say that it has not already happened? And how do we enforce this on other people working on AIs?

7. Build a successful moral and decision theory into the AIs. (This appears to be the Singularity Institute's plan.) The problem here is that as the date approaches, it's very unlikely that the majority of humans will understand and accept such a theory, even if it really is optimal for each human. Consequently there is massive elitism inherent in this endeavor: once we're within reach of recursively self-improving AI, the time for conversation will be over, and they'll have to go with the best theory they have. (Again, how do we enforce this, and how do we avoid mutations that free the AIs from the constraints of the optimal moral theory?)

8. Stop all AI research and training of AI researchers, and harshly penalize any attempts at either.

9. It only takes one mutation or AI terrorist to break #s 4, 5, 6 or 7 above - so develop an anti-recursive-AI predator that wipes out new AIs whenever someone doesn't abide by an agreement not to produce them, and that also guards against "cancerous" versions of itself. This is yet another one for the Fermi paradox: many have asked where the expanding computronium clouds speeding toward us from alien singularities are, but we might also ask where the alien anti-AI predators are. Are we already seeing broken bits of them in the chemistry on comets and asteroids?


Note the recurrence of the "it only has to happen once, and everyone has to cooperate" theme. Bostrom recently said that we were actually lucky that nuclear weapons were the first existential threat to develop, because nuclear weapons are hard to make. If some technology comes along that's not only easy to make but can make more of itself, the game is over. Imagine nukes that you can make from table salt, ammonia, and a toaster oven. And then the nukes can breed. That's AI, if the singularity happens.

In systematizing moral theories, the Singularity Institute paper here classifies them, and posits that the logical conclusion of AIs pursuing a purely hedonic theory ("the most pleasure") would be to tile the universe with brains cycling through their most pleasurable possible experience for as long as possible ("the eternal f*** dimension", as one correspondent referred to it). One interesting conclusion is that individuals with a less-than-optimal ability to experience pleasure would detract from the universe's capacity to produce pleasure (one human brain loaded with inefficient evolutionary legacy systems is much worse than a near-100%-efficient virtual nucleus accumbens having a prolonged orgasm for eternity). Fundamentally flawed consciousnesses like this might therefore be eliminated by the AIs, much as you might euthanize a pet that's dying of cancer, when keeping it would only make it and its owner continue to suffer.
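To make that opportunity-cost reasoning concrete, here is a minimal toy sketch in Python; the numbers and names are my own invented illustrations, not anything from the paper. Under a fixed resource budget, every unit of matter running an inefficient consciousness is a unit not running the near-100%-efficient pleasure substrate, so a strict hedonic maximizer loses pleasure for every human brain it keeps around.

    # Toy opportunity-cost calculation for a purely hedonic optimizer.
    # All numbers are invented for illustration; only the comparison matters.

    RESOURCE_BUDGET = 1000.0      # arbitrary units of matter/energy the AI controls
    HUMAN_EFFICIENCY = 0.01       # pleasure per unit resource from a legacy human brain
    SUBSTRATE_EFFICIENCY = 0.99   # pleasure per unit resource from a purpose-built substrate

    def total_pleasure(resources_to_humans):
        """Aggregate pleasure when part of the budget runs humans, the rest the substrate."""
        resources_to_substrate = RESOURCE_BUDGET - resources_to_humans
        return (resources_to_humans * HUMAN_EFFICIENCY
                + resources_to_substrate * SUBSTRATE_EFFICIENCY)

    print(total_pleasure(0.0))     # 990.0 -- no humans at all
    print(total_pleasure(100.0))   # 892.0 -- 10% of the budget reserved for humans
    # Each unit diverted to an inefficient consciousness costs the maximizer
    # (0.99 - 0.01) units of pleasure, which is the whole of the argument above.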

It's also worth pointing out that other moral theories are really just more complicated forms of hedonism; but the bigger problem is that pleasure is functionally pointless in a world where it is no longer in limited supply.
