Sunday, May 29, 2011

Where Will AI Become Dangerous First?

If something like a singularity occurs and poses a threat to humans, it is likely to make its first impact in the following areas:

- where machines are functioning largely autonomously, and/or

- where their functioning has already been driven by something very like natural selection, usually because there is an immediate material incentive for their human drivers, and/or

- where machines are already specialized to remotely monitor or harm humans.


Machines will also have an advantage in areas where humans think irrationally because of the heuristics and still-very-dominant pre-rational responses left over from our evolutionary past. There are already a number of such examples.


(Image source: Southwest Research Institute)


- Border control automation, particularly along the U.S.-Mexico border. At this point it's largely metaphorical to say that while you're hiking in the border areas you feel like a human being watched remotely by the HKs (Hunter-Killers) of the Terminator franchise. The degree of metaphorical-ness is decreasing. (Category: specialized to remotely monitor or harm humans)

- Warfare, especially in theaters inaccessible or hostile to humans: in the air, underwater, and in harsh climates, particularly very dry or cold ones. (Category: specialized to remotely monitor or harm humans)

- Chatbots, usually trying to draw us in with promises of sexual activity or at least live web viewing thereof, humorously described at Collision Detection. Alan Turing didn't see this coming. (Category: immediate material incentive. "The pursuit of purloined credit cards has probably fueled more cutting-edge AI -- and subsequent fraud-detection bots -- than actual academic AI.")

- High-frequency trading algorithms; see the toy sketch below. (Category: largely autonomous)
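To make the "largely autonomous" point concrete, here's a toy sketch in Python. Every name and parameter is invented for illustration, and it resembles no real trading system; it just shows what happens when a crowd of identical momentum bots trade against each other with no human in the loop:

```python
# Toy sketch (invented parameters, no real trading system): identical
# momentum bots trading against each other with no human in the loop.

import random

def momentum_bot(history):
    """Buy if the price just rose, sell if it just fell -- a crude
    trend-follower with no notion of fundamental value."""
    if len(history) < 2:
        return 0
    return 1 if history[-1] > history[-2] else -1

N_BOTS = 50      # identical autonomous agents (assumed)
IMPACT = 0.02    # price impact per net order (assumed)

price = 100.0
history = [price]

for t in range(200):
    net_orders = sum(momentum_bot(history) for _ in range(N_BOTS))
    noise = random.gauss(0, 0.05)
    price = max(0.01, price + IMPACT * net_orders + noise)
    history.append(price)

print(f"start: {history[0]:.2f}  end: {history[-1]:.2f}")
# One random tick gets amplified into a runaway trend or crash: the
# feedback loop needs no malice, and no human is fast enough to step
# in mid-cascade.
```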



As an aside, it's interesting that finance has taken so well to automation while law lags so far behind. Why do we NOT have (for example) a constitutional programming language, so that only a consistent constitution will compile, or an algorithm to automate at least the majority of court cases? This, despite the lawyers' clever attempt to wake Skynet, which lies dreaming in R'lyeh, to eat them first.
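For what it's worth, the "constitutional programming language" idea can be sketched in a few lines: treat each clause as a boolean constraint, and let the constitution "compile" only if some assignment satisfies all of the clauses at once. The clauses below are invented toy examples, and a real system would need a real theorem prover rather than brute force:

```python
# A minimal sketch of a "constitution" that only compiles if its
# clauses are mutually consistent. All clauses are invented examples.

from itertools import product

PROPOSITIONS = ["free_speech", "state_secrets_act", "open_courts"]

# Each clause maps an assignment (proposition -> bool) to a bool.
CLAUSES = {
    "Art. 1: speech is protected": lambda a: a["free_speech"],
    "Art. 2: a secrets act implies limits on speech":
        lambda a: (not a["state_secrets_act"]) or (not a["free_speech"]),
    "Art. 3: trials are public": lambda a: a["open_courts"],
}

def compile_constitution(clauses):
    """Brute-force satisfiability check: succeed only if at least one
    assignment of the propositions satisfies every clause at once."""
    for values in product([True, False], repeat=len(PROPOSITIONS)):
        assignment = dict(zip(PROPOSITIONS, values))
        if all(clause(assignment) for clause in clauses.values()):
            return assignment
    raise SyntaxError("constitution is inconsistent: no assignment "
                      "satisfies all clauses")

print(compile_constitution(CLAUSES))
# e.g. {'free_speech': True, 'state_secrets_act': False, 'open_courts': True}
```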

As remarked before, a really smart AI will not launch nuclear weapons and send a bunch of metal Halloween skeletons out to get you. It will know that we expect existential threats to come from other humans, or at least from corporeal forms similar to the threats we faced in the Paleolithic. So hostile AIs will break the stock and commodities markets, wreck the crops and the fuel and water infrastructure, all while distracting us with superstitions, pornography, and threats that have scary teeth.
