Hanson is on record as being un-thrilled about attempts to contact extraterrestrial intelligences. I share his position.
But for some odd reason he thinks that people who fear a possible (technological) Singularity are sadly misguided. This seems on its face badly inconsistent. Aliens from another star system are a threat, but an independent AI more intelligent than humans (and with different goals and values) is not? Puh-leeze. At least David Chalmers has the right idea.
[Note: Hanson has responded in the comments to this post. If you are a regular reader of his blog, you know that I am emphasizing this because it raises my status.]
2 comments:
You have misunderstood me on both issues. I said that individuals shouldn't be deciding for themselves whether we contact aliens; we should decide that together. And we should worry about robots, but it is more useful to focus on sharing institutions and intermingling than on sharing values.
Thanks for your response. So don't you think there is still every reason to take a consistent approach to both AIs and aliens, or am I still misunderstanding?