Saturday, March 31, 2012

Singularity Solutions

Assuming that recursively self-improving machines of superhuman intelligence develop, and that they matter - i.e. that this intelligence will make them capable of doing things on larger/faster/more incomprehensible scales relative to humans - we are very possibly looking at an extinction-level event for humans and all living things. A possible universal tendency for tool-using intelligences to produce technological singularities is one explanation for the Fermi paradox.

The Singularity Institute takes this threat quite seriously, assuming it can (or will) occur in the next century, and is trying to solve the problem of creating Friendly AI. To do this, it would seem they have to be able to systematize morality in order to program said Friendly AIs - an ambitious project, considering people have been trying to do exactly that for centuries. But this is what they're attempting.


The Solar System in a century or so, according to one projection. (A Matrioshka brain.)


You should read some of their papers on this (here's a good one). My thoughts are inexpert to say the least. Nonetheless, here are some possible one-liner solutions or outcomes:

1. There is no solution; morality by its nature is not systematizable.

2. There is no solution; at least not one that we can understand (cognitive closure of meta-morality).

3. There is no solution; human morality is about coexisting with agents of roughly equal intelligence and power, and it cannot be applied to agents of vastly greater power. Technically there may still be a moral optimum here, much like there is a moral optimum to how humans treat captive mice. But this optimum may be (and in fact is likely to be) much worse than the optimum if there were no humans at all. (This could be re-phrased as "learn how to survive as parasites, pets or pests to the AIs".)

4. There is no solution with current architecture; the solution is to enhance US. This is what uploading enthusiasts seem to want (make yourself into an AI), although a) you need to be very certain of your theory of consciousness to do this - if I could upload you right now, would you do it? if not, you're not certain - and b) what Vinge said to me when I asked him about this (and has probably said elsewhere) is that, as scary as machine superintelligence is, humans might be the last thing we want becoming superintelligent.

5. Trap the AIs in virtual worlds where they're distracted, essentially doing whatever virtual masturbatory activities AIs like to do. This has been addressed before; it only has to fail once, and it requires everyone to cooperate by trapping their own AIs. (Not to mention that to be truly safe, no information could pass in or out - in which case what's the point?)

6. Build into the AIs a desperate need not to change the world in any way that could only be explained by their presence. Of course this exacerbates the epistemological problem of the singularity - if we can't in principle understand what's happening, can we even say that it has not already happened? And how do we enforce this on other people working on AIs?

7. Build a successful moral and decision theory into the AIs. (This appears to be the Singularity Institute's Plan.) The problem here is that as the date approaches, it's very unlikely that the majority of humans will understand and accept such a theory, even if it really is optimal for each human. Consequently there is massive elitism inherent in this endeavor; once we're within reach of recursively self-improving AI, the time for conversation will be over, and they'll have to go with the best theory they have. (Again, how to enforce this, and how to avoid mutations that free the AIs from the constraints of the optimal moral theory?)

8. Stop all AI research and training of AI researchers, and harshly penalize attempts therein.

9. It only takes one mutation or AI terrorist to break #s 4, 5, 6 or 7 above - so develop an anti-recursive-AI predator that wipes out new AIs whenever someone doesn't abide by an agreement not to produce them, and that also wipes out "cancerous" versions of itself. This is yet another one for the Fermi paradox: many have asked where the expanding computronium clouds speeding toward us from alien singularities are, but we might ask where the alien anti-AI predators are. Are we already seeing broken bits of them in the chemistry on comets and asteroids?


Note the recurrence of the "it only has to happen once, and everyone has to cooperate" theme. Bostrom recently said that we were actually lucky that nuclear weapons were the first existential threat to develop, because nuclear weapons are hard to make. If some technology comes along that's not only easy to make but can make more of itself, the game is over. Imagine nukes that you can make from table salt, ammonia, and a toaster oven. And then the nukes can breed. That's AI, if the singularity happens.

In systematizing moral theories, the Singularity Institute paper here classifies them, and posits that the logical conclusion of AIs pursuing a purely hedonic theory ("the most pleasure") would be to tile the universe with brains cycling through their most pleasurable possible experience for as long as possible ("the eternal f*** dimension", as one correspondent referred to it). One interesting conclusion is that individuals who have a less than optimal ability to experience pleasure would detract from the universe's ability to produce pleasure (one human brain loaded with inefficient evolutionary legacy systems is much worse than a near-100%-efficient virtual nucleus accumbens having a prolonged orgasm for eternity). Fundamentally flawed consciousnesses like this might therefore be eliminated by the AIs, much as you might euthanize a pet that's dying from cancer, when keeping it will only make it and its owner continue to suffer.

It's also worth pointing out that other moral theories are really just more complicated forms of hedonism, but the bigger problem is that pleasure is functionally pointless in a world where it's not in limiting supply.

Everything You Believe, It's Just Because You're Afraid of Dying

Here's a piece that's a bit Freudian in the broad sense of explaining grand sweeps of human action and belief in terms of obscure dark fears: specifically, that all our worldviews and cultures are attempts at achieving immortality. On the whole the article is a little silly, but it does make an interesting point: that the cognitive foibles that afflict humans do not spare atheists (true enough). In the context of discussing immortality, this immediately makes one think of cryopreservation, which I've written about before. And the question there is: are cryopreservationists usually non-religious? The few serious (i.e. pre-paid) ones I know are in fact atheists.

Friday, March 30, 2012

Mirai Kawashima of Sigh

An interview with Mirai Kawashima of Sigh, thanks to reader Stan. Mirai is so earnest and unintentionally funny that he's like the exact opposite of Jeff Walker.

Mikannibal should have a fight with the other hot PhD chick in death metal, Angela Gossow of Arch Enemy.

Tuesday, March 27, 2012

An Optimistic Turn in Science Fiction

As the Hunger Games makes its splash, a lot of posts have been making the rounds about the high proportion of dystopias being churned out in science fiction recently. Utopias can be hard to write because it's harder to find conflict, although one strategy is to look outside the utopia as in Banks's Culture series. In the other corner, dystopias are incoherent, disjointed messes - they're easy, and the worse they are, the more boring they tend to be. Sometimes they frankly seem to be a kind of porn (see Paul Auster's Country of Last Things). Consequently I'm thrilled to see the recent calls-to-arms and manifestos by Annalee Newitz, Neal Stephenson,
and Sarah Hoyt. (As an aside, you might think it a little precious that the writer of Snow Crash, which depicts an unpleasant post-rational kind of world, would complain about dystopias - much like the creator of Beavis and Butthead complaining in the excellent Idiocracy about the dumbing-down of America - but Snow Crash was a criticism of a genre more than a culture.)


Old-school techno-optimism. I was into this before it was cool.


In his pointer to Hoyt's article, Marshall Maresca recognizes that her manifesto is a list of thou-shalts rather than thou-shalt-nots (though this makes them no less effective as criticisms), and one of the shalts applies more broadly to fiction in general: if you're too opaque, and it feels too much like a chore to read your work, you're doing it wrong.


A lot of the "darkness" in SF films was inspired by Blade Runner, which admittedly is one of my favorite movies ever. In part, it was supposed to be SF film noir, which wasn't dystopian; it just had a certain cinematic tone. In truth, Blade Runner's 2019 Los Angeles isn't actually that terrible. Okay, it's densely populated, and dark and rainy all the time, and there's a lot of Asian people and culture and technology everywhere, so there must have been some strange shift that turned L.A. all year into San Francisco in February. ZING! Note that they still have flying cars!


Hence the increasingly unapologetic irritation with the more attention-needy members of the litfic canon, like Faulkner (sorry, The Sound and the Fury is just damn annoying) and especially Joyce. Gene Wolfe sometimes spends too much time at the incomprehensibility frontier, although he clearly wants you to get it; Joyce himself admitted that this wasn't true for him.


Tuesday, March 20, 2012

Me, Posting to Craigslist from a Parallel Universe

Where I learned to play metal guitar for real.

Everything about that post screams "role model".

What's funny about that?

What People Often Don't Like About the Fantasy Genre

Writer Marshall Maresca asks, "Why are we stuck in the Middle Ages?" For someone writing a fantasy series, this is a good question to be asking. This post is an expansion of a comment at his original article.

This is my own complaint about the fantasy genre, and at least in adulthood, the main reason I can't take most fantasy books. For all practical purposes, every zombie novel and movie is the same, and to a large extent most fantasy novels are too. Why? Because the former are about crowds of hungry stupid people with strong constitutions chasing protagonists in various cities. (I've seen this in the real world, in Amsterdam; it's called "When the coffee shops close".) Most fantasy novels seem to be transparently the early Middle Ages, with the names of the countries changed, and the reports of miracles and magic taken literally (but the Christian language and symbolism taken out), complete with barbarians on the borders that the authors must realize look a little too much like Saracens sometimes. Often we even have the looming shadow of a fallen empire. The genre's inventor was a medievalist, so big surprise there! In Maresca's post, reader Daniel jumps in to slice and dice the setting's level of technology and figure out the closest real-historical parallel, but this would seem to be a short-lived amusement. Maybe if the writer were making a point about the development of politics or technology this would be more worthwhile, but usually it seems to be a mix-and-match costume put on top of the storyline. Imagine Celts with biotechnology! Imagine curiously Charlemagne-like knights with laser guns! Why?


Above: the latest genre: 1971 Indiana fiction. Indiana-in-1971-ist K.S.S. Klutzenmeyer turned his lifelong interest in the history of his home state into fiction. Alert readers may notice similarities to 1971 Indiana, except the people (called "Khoosiers") have green tiger-striped skin and ride pterodactyls, and are super mystically awesome! Below: the cover art for the first, second and third novels in the series.


As to why medieval European times specifically are more interesting - have there been Inca scholars who tried to do the same, whose stories just didn't spawn a genre? - I don't know, although to be exciting to us moderns I think you need a) some kind of nation-state identity so you can engage the tribal loyalty circuit, b) 1:1 combat, c) recognizable values that you think were carried by people who you think basically looked like you (even if they weren't), and d) a setting far enough away and long enough ago that the endemic disease and violence of those times and places can be ignored (hence, no fantasy in modern sub-Saharan Africa, because it's hard to escape the realization that it's just miserable). All of these suggest thought experiments in genre-bending. Sure, we have pseudo-Sumerian work (does Conan count?) and even paleolithic fantasy, but not whole genres. Do we have "honest" Christian fantasy? C.S. Lewis, like most fantasy writers, changes the names. Why not write about mythical Christian kingdoms, using the power of Christ to blow away enemies? (This may seem distasteful to modern Christians, although the cynical among us might say that's a way to avoid thinking about why we can't use Jesus magic, if it ever was used this way.) Or how about a fantasy/picaresque novel by an escaped slave in 2012 Mauritania? Somewhere around the third robbery and gang-rape, readers' attention might start to wander. Or flee.


The Tower of Druaga is supposedly a Sumerian anime fantasy. I mean come on, it looks like they didn't even try.


In the end, it's a damn shame when someone writing speculative fiction - where they're allowed to bring the setting into play and use it for any authorial purpose they want - restricts themselves to the early medieval, period. It's as if you win the lottery and instead of building your dream house, you furnish your house with all the prefab Välms and Squerms and Zalfs from Ikea. (Ikea is fine, just not where I'll shop when I win the lottery. And writers have unlimited imagination dollars.)

Worth pointing out: as long as I'm implying a fantasy vs. science fiction opposition, science fiction stories are also not innocent of this problem; that is to say, writers don't always stop to consider why they're using the setting they're using, or what they're going to do with that setting. My favorite example is Asimov's inversion of racial class status in The Stars Like Dust to explore his country's own race challenges; a lot of social conventions become much more obviously ridiculous when you're presented with a counterfactual. And it's harder to do this in fantasy, because the restriction is greater - when a genre is defined as using one historical period as its setting, I think it's going to get in trouble eventually. (Hence why I love China Mieville's fantasy work.)

All that said, yes, there are things that are innately cool about swords, dragons, rayguns and aliens, and sometimes things can just be cool without being allegorical - but we all have our limits as to how long the formulas can remain interesting. One adolescent I talked to was particularly excited about fantasy novels, and made it clear: "I don't like unoriginal fantasy novels where it's the same old dragons. I like fantasy novels where dragons come here, to our world!" Clearly this young man has a very low threshold for depth-of-innovation and doesn't mind reading practically the same book many times, although I imagine that threshold will rise as he gets older.

Speaking of Mieville, have you read The Scar? And have you seen the real seasteading proposal that's moving forward? We may have a real floating polity soon, although hopefully it will expand only through voluntary trade and not piracy. If you're aware of Mieville's politics and remember the execution of the officers when the new ship was taken, this real-world instantiation is not without irony. (Mieville's linguistic science fiction book Embassytown reviewed here.)

Friday, March 16, 2012

Geek Comedy Troupe: the Damsels of Dorkington

Find them here. Witnesseth their work, "I Kissed a Nerd":

Tuesday, March 13, 2012

A Satellite Picture of 1851 California

Cross posted to The Late Enlightenment.

"Predating the launch of Sputnik by over a century, President Taylor's task force, consisting of civil engineers and frontiersmen, constructs a rocket in the Californian wilderness, equips its payload with the most powerful camera known to humankind at the time - endowed with revolutionary colour-capturing capacity - and launches it skyward from the slopes of Mount Whitney.* The President's Astro-Physical Expedition (APE) put California's local flora to good use, hollowing out a redwood tree and stuffing it with gunpowder to create a giant firing tube."

Redwoods are pretty cool but they're not that cool. (I'm trying to grow one in my house at the moment, from cloning instead of from cones, which are ironically tiny.) But this post at Strange Maps, featuring a reconstruction of 1851 California by Mark Clark, shows what that redwood-launched camera would have seen:



The Central Valley was much wetter back then - a temperate river valley. Although I imagine it would have been buggy as all get-out. Ever drive through the rice regions north of Sacramento in the summer at dusk? Your windshield looks like somebody covered it with brown mustard. (Delicious.) Also noticeable on this map: still-full Owens Lake (California's own Aral Sea), and the still-green grasslands and coastal marshes of Silicon Valley and Los Angeles.

*If you read that and thought "Well now who's going to drag that whole contraption all the way up to the top of Mt. Whitney", strangely, this detail is not the most unlikely part of the whole scenario. In the nineteenth century it used to be thought that the highest mountain in the world was Ecuador's Chimborazo volcano (20,565') - and indeed, it is the farthest from the center of the Earth. Substituting distance-from-center is a neat trick for defining the highest mountain, but it gives the "wrong" answer, because the Earth spins, and is mostly liquid, so it's slightly oblate (flattened like an M&M), which adds a few more miles to the distance-to-center at the equator - and that is why Everest, at about 28° north latitude, loses this contest to equatorial Chimborazo. (But Everest is still the highest above sea level.) Point being, how did the scientists measure this? By dragging a >1,000 lb.-yet-delicate metal instrument up to the summit of Chimborazo! Dragging a hollowed-out redwood to the top of merely 14,496' Whitney would be a picnic by comparison.
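If you want to check the Chimborazo-vs.-Everest claim yourself, here's a rough back-of-the-envelope sketch in Python. The latitude and elevation figures are approximate, and it simply adds summit height to the geocentric radius of the reference ellipsoid, which is plenty accurate for a difference on the order of 2 km:

```python
import math

# WGS-84 ellipsoid axes in km (equatorial and polar radii)
A, B = 6378.137, 6356.752

def surface_radius(lat_deg):
    """Geocentric radius of the ellipsoid surface at a geodetic latitude, in km."""
    phi = math.radians(lat_deg)
    c, s = math.cos(phi), math.sin(phi)
    return math.sqrt(((A**2 * c)**2 + (B**2 * s)**2) /
                     ((A * c)**2 + (B * s)**2))

# (approximate latitude in degrees, summit elevation above sea level in km)
peaks = {"Chimborazo": (-1.47, 6.263),
         "Everest":    (27.99, 8.848)}

for name, (lat, elev) in peaks.items():
    # crude: just add the summit elevation radially to the surface radius
    print(f"{name:10s} ~{surface_radius(lat) + elev:.1f} km from Earth's center")
```

Chimborazo comes out roughly 2 km farther from the center, even though Everest is about 2.6 km taller above sea level.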


Looking west from the top of Chimborazo at dawn, where the shadow is projected out to the horizon. Look familiar? The picture in the banner of this awesome blog is the same effect from Mt. Hood in Oregon.

A Nifty Thought

"...suppose it had turned out that there was some technological technique that allowed you to make a nuclear weapon by baking sand in a microwave oven or something like that." - Nick Bostrom



Above: an Iranian enrichment facility.



[Added later, also funny: "You might think that an extinction occurring at the time of the heat death of the universe would be in some sense mature...I wouldn't count that as an existential catastrophe, rather it would be a kind of success scenario."]

Sunday, March 11, 2012

Wellfear, Reconsider (2011)

2/3 Silent Civilian, 1/3 As I Lay Dying with keyboards, but maybe a little more higher-register neo-classical riffing. NICE. YouTube homepage here.

Sunday, March 4, 2012

Right Now, HKs Are On Average Killing 6 Humans At a Time

An article on the drone war on the Af-Pak border, and how it's changed the behavior of the Taliban. Of course they move around or avoid sleeping in the same house, but most interesting to me, they often hide the drone casualties to avoid looking weak.


Of note: our drones (which we also use at our own borders) do not have signal lights and bright floods that reveal their position.

Saturday, March 3, 2012

Remember When Dungeons and Dragons Would Make You Kill Yourself?

In a way I'm nostalgic for those kinds of moral panics. An 80s 60 Minutes piece gives it to us straight. (Via Unreasonable Faith.)