Tuesday, February 26, 2013

The 70s Science Fiction Landscape of California

This is cross-posted to my California travel and outdoors/trail running blog, MDK10 Outside.

The Theme Building at LAX, from theinterrobang.com

The same guy (William Pereira) designed all of these buildings. In order roughly from south to north:

San Diego: San Diego Airport, Grossmont Hospital, Scripps Clinic, and Geisel Library at UCSD

Irvine: the entire city more or less, including UC Irvine

Newport Beach: the entire city more or less

LA: USC's campus (he was a professor there), and that weird central building at LAX

San Francisco: SFO, and the TransAmerica Pyramid

Geisel Library at UCSD. The first time I saw this I literally stumbled across it, and I started looking around for Gort and Klaatu.

Pereira was a major science fiction fan and intentionally designed things to look futuristic. Talk about life imitating art. (And some of these buildings ended up being used later in science fiction movies - that's the UC Irvine campus at that last link.) That said, a lot of these buildings do look pretty dated; to paraphrase the Simpsons, they look like what they thought the twenty-first century would look like in 1970.* But it's amazing that one person is responsible for so much of the iconic construction of this state, and more amazing that he's not more famous. One of Pereira's students also went on to some fame - Frank Gehry.

The whole state of California is named after a fictitious country in a sixteenth century science fiction novel, so maybe this kind of reification isn't so surprising.

*Ah, you read the footnote for more architecture-bashing! Excellent. Modern architecture in general often gets dated quickly because the ways it tries to be original become inextricably linked to a very narrow era - you don't look at a medieval cathedral and think "Oh my gosh, that's so tacky, it just screams fourteenth century." Another mid-to-late twentieth century American architect was Eero Saarinen, and his best-known works are probably the Arch in St. Louis, the TWA terminal at JFK, and the terminals at Dulles (you know, the ones that require a custom-made land-crawler as a shuttle. Stupid.) Note that of these, only the Arch has avoided looking dated, at least from outside, probably owing to its basic geometricality. Even Saint Wright suffers from this to some degree. Probably the worst offense in all architecture is here in San Diego, the Salk Institute, perpetrated by Louis Kahn. Horrifyingly, every day one can find packs of drooling architecture students visiting from Europe and Asia, memorizing this Golgotha of right angles, excited to return home and desecrate their own cities with a similar pile of cinder blocks. Just as with Wright's work and that of other famous architects, the bathrooms in the Salk are awful. (Re Wright, in Fallingwater they're bad, but in the Beth Sholom synagogue in Philadelphia they're criminal. Tiny, dungeon-like, insufficient for the facility, their function damaged by their size and remoteness - seemingly not an afterthought, but the victims of deliberate malice. The bathroom is the most important room in the building!)

Monday, February 18, 2013

Relativity Engine? Probably Not, But Here It Is Anyway

It's probably science fiction in the service of securing funding (also known as bullshit) rather than science.  Still, here's a story on a Chinese team's claim to have built a microwave relativity engine.

When They Thought They'd Found Aliens, What Did They Actually Do?

When pulsars were discovered, the team of astronomers took very seriously the possibility that they had detected an alien civilization. And when they thought about what should be done in terms of a response, they also took seriously the idea of restraint: information about our existence, once sent, could not be recalled if the other intelligences turned out not to be benevolent. The team was also concerned with how best to disseminate the information.

SETI has now established a protocol to disseminate news of such a discovery, which basically breaks down into 1) confirm, and reconfirm, and reconfirm again before you say anything; 2) go through channels; and 3) no one should reply until a public international discussion is held. This may all be a moot point, since people have been sending signals in various directions for some time, and a criticism of Frank Drake for doing just this is mentioned in the paper I linked to (references removed for readability):

Such a signal was in fact sent out by Frank Drake in 1974 and Ryle wrote to Drake complaining that it was "very hazardous to reveal our existence and location to the Galaxy; for all we know, any creatures out there might be malevolent - or hungry". Later, it seems that Ryle led an approach by several people to Sir Bernard Lovell of Jodrell Bank fame who then sent a private letter to the International Astronomical Union raising the possibility of malevolent aliens, saying that "I have been asked to seek a discussion in the Executive Committee ... astronomers are involved in the problem of communication with extraterrestrial communities. Transmissions for this purpose are being made .... [ as to whether] the IAU should draw the attention of world governments to a problem which could conceivably be of critical importance" and "whether the astronomical community should take steps to initiate a wider discussion on an international basis of the consequences of success ... I repeat I raise this issue on behalf of a number of distinguished individuals". After consulting Drake, the IAU concluded that no action was needed.

The Law-Giving Machines

This post also appears at The Late Enlightenment. This article includes story spoilers.

There are now actual drones in our skies, both watchers and hunter-killers. But they're (so far) only semi-autonomous, and they're on missions to protect us legally and militarily, rather than sent by fellow machines to exterminate. Thought experiments in fiction about automatic law-giving devices have been much more interesting than apocalypse porn about bad AI.

Two short stories come to mind here, one of which has enjoyed recent attention: Robert Sheckley's Watchbird (1953) and Larry Niven's Cloak of Anarchy (1972). Both stories involve surveillance drones that have some degree of autonomy and can hurt or kill their targets. In Watchbird, the drones are police devices, intended to kill murderers before they commit their crimes; the drones are able to learn on the job, and once released, they expand their definition of murder and start protecting all living things and even some machines. In the end, another drone is released to kill the first drone species, but of course it soon expands its own definition of what it should kill. (Watchbird was adapted for film here.)

The drone in Cloak of Anarchy is the copseye. In this future world, there are "free parks" where anything is allowed except violence against another human being. The floating copseyes watch over the park, and if violence is imminent, the copseye stuns both the aggressor and the aggressee, and both wake up later, calmed down and with a hangover. Then someone finds a way to short-circuit the copseyes, and within hours factions have formed inside the park and violence breaks out.

The stories present us with two different sets of concerns, based on the problems that occur. In Watchbird, the central concern is the autonomy of the devices. Their ability to learn is what allows the problem to grow, but the protagonist is preoccupied with the very fact of machines executing laws without intervening humans. On one hand this could almost seem like a reactionary position: one of the greatest inventions of modernity was governments of laws and not of men, and intervening humans with narrow self-interest executing those laws have always been the problem! (Hence this proposal for a legal programming language in which to write laws that then have to compile against previous laws.) But even without that quibble, his point is well-taken that when autonomous law-givers can immediately carry out their sentences and we can no longer modify them, they might become paperclip maximizers, in Less Wrong parlance: that is, a moral rule which seems universalizable has consequences that humans could not foresee when implementing it in all-powerful enforcers which can no longer be called back. The protagonist has no problem with more efficient enforcement; his problem is with the moral mutations allowed by the machines' autonomy.
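For what it's worth, the "laws must compile against previous laws" idea can be sketched in a few lines. Everything below (the predicate representation, the function name, the toy situations) is my own invention for illustration, not the linked proposal's actual design:

```python
# Toy sketch: each "law" is a predicate mapping a situation to a verdict
# ("require", "forbid", or "allow"). A proposed law "compiles" only if no
# situation is simultaneously required and forbidden by the combined code.

from itertools import product

def conflicts(existing, proposed, situations):
    """Return the situations where the proposed law contradicts an existing one."""
    clashes = []
    for s in situations:
        verdicts = {law(s) for law in existing + [proposed]}
        if "require" in verdicts and "forbid" in verdicts:
            clashes.append(s)
    return clashes

# Example: situations are (actor, action) pairs.
situations = list(product(["human", "watchbird"], ["kill", "stun"]))

old_law = lambda s: "forbid" if s[1] == "kill" else "allow"           # no killing
new_law = lambda s: "require" if s == ("watchbird", "kill") else "allow"  # watchbirds must kill

print(conflicts([old_law], new_law, situations))  # [('watchbird', 'kill')]
```

The point of the exercise is the same as the protagonist's worry: a contradiction that a human reviewer might miss is mechanically detectable before the enforcer is ever switched on.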

Naively, the story makes little mention of the interests of those authorizing and supporting the program. Still, Watchbird does peripherally make the point that technology allows concentrations of power in the hands of individuals in a way that distorts society. With sudden increments in enforcement power, some humans are able to apply laws with an all-pervasiveness and immediacy that had never been possible before. Even someone with good intentions and what you would have called good values would suddenly find him or herself in a position of dictatorial authority. It's not even that power corrupts (although it does); it's that this centralization is so unnatural as to be impossible to handle with a good outcome. The best example is this exchange with the protagonist early in the bad behavior of the watchbirds:
"One of the watchbirds went to work on a slaughterhouse man. Knocked him out."

Gelsen thought about it for a moment. Yes, the watchbirds would do that. With their new learning circuits, they had probably defined the killing of animals as murder.

"Tell the packers to mechanize their slaughtering," Gelsen said. "I never liked that business myself."

"All right," Macintyre said. He pursed his lips, then shrugged his shoulders and left.
One man is suddenly in the uncomfortable position of morally disapproving of whole industries and forcing them to change. What's more, this previously reluctant man does not seem so reluctant now.

It bears mentioning that the conclusion of the story, where the watchbird-killers are now expanding their prey definition, recapitulates one of the problems of a Singularity solution of building anti-AI AIs: there could conceivably be something like an autoimmune disease if humans fell into the AIs' target definition.

At first glance the problem in Cloak of Anarchy is a curious one - that humans immediately revert to violent tribalism when the violence control mechanism is defunct - since Niven is elsewhere clearly sympathetic to libertarian concerns. The obvious interpretation of the story is the paternalistic one, that humans need authority to make them behave. But there's another interpretation, which is that the drones created the problem. That is to say, when we are coddled by perfect enforcement from drones, we lose the ability to exercise moral choices, as well as the ability to appreciate the consequences of poor choices. When it is physically impossible to harm another person, why learn restraint? Why worry about what happens when you pick a fight? When suddenly the daddies aren't around to break up the fights and bail everyone out, we shouldn't be surprised at what happens.

The watchbirds do exist today, although with less autonomy and more firepower. The changes are incremental; there won't be a red-carpet unveiling of AI even as profound as the release of the watchbirds (or copseyes). They'll come in areas where there's the most pressure for advance and the least opportunity for public awareness and understanding. It will be, and is, the addition of subroutines allowing a drone to apply the laws of war to a kill it's about to make (instead of getting slow permission from a JAG in an office in St. Louis who might be in the bathroom). It's the growth of autonomous stock-trading algorithms. It will even be in advertising on porn sites.

Here's the U.S. On the Moon

By Boredboarder8 on Reddit's Map Porn. I find images of Earth structures projected onto smaller bodies much more interesting than comparing them to Jupiter or the Sun. But here I think the Great Lakes wouldn't last so long.

Also at the Late Enlightenment.

Friday, February 15, 2013

Best Video of Russian Meteor

My favorite video of the Russian meteor is this one. Then again, at fireworks displays I'm one of those weirdos who turns around to watch the shadows. (Don't laugh, once I caught a pickpocket that way.) Just think of the energy released in the light and the sonic boom. Skip to 40s if you don't want to watch Siberian traffic.

Thursday, February 14, 2013

Glenn Danzig as Comedy Trope

Sadly, only in the past few months have I watched Kids in the Hall; somehow in the early 90s when everyone else was watching it I missed it. As a comedy troupe I consider them second only to Monty Python. But I do remember seeing Brain Candy when it came out in '96 and at the time I noted that if Bruce McCulloch's character Grivo wasn't an obvious (and spot-on and hilarious) imitation of Glenn Danzig, then I don't even know what to tell you.

And it has nothing to do with metal or science fiction, but this is one of my favorite Kids in the Hall pieces - and there are a million more.

Sunday, February 10, 2013

The Trooper (Igor Presnyakov rec. 2012; orig. Iron Maiden 1983)

Inquisition Symphony (rec. Apocalyptica 1998; orig. Sepultura, 1987)

Asteroid Pass This Friday - 2012 DA14

This Friday, 15 February, 2012 DA14 will pass closer to the Earth than geosynchronous satellites. Per the NASA Impact Risk site, it's a zero on the Torino Scale (a combined index of the probability that it will hit and the damage if it does). Zero means there's no reason in wildest hell to think that it would hit us, and even if it did, it wouldn't reach the surface. It's a -5 on the Palermo Scale, which is to say 100,000 times less likely than a random background event.
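For the curious, the Palermo Scale is just a base-10 logarithm of the risk relative to the background impact rate, which is where that "100,000 times less likely" figure comes from. A minimal sketch (the function name is mine):

```python
# The Palermo Technical Impact Hazard Scale is logarithmic:
#   PS = log10(p_impact / (f_B * T))
# where p_impact is the impact probability, T the years until the encounter,
# and f_B the annual background frequency of impacts of at least that energy.
# So a PS of -5 means the risk is 10**5 times below the background rate.

def relative_risk(palermo_value):
    """Risk relative to the background rate for a given Palermo Scale value."""
    return 10.0 ** palermo_value

print(relative_risk(-5))  # 1e-05: 100,000 times less likely than background
print(relative_risk(0))   # 1.0: exactly as risky as the background rate
```

Negative values mean less risky than background, zero means equal to it, and positive values are the ones that should actually worry you.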

If, despite the careful work of modern astronomers, there's someone in your social circle who insists that this is the end the Mayans (or Christians, etc.) were talking about, ask them to give you all their worldly possessions, and zany hijinx will ensue either way!