In a simulation, Worth, Sigurdsson, and House (2013) (WSH) retrodict that Europa, Callisto, Titan, and Enceladus have received roughly 1,900, 370, 510, and 340 metric tons of material from Earth, respectively, with another 3.4 billion metric tons of Earth ejecta leaving the solar system entirely. Enceladus may be less interesting if it really did form only in the Cretaceous, but the others have been there since the start of the solar system. WSH state explicitly that they didn't try to estimate the viability of life surviving the journey, but it cannot be repeated enough that we now have evidence that living things - metazoans, in fact - can survive uncontrolled re-entry with minimal protection: some nematodes aboard the Columbia were found alive on the ground weeks after the disaster. We can now start putting bounds at least on local panspermia (within our own solar system), though it would be interesting to estimate the chances of gravitational capture by surrounding stars as well. This is exciting not only for fleshing out the realism of panspermia as traditionally considered, but also for the idea of very small, molecule- or cell-sized organic von Neumann probes passively spreading between lower-gravity bodies.
A very basic calculation using water surface area and the time it took for life to appear on Earth suggests that, all other things being equal, there is a 1-in-3 chance of indigenous life on Europa. An experiment reproducing impact conditions and the local conditions on these moons, seeded with the Earth fauna most likely to have made the transfer, seems fairly easy to do - a sort of Miller-Urey experiment for local panspermia.
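For concreteness, here is a toy version of that kind of scaling in Python. Every number below is my own placeholder assumption, not necessarily an input to the 1-in-3 figure above:

```python
# Toy scaling: assume the odds of abiogenesis go as (wet area) x (time available).
# All values are illustrative assumptions, not the post's actual inputs.

EARTH_OCEAN_AREA_KM2 = 3.6e8    # ~71% of Earth's surface
TIME_TO_LIFE_GYR = 0.5          # rough gap between Earth becoming habitable and first life

EUROPA_OCEAN_AREA_KM2 = 3.1e7   # Europa's total surface area, as an ocean-extent proxy
EUROPA_OCEAN_AGE_GYR = 4.0      # assumed age of a persistent subsurface ocean

earth_exposure = EARTH_OCEAN_AREA_KM2 * TIME_TO_LIFE_GYR
europa_exposure = EUROPA_OCEAN_AREA_KM2 * EUROPA_OCEAN_AGE_GYR

# A ratio near or above 1 would mean Europa has had at least as much
# "opportunity" as Earth had before life appeared here.
print(f"Europa/Earth exposure ratio: {europa_exposure / earth_exposure:.2f}")
```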
R.J. Worth, Steinn Sigurdsson, and Christopher H. House. Seeding Life on the Moons of the Outer Planets via Lithopanspermia. Astrobiology, Dec 2013.
Friday, March 29, 2019
Asteroid Bennu is Ejecting Dust; Also, "Alien Tech" on Bennu? (Hint: No)
One solution to the Fermi paradox is that the evidence is around us and we just haven't noticed it yet. An estimate of 20 million years to colonize the whole Milky Way has been advanced, and if von Neumann probes (VNPs) are possible, then they should be around us. We tend to think of VNPs as industrial-age metal objects like Apollo 11, but organic VNPs could diffuse between low-gravity bodies when interstellar comets or asteroids pass through solar systems. Therefore, when we start finding interesting high-molecular-weight organic polymers with non-random monomer sequences in sample-return specimens from asteroids like Bennu, we should seriously consider that this might be what we're looking at. It is interesting, then, that Bennu is ejecting material (see here and here), which would be required if the VNPs are cellular- or molecular-scale chemical replicators that spread passively.
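As a toy illustration of what "non-random" could mean operationally (this is a hypothetical screen, not any actual sample-return analysis pipeline), one could compare a polymer's monomer-usage entropy against the maximum for its alphabet:

```python
from collections import Counter
from math import log2

def entropy_per_monomer(seq: str) -> float:
    """Shannon entropy of monomer usage, in bits per monomer."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# With a 4-letter monomer alphabet the maximum is log2(4) = 2 bits.
# Low compositional entropy is only one crude signal of structure, of course.
uniform = "ABCD" * 25            # even usage of all four monomers
biased = "AABAABAABAAC" * 8      # skewed, repetitive usage

for label, seq in [("uniform", uniform), ("biased", biased)]:
    print(label, round(entropy_per_monomer(seq), 3), "bits (max 2.0)")
```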
Here ends the serious part.
Admittedly this is an extraordinary hypothesis, and it requires correspondingly extraordinary evidence to support it. However, noticing that asteroids have boulders on them isn't extraordinary.
Look! Boulders! And they're circled! Gosh, that MUST be alien tech!
One excitable UFO-hunting schmendrick insisted (after doing a complex analysis in Microsoft Paint, i.e. circling the boulders) that Bennu is "littered with alien tech". Then so is the construction site near my house! So is the desert! There are boulders all OVER the place - my gosh, we're surrounded by alien tech! Run! Apparently skulls have also been spotted. Space pirates? This is a particularly morbid form of pareidolia.
Thursday, March 28, 2019
Security Precautions for Keyless Ignition vs Remote Entry
This is certainly the most practical post in the history of this blog, but it still involves the hazards of relying on modern technology - specifically, of choosing power over reliability, a characteristic trade-off of the twenty-first century.
Some months ago I came home from vacation to find my car door open - not wide open, in fact as shut as it could be without actually being closed. Of course the battery was stone dead. I worried that this meant my electronic entry system had been hacked (otherwise, I'd have to accept that I'm the kind of idiot who doesn't close his car door all the way before leaving for vacation). Recently, another person on my street had both of their cars entered without any sign of forced entry, and they're quite sure both cars were locked. Neighbors have been advising me to keep my car keys in a Faraday cage when I'm out of the house.
If this is actually a problem, you can't rely on car manufacturers and dealerships to tell you about it, despite it being clearly documented as something that happens in the real world. (Also: yes, I do physically block the cameras on my devices when I'm not using them - and yes, my wife once caught a virus that started taking photos from her camera - and yes, when James Comey was director of the FBI, he recommended covering your device cameras too.)
For grins, I called my Honda dealership and asked the service department about this. With what sounded like a straight face, the technician told me that this has never happened. When I told him it's been documented on video, his counterargument was: no it hasn't, it's never happened. So they weren't much help. Ultimately I called a locksmith who verified what I thought must be the case about the various systems. (If you're reading this and I have anything wrong, PLEASE comment.)
The reason I'm posting this is that I had so much difficulty answering the question by searching online, partly because there's no consistent terminology for these systems. That's why I ended up calling the (useless) dealership and the (quite helpful) locksmith. So let's define terminology.
A remote door opener (the older, original kind, which is the kind I have) requires you to press a button to send a radio signal, which unlocks the car or opens the trunk. There may be a chip in the key that the car requires before it will start - but you still have to put the key in the ignition and turn it. Most of these use a rolling code that changes with every press: the fob and the car step through a synchronized sequence of one-time codes, so each press transmits a fresh code and a previously used (or recorded) one won't work again.
To review: remote door openers only send a signal when you press the button, and you have to physically turn the ignition.
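A minimal sketch of that synchronized-counter idea (real rolling-code systems like KeeLoq differ in the details; the key and window size here are my own stand-ins):

```python
import hmac, hashlib

SECRET = b"paired-at-the-factory"  # hypothetical key shared by fob and car
WINDOW = 16                        # look-ahead for presses the car didn't hear

def code_for(counter: int) -> bytes:
    """The one-time code the fob transmits for a given press count."""
    return hmac.new(SECRET, counter.to_bytes(8, "big"), hashlib.sha256).digest()

class Car:
    def __init__(self):
        self.counter = 0  # last press count the car accepted

    def try_unlock(self, code: bytes) -> bool:
        # Accept any code within the look-ahead window, then resynchronize.
        for c in range(self.counter + 1, self.counter + 1 + WINDOW):
            if hmac.compare_digest(code, code_for(c)):
                self.counter = c
                return True
        return False  # invalid, or a stale/replayed code

car = Car()
press = code_for(1)
print(car.try_unlock(press))  # True:  fresh code unlocks
print(car.try_unlock(press))  # False: replaying the same code fails
```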
Keyless entry may require you to press a button on a key fob to open the door, or it may just automatically unlock the car when you're standing right next to it. Then, once you're in the car, there's no ignition to turn, just a button to push. The car will only start if the key fob is very close (i.e., inside the car). In this case, the key fob is constantly sending out a signal without you doing anything - the car needs it in order to start, and (if your locks open automatically without pushing any buttons) to open the locks as well.
To review: keyless entry constantly sends out a signal, and when you're close enough to the car, the car will let you turn it on (and may unlock automatically).
The "hack" is called a signal amplification relay attack (SARA), and it's really only useful against keyless entry, not remote door openers. Why? Your keyless entry fob is designed to transmit a very weak signal, so your car can't detect it from more than a few feet away. But if the key is close enough to an outside wall of your house, a criminal with a special device can pick up the coded signal the fob is constantly transmitting, amplify it, and relay it to a second device sitting right next to the car, which repeats the code. No code-breaking is required. The car thinks the fob is right there, opens, and allows itself to be driven. (This is exactly what they did in the link above.)
You could conceivably do the same thing to a remote door opener like mine, but the bad guys would have to be sitting on your street with their devices to record the signal when you press it. The weakness of keyless entry is that it's constantly transmitting and actually lets you start the car. This is in contrast to remote door openers, which only transmit when you press the button - and even then you still need the physical key to start the car.
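Here's a toy model of why the relay works: the car treats "I can hear the fob" as "the fob is nearby," and the relay breaks that assumption without touching the cryptography. Everything below is schematic, of course:

```python
class Fob:
    """Stand-in for the key fob: answers the car's challenge correctly."""
    def respond(self, challenge: int) -> int:
        return challenge ^ 0xCAFE  # placeholder for the real cryptographic response

class Relay:
    """Attacker hardware: one box near the car, one near your front door,
    forwarding radio traffic in both directions, unmodified."""
    def __init__(self, distant_fob: Fob):
        self.fob = distant_fob
    def respond(self, challenge: int) -> int:
        return self.fob.respond(challenge)  # forwarded over the attacker's own link

class Car:
    challenge = 1234
    def try_start(self, fob_like) -> bool:
        # The car only checks that the answer is right, not how far it traveled.
        return fob_like.respond(self.challenge) == (self.challenge ^ 0xCAFE)

fob_inside_house = Fob()
print(Car().try_start(fob_inside_house))         # True: fob genuinely nearby
print(Car().try_start(Relay(fob_inside_house)))  # True: relay is invisible to the protocol
```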
CONCLUSION: if you have keyless entry, then YES, you should keep your keys in a Faraday cage - especially when they'll be sitting unused for a long period (say, while you're on vacation). Most people recommend wrapping them in aluminum foil or putting them in a coffee can inside your refrigerator. (I'm only repeating what I've read, and make no claim as to whether this is actually enough to defeat SARA devices.) This attack would be much harder to pull off against one of the older remote-entry keys, and I am definitely not planning to get a keyless entry fob. The risk:benefit is obvious: the only benefit of keyless entry is literally that you don't have to press a button or move your arm to turn a key - in exchange for exposing yourself to this security problem.
Friday, March 22, 2019
AIs Are Beating Experts at Old, Relevant, Real-World Problems
When I was a molecular biology undergrad in the mid-90s, the Big Question in biochemistry was protein folding - how to predict the 3D structure of a protein from its primary (linear) sequence of amino acids. Solving this problem would be a massive boon to many fields, not least drug development. But it's a hard problem: the factoid cited to impress this upon people (Levinthal's paradox) is that if a protein folded by passing through every possible configuration at random until it found its working (not usually lowest-energy) conformation, the process would take longer than the lifespan of the universe so far. Obviously that isn't what's happening. And yet the promise of number-crunching power finally solving this always seemed just a few years away - many people, in retrospect comically, thought the problem would be solved by the close of the century.
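For scale, here is that back-of-envelope arithmetic, with the usual illustrative assumptions (a 100-residue protein, three conformations per residue, sampling at roughly bond-vibration speed):

```python
# Levinthal-style estimate; all numbers are the standard illustrative choices.
conformations = 3 ** 100    # ~5e47 states for a 100-residue chain, 3 per residue
rate_per_sec = 1e13         # conformations sampled per second (bond-vibration scale)
age_of_universe_s = 4.3e17  # ~13.8 billion years, in seconds

search_time_s = conformations / rate_per_sec
print(f"Exhaustive search: {search_time_s:.1e} s, "
      f"or {search_time_s / age_of_universe_s:.1e} lifetimes of the universe")
```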
The problem is still not solved, but scientists have made incremental progress - and now, the closest thing yet to a quantum leap forward. Every two years there is a contest (CASP, the Critical Assessment of protein Structure Prediction) where scientists competitively try to predict structures, then get together and see who got closest. Mohammed AlQuraishi is a computational biologist at Harvard who writes about the most recent meeting and its winner: DeepMind's AI, AlphaFold.
This is exciting because computational biology may really be about to start paying dividends. It's scary because it's one more place where AI is starting to automate our jobs - even Harvard professors'. AlQuraishi makes many interesting observations in his post, among them that a technology company showed up and ate the protein chemists' lunch, and the reaction was muted; we're all becoming numb to this sort of thing. He tries to wake us up by asking how academic computer scientists would react if, at one of their own conferences, a pharmaceutical company's scientists had shown up and beaten them at their own game.
I had previously made the informal argument that AIs would naturally exceed human performance at games, since games are clearly defined processes with discrete rules and entities - exactly the sort of thing computers are good at. The difficulty comes when machines must interact with the information-overloaded, not-fully-understood, messy real world. That's why (again, I informally argue) computers are great at parsing text, and even at writing text in the style of sports journalism or certain authors - but at bottom these are all just very complicated echo chambers, with no meaning or subjective experience attached to the words and sentences. And indeed, language manipulation has come first, but that doesn't mean encoding external experience in language is impossible. I had perhaps subconsciously assumed that something like protein folding - a perfect example of a messy real-world process - would remain beyond the capabilities of machines for many years, but that theory has now been clearly falsified.
Saturday, March 9, 2019
STOP METI
tl;dr: if you worry about the singularity or bioterrorism as an existential threat, you should help stop efforts to announce our presence to aliens.
This should be a priority for the rationalist and effective altruism community.
Many people are familiar with the SETI project - the Search for Extraterrestrial Intelligence. METI stands for Messaging Extraterrestrial Intelligence. Stephen Hawking, Elon Musk, and Freeman Dyson have all called this incredibly stupid and dangerous.
Of the 13 (known) attempts to deliberately signal another star so far, the earliest that any response could be received is 2036. The earliest that any response could be received from a sun-like star with planets in the habitable zone is 2085. Additional attempts are almost certain to be made in the next few years.
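The arithmetic behind those dates is just round-trip light travel time: a reply can't arrive earlier than the send year plus twice the target's distance in light-years. A quick sketch, with made-up example numbers rather than any specific real transmission:

```python
def earliest_reply_year(send_year: int, distance_ly: float) -> float:
    """Earliest possible reply: signal out plus signal back, both at light speed."""
    return send_year + 2 * distance_ly

# Hypothetical example: a transmission sent in 1999 to a star 18.5 light-years away.
print(earliest_reply_year(1999, 18.5))  # 2036.0
```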
Most scientists recognize how important - in fact, world-shattering - contact with aliens would be, and there's actually a protocol for what scientists should do if a message is received. But these attempts are being made by individuals or small groups, with no oversight, often with really stupid justifications. One was an art project; this one invited kids to compose the message.
The argument against deliberate messaging is that even here on Earth, contact between members of the same species with differing levels of technology has been catastrophic - not just for the humans on the less advanced side, but for whole ecosystems. A visit from aliens with enough technology to detect or visit us would therefore likely be devastating, even if they had no malicious intentions. Once we're detected, we can never be un-detected.
The arguments for METI are laughable, and best thought of in terms of a Native American on the shores of the Atlantic talking about building signal fires to bring the Europeans over even sooner. Their best arguments are:
- Aliens might already have noticed us anyway. (So why make it more likely?)
- It's extremely unlikely anyone will get the message. (So why do it at all?)
- They won't come for a long time. (If we discovered a form of energy that would start poisoning our descendants in 10,000 years, would we use it?)
- If all the other aliens are remaining silent, then by doing the same we're part of the problem, and we're hypocrites for trying to detect others. (Given the risk:benefit, that's a trade most of us would be comfortable making.)
- Aliens who can respond or come here will necessarily be moral beings and won't hurt us. (This one is really absurd; it not only makes assumptions about the aliens' intentions, it sounds very much like a religious conviction.)
You can see a more thorough treatment of these arguments by a SETI expert in this paper, and the abstract finishes with these sentences: "Arguments in favor of METI are reviewed. It is concluded that METI is unwise, unscientific, potentially catastrophic, and unethical."
You can see some of these arguments being made with a straight face by METI's founder Doug Vakoch in this article. Note that METI is a splinter of SETI, since most of the scientists involved in SETI forbade active communication attempts.
Going forward I'm going to do my best to raise awareness in the rationalist community and have this prioritized as an existential threat alongside AI. My plan is to contact the people at SETI to see what they're already doing and how others can contribute. The best approach for now seems to be stopping transmissions by blocking individual projects, but ideally there would be a law against this, along with norms that socially punish defectors. To end on an optimistic note: because there is a chokepoint (money, and limited time on transmitters that are mostly controlled by universities), this problem is actually much more tractable than avoiding a "bad hard takeoff" of general AI.