via Cartoon Machine
"‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn’t think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.’
‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage — and then it would take that advantage and start doing what it wants to in the world.’"
Omens by Ross Andersen at Aeon Magazine
"‘In 10 years,’ Kohno told me, ‘computers will be everywhere we look, and they’ll all have wireless. Will you be able to compromise someone’s insulin pump through their car? Will you be able to induce seizures by subverting their house lights? Will you be able to run these exploits by cell phone? What’s possible? It’s more like “What won’t be possible?”’"
Look Out—He’s Got a Phone! by Charles C. Mann at Vanity Fair
"Once, Ketchum walked into his office and found a barrel the size of an oil drum standing in a corner. No one explained why it was in his office, or who had put it there. After a couple of days, he waited until evening and opened it. Inside, he found dozens of small glass vials, each containing a precisely measured amount of pure LSD; he figured there was enough to make several hundred million people go bonkers—and later calculated the street value of the barrel to be roughly a billion dollars. At the end of the week, the barrel vanished just as mysteriously as it had appeared. No one spoke about it. He never learned what it was for."
Operation Delirium by Raffi Khatchadourian from the Dec. 17, 2012 New Yorker
"Taleb has no use for the “charlatanic” field, comparing economic research to medieval medicine. Economists are, in his estimation, weak, ignorant, fearful, and generally pathetic. At one point he fantasizes about beating up an economist in public."
This Is Not a Profile of Nassim Taleb (might be gated) at the Chronicle of Higher Education.
The comments on this Kevin Drum post must be some sort of OWS meta-commentary on derpitude.