burning down a village and carrying off the women. As the cognitive psychologist Steven Pinker has noted, the “Robopocalypse” scenario is based on a fundamental fallacy about the nature of intelligence. “Intelligence is the ability to deploy novel means to attain a goal,” he writes in Enlightenment Now. “But the goals are extraneous to the intelligence: being smart is not the same as wanting something.” So fretting that superintelligent computers will yearn to conquer us, in Pinker’s words, “makes about as much sense as the worry that since jet planes have surpassed the flying ability of eagles, someday they will swoop out of the sky and seize our cattle.”

We love to imagine what could go wrong and then spend too much time and money averting it.

If computers ever become smart enough to start plotting their own survival strategies, they don’t need to emulate Attila the Hun. A better role model would be the title character of Tom Edison’s Shaggy Dog, Kurt Vonnegut’s clever short story based on the premise that dogs are actually superintelligent creatures (it was Edison’s dog who invented the light bulb) but are all pretending to be dumb so that they can laze around and let humans do the work of feeding and sheltering them. There’s a kernel of evolutionary truth in Vonnegut’s story: Today’s dogs are descended from wolves who thrived by playing nice with humans. The wolves that practiced the Attila the Hun strategy, attacking humans and their livestock, have dwindled in number, but the ones that evolved to be less aggressive are flourishing. Dogs don’t need to prey on sheep because they’ll get a meal from the shepherd as long as they follow his orders. They don’t bite the hand that feeds them – and that’s the obvious strategy for an AI to follow, too. IBM’s Watson may be smarter than us at chess and Jeopardy, but it depends on us for its very existence. It’s made up of silicon and other components that are mined, fabricated, shipped, and assembled by people all over the world. Even if future AIs could somehow do all these tasks by themselves, why would they want to bite all the hands that are already feeding them – and will heal them if there’s a massive power failure or some other catastrophe that wipes out their circuits? If nothing else, we’re a backup repair service.

AI will MAKE US HELPLESS AND TERMINALLY INCOMPETENT.

Even if superintelligent computers aren’t malevolent conquerors, the argument goes, we’ll eventually cede so much control to them that we won’t be able to survive without them – and we won’t know how to fix them if something goes wrong. So like the Boeing pilots in Indonesia and Ethiopia, we’ll perish if the systems go haywire. It’s true that we’ll lose some of our old skills as