American Consequences - June 2019

computers do our work for us. If self-driving cars become common, a lot of people will prefer to rely on computer chauffeurs and not bother to learn how to drive themselves. The computers' safety record will be so much better than humans' that there'll probably be bureaucrats and activists campaigning to outlaw human drivers. But there will also be people reluctant to cede all control to a computer, as well as traditionalists who still prefer driving themselves, just as there are people who still like to bake their own bread and create their own pottery even though machines can do the job more efficiently. The ability to drive a car will not be lost forever.

But what if some virus suddenly strikes all the world's cars, causing them to careen off the road or crash into each other while their humans sit there helplessly? Or what if a computer running the world's power grid crashes, or if some glitch sends armies of drones to bomb cities while human commanders sit there powerless to stop them? Those are the kinds of nightmare scenarios that AI-phobes like to imagine, to which the best answer is: Really? We're supposed to believe that humans are smart enough to build advanced computers but too dumb to design any safeguards or notice any vulnerabilities until it's too late to save ourselves?

In reality, we're prone to err in the other direction — to fear new technologies so much that we cling to the old ones for too long or take unnecessary precautions. Railroads kept using brakemen and flagmen long after their functions had been automated. Some buildings still have elevator operators. The risk of an airliner being hijacked in the post-9/11 era is minuscule now that cockpit doors are locked, but federal air marshals are still riding planes.

We love to imagine what could go wrong and then spend too much time and money averting it. In the late 1990s, the world's computer networks were supposedly going to be incapacitated when the Millennium Bug flummoxed operating systems unprepared for a year ending in 00... But January 1, 2000 passed with few problems, even in the countries that spent little money preparing for it.

Of course, there will always be AI glitches that we don't anticipate, but we can always respond the way we did to the problems in the Boeing 737 MAX's computer. It took just two crashes for humans to ground the whole fleet of planes. When AI goes bad, there's one simple and immediate solution: Pull the plug.

John Tierney is a contributing editor at City Journal and a contributing science columnist at the New York Times. He is the co-author, with Roy Baumeister, of Willpower: Rediscovering the Greatest Human Strength.

AI: IS IT COMING TO GET YOU?


