of the workforce that will be replaced by AI,” Fischer said. That’s why a big part of the ethical discussion needs to be focused on training and retraining a workforce that will inevitably lose jobs. Bridging this gap certainly comes with challenges.

Fortunately, the tech industry has been proactive in addressing disruption. Google, most notably, has been hosting events on AI for the past few years. In 2018, California state and local leaders were invited to the campus to discuss how inevitable disruption within the workforce will impact citizens in the short and long term. For its part, the tech industry, which is already facing
a shortage of talent, is advocating hard for training and retraining people who risk being replaced. Many of the top tech companies, including IBM, Microsoft and Facebook, have also established ethics boards, while innovators such as Elon Musk have championed strict regulation. At SXSW last year, Musk famously warned that, left unchecked, AI is potentially more dangerous than nuclear weapons.

Fear, Loathing and Regulation

Two areas that tend to draw the most ire from critics are militarization and law enforcement. Drones that use AI, for example, have already been used to bomb opponents during war, while the police department in Dallas came under scrutiny several years ago after it used an AI-enabled drone to deliver explosives. Meanwhile, the U.S. Air Force is already experimenting with pilotless planes intended for battle, while federal and local investigators are using facial recognition to track and identify potential suspects, all thanks to advances in AI. And it’s only the beginning.

“We’re moving into an age where all of your movement is tracked through geolocation and all the people you’re talking to on social media are tracked not by humans, but by algorithms,” Fischer said.
“There are many areas of the workforce that will be replaced by AI. That’s why a big part of the ethical discussion needs to be focused on training and retraining a workforce that will inevitably lose jobs. Bridging this gap certainly comes with challenges.”
Marc Fischer, CEO and Co-founder, Dogtown Media
These ethical debates have prompted the EU to establish “Ethics Guidelines for Trustworthy AI,” something that many U.S.-based companies such as IBM are eagerly embracing. According to Francesca Rossi, IBM’s AI ethics global leader, “The guidelines recognize that there is no ‘one-size-fits-all’ solution.” Rossi is one of many tech leaders from around the world who have been working with the European Commission to develop ethics, policy and investment recommendations around AI. Two major components of “Trustworthy AI” are that it should “respect fundamental rights, applicable regulation, and core principles and values, ensuring an ‘ethical purpose,’” and that “It should be technically robust and reliable since, even with good