Professional February 2018

Feature insight - automation, AI and robotics

are born without in-built mental content and therefore all knowledge comes from experience or perception (i.e. nurture rather than nature) (http://bit.ly/1TxAkr9). Late in 2017, DeepMind’s AlphaZero programme, the successor to AlphaGo Zero, was applied to the games of chess, shogi and Go, without any additional domain knowledge except the rules of the game, demonstrating that a general-purpose reinforcement learning algorithm can achieve, tabula rasa, superhuman performance across many challenging domains. The AlphaZero algorithm self-played 44,000,000 games of chess, 24,000,000 games of shogi, and 21,000,000 games of Go.

In his book, Thinking, fast and slow, Daniel Kahneman observes that an “expert [chess] player can understand a complex position at a glance, but it takes years to develop that level of ability…10,000 hours of dedicated practice (about six years of playing chess five hours a day) are required to attain the highest levels of performance”. AlphaZero learnt to play expert chess in just four hours, and in so doing discovered by itself the standard chess opening ideas and variations that have taken humans more than 100 years to develop.

AlphaZero’s algorithm enabled it to succeed against Stockfish (the world champion of chess engines) and Elmo (a shogi-playing programme) even though it searches (evaluates) far fewer positions per second: 80,000 in chess and 40,000 in shogi, compared to 70,000,000 for Stockfish and 35,000,000 for Elmo. AlphaZero compensates for the smaller number of evaluations by using its deep neural network to focus much more selectively on the most promising variations – arguably a more ‘human-like’ approach, which might imply that it has been programmed to work (think) this way.

Though deep neural network models are currently the most successful machine-learning technique for solving a variety of tasks – such as language translation and image classification/generation – a weakness of these models is that, unlike humans, they are unable to learn multiple tasks sequentially. In the paper Overcoming catastrophic forgetting in neural networks – published in early 2017 in Proceedings of the National Academy of Sciences of the United States of America (http://bit.ly/2hVVYYX) – a team at DeepMind revealed that it had developed a practical solution: a programme that can learn one task after another using skills it acquires on the way. James Kirkpatrick, at DeepMind, observed that “If we’re going to have computer programmes that are more intelligent and more useful, then they will have to have this ability to learn sequentially.”

Humans can naturally remember old skills and apply them to new tasks, but creating this ability in computers is proving challenging, as AI neural networks learn to play games – such as chess, Go or poker – through trial and error. Once trained, the neural network can only learn another game by overwriting its existing game-playing skill – thereby suffering ‘catastrophic forgetting’. The DeepMind team let the programme play ten classic Atari games in random order, and found that after several days on each it was as good as a human at seven of them. Though it had learned to play different games, the programme had not mastered any one as well as a dedicated programme would have. Kirkpatrick commented that “we haven’t shown that [the programme] learns them better because it learns them sequentially. There’s still room for improvement.”

Thinking, comprehension

Some people are uncomfortable with the concept that machines think (and learn), with some dismissing the notion or expressing concerns about potential future developments. Philosopher John Searle has argued that IBM’s Watson cannot actually think, claiming that like other computational machines it is capable only of manipulating symbols, and has no ability to understand their meaning. In his ‘Minds, brains, and programs’ (published in Behavioral and brain sciences, 1980), Searle set out the following thought-experiment, now generally known as the ‘Chinese room argument’:

“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing test for understanding Chinese but he does not understand a word of Chinese.”

Searle argues that the thought-experiment shows that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics, and that it thereby refutes the notion that human minds are computer-like computational or information-processing systems. Instead, ‘minds’ must result from biological processes; computers can at best simulate those processes.

The Chinese room argument is probably the most widely discussed philosophical argument in cognitive science to appear since the Turing test. It has implications for semantics, philosophy of language and mind, theories of consciousness, and computer and cognitive sciences. And, of course, there are several counter-arguments, not least ‘the systems reply’, which essentially holds that the man in the room is merely a part – a central processing unit – of a larger system that does understand Chinese. Other arguments against and in support of Searle’s thought-experiment are explained in the article ‘The Chinese room argument’ published in the Stanford Encyclopedia of Philosophy (http://stanford.io/2AMrNyC). The article concludes: “The many issues raised by the Chinese room argument may not be settled until there is a consensus about the nature of meaning, its relation to syntax, and about the biological basis of consciousness.”

Impact on jobs

Setting aside the philosophical issues, AI has the potential to transform work, with some
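The contrast between brute-force engines and AlphaZero’s selective search comes down to how quickly a game tree grows. A minimal sketch makes the point (the branching factors and depth here are invented for illustration, not AlphaZero’s or Stockfish’s actual figures): keeping only a handful of promising candidate moves at each position, as a policy network does, collapses the number of positions that must be evaluated.

```python
# Why selective search matters: a full-width search examines every legal
# move at every level, while a policy-guided search keeps only a few
# promising candidates. Figures below are illustrative, not real engine data.

def positions_searched(branching: int, depth: int) -> int:
    """Leaf positions examined by a search that keeps `branching`
    candidate moves per position, to the given depth."""
    return branching ** depth

full = positions_searched(30, 5)      # brute force: ~30 legal moves per position
selective = positions_searched(3, 5)  # a policy keeps ~3 promising moves

print(full, selective)  # 24300000 vs 243
```

The same depth of lookahead is reached while evaluating orders of magnitude fewer positions, which is the trade AlphaZero makes: far fewer, but far better chosen, evaluations.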
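The core idea of the DeepMind paper (elastic weight consolidation) is to slow learning on weights that mattered for earlier tasks by adding a quadratic penalty anchoring them to their old values. A one-parameter toy, with made-up targets, learning rate and penalty strength, shows the effect: unconstrained training on a new task erases the old solution, while the anchored version settles on a compromise between the two.

```python
# Toy illustration of catastrophic forgetting and a quadratic anchor
# penalty in the spirit of elastic weight consolidation (EWC).
# Targets, learning rate and penalty strength are invented for illustration.

def train(theta, target, steps=200, lr=0.1, anchor=None, lam=0.0):
    """Minimise (theta - target)^2, optionally plus lam*(theta - anchor)^2."""
    for _ in range(steps):
        grad = 2 * (theta - target)
        if anchor is not None:
            grad += 2 * lam * (theta - anchor)   # pull back toward old solution
        theta -= lr * grad
    return theta

theta_a = train(0.0, target=1.0)                 # learn task A: theta -> 1.0
forgot = train(theta_a, target=-1.0)             # task B, no penalty: A erased
kept = train(theta_a, target=-1.0,
             anchor=theta_a, lam=1.0)            # task B with anchor: compromise

print(theta_a, forgot, kept)  # ~1.0, ~-1.0, ~0.0
```

Without the penalty the parameter ends at the task B optimum and all trace of task A is gone; with it, the final value sits between the two optima, retaining some task A skill at a small cost to task B.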
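Searle’s claim that computers merely apply syntactic rules to symbol strings can be made concrete with a toy sketch (the rule book and symbols below are entirely invented, not from Searle or the article): a lookup table emits a ‘correct’ answer for each recognised input shape, yet nothing in the program represents meaning.

```python
# A toy 'Chinese room': purely syntactic symbol manipulation.
# The rule book and symbols are invented for illustration only.
RULE_BOOK = {
    "你好吗?": "我很好。",        # "How are you?" -> "I am well."
    "天空是什么颜色?": "蓝色。",   # "What colour is the sky?" -> "Blue."
}

def room(symbols: str) -> str:
    """Follow the instruction book: match the input's shape and pass out
    the listed output. This is string lookup; no meaning is modelled."""
    return RULE_BOOK.get(symbols, "不知道。")   # default: "I don't know."

print(room("你好吗?"))  # a 'correct' answer, produced with zero understanding
```

A large enough rule book could pass a narrow question-answering test, which is exactly Searle’s point: correct input-output behaviour alone does not establish understanding.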


| Professional in Payroll, Pensions and Reward | Issue 37 | February 2018
