Professional February 2018

Feature insight - automation, AI and robotics

The Turing test

In 1950, while working at the University of Manchester, Turing introduced a test in his paper Computing machinery and intelligence to consider whether machines can think. The so-called Turing test – which tests a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human – is an important concept in the philosophy of AI. Turing proposed that a human evaluator would judge natural language conversations conducted via a text-only channel (e.g. keyboard and screen) between a human and a machine designed to generate human-like responses. The machine passes the test if the evaluator cannot reliably distinguish it from the human. Though the Turing test has been proposed as a measure of a machine’s ability to think, or of its intelligence, it has been criticised by philosophers and computer scientists. Indeed, some AI researchers have questioned the relevance of the test, arguing that trying to pass it is a distraction.

A ‘reverse Turing test’ places the challenge on the machine/computer to determine whether it is interacting with a human or another computer. Such a test is now widely used to prevent machines gaining access to, or interacting with, content on websites, i.e. CAPTCHA (completely automated public Turing test to tell computers and humans apart). Solving a CAPTCHA usually requires entering a set of characters or selecting a set of images. Text-based CAPTCHAs demand three separate abilities – invariant recognition, segmentation and parsing – which makes the task difficult when all three are required at once. Even in isolation, each of these poses a significant challenge for a computer. It is argued that CAPTCHAs serve as a benchmark task for AI technologies: if an AI could accurately solve a CAPTCHA without exploiting inherent design flaws, it would thereby have resolved the problem of developing an AI capable of complex object recognition in scenes. (http://bit.ly/1YzJ81L)
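As a rough illustration of the character-entry style of CAPTCHA described above, here is a minimal sketch in Python of the generate-and-verify flow. It is a toy under stated assumptions: a real CAPTCHA service would render the challenge as a distorted image (forcing the invariant recognition, segmentation and parsing mentioned above), expire challenges quickly and rate-limit attempts.

```python
import secrets
import string

# Toy sketch of a character-based CAPTCHA (not a real service):
# generate a random challenge, then check the user's typed response.
ALPHABET = string.ascii_uppercase + string.digits

def new_challenge(length: int = 6) -> str:
    """Generate a random challenge string to show to the user."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def verify(challenge: str, response: str) -> bool:
    """Accept the response if it matches the challenge (case-insensitive)."""
    return response.strip().upper() == challenge

if __name__ == "__main__":
    challenge = new_challenge()
    print("Challenge:", challenge)  # in practice, rendered as a noisy, distorted image
    answer = input("Type the characters: ")
    print("Match?", verify(challenge, answer))
```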

Mind games, and algorithms

It is remarkable that efforts to create AI that can think and learn have focussed on the game of chess. In early December 2017, a DeepMind team published the paper Mastering chess and shogi by self-play with a general reinforcement learning algorithm (http://bit.ly/2AT7tLT). It notes that “The game of chess is the most widely-studied domain in the history of artificial intelligence” and that “The study of computer chess is as old as computer science itself”.

In 1948, Turing and his former undergraduate colleague, David Champernowne, began writing a chess programme for a computer which at that time did not exist. In 1952, Turing attempted to implement it on the world’s first commercially available general-purpose electronic computer: a Ferranti Mark 1 (also known as the Manchester Electronic Computer) (http://bit.ly/2i3Isss). Lacking sufficient computing power, the programme could not be executed; so, instead, Turing acted as the ‘computer’ in a game against his colleague Alick Glennie, ‘running’ the programme by flipping through the pages of the algorithm, taking about half an hour per move, and carrying out the instructions at the chessboard. According to former world chess champion Garry Kasparov, the programme “played a recognisable game of chess”.

In 1997, IBM’s chess-playing computer, Deep Blue, succeeded in defeating Kasparov. Some, however, argued that it only used brute-force methods, not real intelligence. After Deep Blue’s victory over Kasparov, IBM looked for a new challenge. In 2004, IBM’s research manager Charles Lickel proposed that IBM develop a system to compete in the TV game show Jeopardy! In 2011, IBM’s Watson computer system – which was developed by a research team in IBM’s DeepQA project and named after the company’s first chief executive officer, Thomas J. Watson – competed on the gameshow against two former winners, winning the first prize of $1 million (which was donated to charity).

To compete, Watson – which had to answer questions posed in natural language – was able to access 200,000,000 pages of structured and unstructured content, consuming four terabytes of disk storage and including the full text of Wikipedia; however, it was not connected to the Internet during the game. During the planning of the competition, conflicts arose between IBM and the Jeopardy! team. IBM’s concern that the show’s writers would exploit Watson’s cognitive deficiencies when writing the clues, thereby turning the game into a Turing test, was resolved by agreement that a third party would randomly pick the clues from previously written shows that were never broadcast. Though IBM agreed to the request from the show’s team that Watson physically press a button, the robot operated the buzzer faster than its human competitors. Despite consistently outperforming the human opponents, Watson had trouble in a few categories, notably those with short clues containing only a few words. (http://bit.ly/2Bp17k4)

In early 2017, an AI called Libratus beat four of the world’s best poker players in a twenty-day poker tournament. In addition to working with imperfect information (i.e. not all the cards are ‘visible’), Libratus had to bluff and interpret misleading information to win. Tuomas Sandholm, professor of computer science at Carnegie Mellon University, who, along with PhD student Noam Brown, built Libratus, said: “We didn’t tell Libratus how to play poker. We gave it the rules…and said ‘learn on your own’.” Over the course of playing trillions of hands, Libratus refined its approach and arrived at a winning strategy. Each day, after play ended, Brown connected Libratus to the Pittsburgh Supercomputing Center’s Bridges computer to run algorithms that improved its strategy, and the following morning would spend two hours getting the enhanced AI up and running for the next round.

In May 2017, Google DeepMind’s AlphaGo programme, which uses a combination of deep neural networks and a search technique, defeated Ke Jie, China’s world champion at Go. An average of 200 moves are possible at each turn in Go, which means that searching all the possible moves and outcomes would take an enormous amount of computing power that, some say, may be impossible. The game of Go is largely about patterns rather than a set of logical rules.
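To give a sense of the scale involved, the sketch below (a Python illustration, using the article’s figure of roughly 200 legal moves per turn in Go, a commonly quoted approximation of about 35 for chess, and look-ahead depths chosen purely for illustration) shows how quickly the number of positions a naive exhaustive search would need to examine grows.

```python
# With a branching factor b (average legal moves per turn) and a
# look-ahead of d moves, a naive exhaustive search examines roughly
# b**d positions. The figures below are approximations for illustration.
BRANCHING = {"chess": 35, "go": 200}

def positions_to_search(branching_factor: int, depth: int) -> int:
    """Leaf positions in a full game tree of the given depth."""
    return branching_factor ** depth

for game, b in BRANCHING.items():
    for depth in (4, 8, 12):
        print(f"{game:>5}: depth {depth:>2} -> ~{positions_to_search(b, depth):.1e} positions")
```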
