networks attempt to mimic the human brain through a combination of data inputs, weights, and bias.4 Consisting of multiple layers of interconnected nodes, each building on the previous layer to refine and optimize the prediction (see the short sketch after the main text below), deep learning allows for unsupervised learning. Unlike machine learning, which requires pre-labelled datasets, deep learning can categorize data itself using common characteristics. Thanks to this, Artificial Intelligence using the deep learning subset seems to be ‘intelligent’: it acquires new knowledge, applies what has been learnt, and displays some autonomy.

However, despite fulfilling the base requirements set for intelligence, many experts consider deep learning to be less than intelligent. While the sophisticated neural networks used in deep learning are useful tools for completing a specific function (such as powering driverless cars), they are unable to perform tasks outside this function. In addition, these algorithms struggle to grasp ideas which cannot be categorized easily, or which can be considered abstract. As the cognitive psychologist Gary Marcus wrote, current deep learning technology is ‘likely to face challenges in acquiring abstract ideas’ and has ‘no obvious ways of acquiring logical inferences’.5 Ultimately, all AI can be considered simply highly trained algorithms, lacking any understanding of the purpose of the algorithm or of any concepts beyond its designated tasks, only executing specific code dedicated to those tasks. This is a type of AI known as weak AI, under which all current AI falls. Weak AI performs a single task by simulating human behaviour; it is a very efficient, specialized tool. In short, weak AI acts intelligently, but is actually narrow in scope and does not display genuine intelligence. As a result, none of our current AI can be considered truly intelligent.

But can AI become intelligent in the future? AI with general intelligence (human-level intelligence) is referred to as strong AI. The key distinction between behaving intelligently and thinking intelligently is consciousness. Before machines can think and exhibit general intelligence, they must have consciousness, which allows them to think, reason, and understand the instructions they are performing. So the question becomes: is it possible for consciousness to be artificially created?

Alan Turing famously explored a similar question in 1950. Turing believed that there had to be a practical method for detecting artificial intelligence, and devised the Turing Test (also known as the Imitation Game).6 Turing’s test involved two subjects, one human and the other a machine, along with a human interrogator. The interrogator is in a room separated from the other person and the machine, and knows the human and the machine only by the labels ‘X’ and ‘Y’. The objective of the game is to ask questions to determine which individual is the human, and each individual must answer any questions put to them. Turing claimed that the machine can be considered ‘thinking’ if the interrogator cannot tell which individual is the human.

Unfortunately, this test has a number of flaws with regard to defining intelligence and consciousness. The simplest counter-argument is the Chinese Room Argument, published in 1980 by John Searle.7 Imagine a person, alone in a room, following a computer programme for responding to Chinese characters slipped under the door.
This individual understands nothing of Chinese, but by following the programme for manipulating symbols just as a computer does, he sends appropriate strings of Chinese
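
A minimal sketch, in Python with the standard NumPy library, of the layered computation described in the opening paragraph above: data inputs pass through successive layers of interconnected nodes, each layer combining weights and a bias before handing its output to the next. The layer sizes, random weights, and ReLU activation are illustrative assumptions only, not details taken from the essay or its sources.

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, bias):
    # One layer of interconnected nodes: weighted sum of the inputs plus a bias,
    # passed through a simple non-linearity (ReLU).
    return np.maximum(0.0, weights @ inputs + bias)

# Three stacked layers (4 -> 8 -> 8 -> 2 nodes), each building on the previous one.
x = rng.normal(size=4)                          # data inputs
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # weights and bias, layer 1
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # weights and bias, layer 2
w3, b3 = rng.normal(size=(2, 8)), np.zeros(2)   # weights and bias, output layer

h1 = layer(x, w1, b1)
h2 = layer(h1, w2, b2)
prediction = w3 @ h2 + b3                       # final layer produces the prediction
print(prediction)

In a genuine deep-learning system the weights and biases would be learned from data rather than drawn at random; the sketch only illustrates how each layer builds on the previous one to produce a prediction.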

4 Education 2020.
5 Marcus 2012.
6 Oppy 2021.
7 Cole 2020.
