Can artificial intelligence ever be considered truly intelligent?
Yusuf Hassan
In the 21st century, artificial intelligence (AI) is prevalent in our daily lives. It has become a useful tool for all kinds of projects, ranging from essay-writing websites to diagnostic tools for cancer. It is even used in our justice system, aiming to reduce crime rates with predictive policing algorithms. However, is it correct to call this technology intelligent? The answer is not simple. While AI has advanced enormously over decades of research, we have yet to see an algorithm that rivals the human brain. Despite this, some areas of research show promise, with AI displaying the ability to acquire and apply knowledge, as well as the capacity to learn. But is this intelligence?

The overall goal in the field of AI is the development of a machine capable of acting in an ‘intelligent manner’. The psychologist Robert J. Sternberg gave the following definition of intelligence: ‘Intelligence is the ability to learn from experience and to adapt to, shape, and select environments.’1 Most definitions agree that for an entity to be considered ‘intelligent’, it should be able to learn from, react to, and solve the problems that it encounters. It could be argued that, thanks to the sub-sets of AI, these conditions have already been met.

While there are many sub-sets, the most important one for testing intelligence is machine learning. AI is theoretically capable of learning and applying its knowledge using machine learning, in which a computer is ‘trained’ on data to recognize patterns, for example, identifying a car in an image; a brief sketch of such a training loop appears at the end of this essay. However, this is an imperfect system: the algorithm is dependent on the data it is given, and is therefore vulnerable to any bias, implicit or otherwise, present in the dataset. According to Heinrich Jiang, a research scientist at Google, ‘Datasets [used in machine learning] often contain biases which unfairly disadvantage certain groups.’2 A common example is facial recognition, where recent research has shown that algorithms display divergent error rates across demographic groups, with the poorest accuracy consistently found for individuals who are black, female, and 18–30 years old. In general, the algorithms performed worst on darker-skinned individuals, implying that some level of implicit racial bias is present,3 most likely caused by the under-representation of such individuals in the training datasets.

While the trained knowledge can be stored and applied again to other datasets, it is limited to a very narrow set of similar unseen problems. If I were to rotate the image of the car 90 degrees, or present a very different car, the algorithm might no longer recognize that a car was present at all, as the sketch below also illustrates. Overall, while the machine-learning process does incorporate both learning and the application of what has been learnt, it lacks self-sufficiency. It must be meticulously trained by humans, with its datasets carefully chosen, and therefore cannot be considered a sign of intelligence.

However, a sub-set of machine learning exists, called deep learning, which could provide a solution to this issue. Deep learning neural
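The following is a minimal sketch, not part of the essay's cited research, of the training-and-testing process described above. It substitutes scikit-learn's small hand-written digits dataset for the car example (an assumption made purely for illustration) and then rotates the test images 90 degrees, reproducing the generalization failure discussed in the essay.

```python
# A minimal sketch of 'training' a pattern recognizer, then showing that
# its learned knowledge does not survive a simple 90-degree rotation.
# Digits stand in for the essay's car example (an illustrative assumption).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()                               # 8x8 grey-scale images
X = digits.images.reshape(len(digits.images), -1)    # flatten to feature vectors
X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)                          # the 'training' step

# Accuracy on unseen but similar images is high...
print("upright accuracy:", accuracy_score(y_test, model.predict(X_test)))

# ...but rotating the very same images 90 degrees breaks the learned patterns.
rotated = np.rot90(X_test.reshape(-1, 8, 8), k=1, axes=(1, 2))
print("rotated accuracy:",
      accuracy_score(y_test, model.predict(rotated.reshape(len(X_test), -1))))
```

Run as written, a model of this kind typically scores well on the upright test images and then drops sharply on the rotated copies: the stored ‘knowledge’ is tied to the exact orientation of the training data, which is the narrowness the essay describes.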
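Similarly, the divergent error rates in the facial-recognition example can be made concrete with a small per-group audit. The sketch below uses entirely synthetic data and invented group labels, both assumptions of this illustration rather than results from the cited research, to show how disaggregating the error rate exposes a model that errs more often on one group.

```python
# A sketch of auditing per-group error rates. All data here is synthetic;
# the groups 'A' and 'B' are hypothetical labels, not real demographics.
import numpy as np

rng = np.random.default_rng(0)
groups = np.array(["A", "B"] * 500)        # hypothetical demographic label
y_true = rng.integers(0, 2, size=1000)     # true match / non-match outcomes

# Simulate a biased model: it misclassifies group B far more often.
flip = rng.random(1000) < np.where(groups == "B", 0.30, 0.05)
y_pred = np.where(flip, 1 - y_true, y_true)

for g in ("A", "B"):
    mask = groups == g
    err = np.mean(y_pred[mask] != y_true[mask])
    print(f"group {g}: error rate {err:.2%}")
```

An overall error rate of roughly 17 per cent would look acceptable in aggregate while hiding a several-fold gap between the two groups; comparisons of this kind are how the disparities Jiang describes are detected.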
1 Sternberg 2012: 19.
2 Jiang 2020: 702–712.
3 Anonymous 2022: 1.