Artificial Intelligence (AI) and the future of Medicine: ethical, technical and patient care considerations
Zaki Kabir
What is artificial intelligence?
Artificial intelligence (AI) was a term first coined by John McCarthy in 1956, when he held the first academic conference on the subject. However, the definition of artificial intelligence has never reached anything like a consensus over the years. Computer scientist Larry Tesler went so far as to say 'Artificial intelligence is whatever hasn't been done yet'. That being said, a broad working definition has emerged, as described by Dr Tom Day: AI is a machine or computer that can mimic 'cognitive' functions (Day, 2022). Due to the breadth of any potential definition of AI, other terms are used to
(Figure 1 - The relationship between artificial intelligence, machine learning and deep learning, with examples of machine and deep learning - Day, 2020)
delineate between the different technologies. Fig. 1 diagrammatically demonstrates the relationships between the different terms. Machine learning (ML) was coined by Arthur Samuel, a computer scientist working at IBM, and is defined as 'the use of computer programs that automatically improve with experience and over time become more successful in their defined task.' Deep learning (DL) is a specific type of ML that uses neural networks arranged into many layers (typically more than five, up to many hundreds). Each layer can extract abstract and high-level features from the input data, allowing complex interpretation and prediction from the supplied data, for example, image classification in the field of computer vision (Day, 2020). Computer vision, the machine interpretation of images, is today dominated by DL methods.
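To make the idea of 'layers' concrete, the sketch below shows a toy feed-forward neural network in plain Python: data passes through a hidden layer and an output layer, each computing weighted sums of its inputs. The weights and inputs here are arbitrary, made-up values for illustration only; a real network would learn them from data and would typically have far more layers and units.

```python
def relu(values):
    # A common activation function: negative values are clipped to zero,
    # which lets layers build up non-linear features.
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    # Each output unit is a weighted sum of all inputs plus a bias term.
    return [sum(w * x for w, x in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

# Hypothetical tiny network: 3 inputs -> 2 hidden units -> 1 output.
# These weights are invented for the example, not learned.
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, 0.1]
output_w = [[1.0, -1.0]]
output_b = [0.2]

def forward(x):
    # A 'forward pass': the input flows through each layer in turn.
    h = relu(layer(x, hidden_w, hidden_b))
    return layer(h, output_w, output_b)

print(forward([1.0, 2.0, 3.0]))
```

In a deep network, stacking many such layers is what allows early layers to detect simple patterns (edges in an image, say) and later layers to combine them into abstract features such as 'tumour-like region'.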
In the past ten years, DL and computer vision have seen a huge surge in academic interest and research, as part of what is known as 'the AI explosion'.
This ‘AI explosion’ is the result of three main factors:
1. The aforementioned development of multi-layered neural networks, or deep learning.

2. The exponential increase in computing power and the establishment of large-scale data collection. Traditionally, computers relied on CPUs (Central Processing Units), which essentially carry out one calculation at a time. However, the development and integration of the GPU (Graphical Processing Unit) has enabled a step change in computing power: GPUs can carry out many calculations simultaneously, making them far better suited to training neural networks than their predecessors.

3. The advent of social media, search engines, CAPTCHA and even simple things like online shopping (where personal preferences are catalogued) has given technology companies access to a huge amount of labelled data, allowing them to build neural networks for a vast array of purposes.
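The CPU-versus-GPU point in item 2 can be illustrated in miniature. The sketch below is not real GPU code; it simply uses a Python thread pool to stand in for many processors working on independent pieces of the same task, which is the pattern GPUs exploit. The `scale` function and the pixel values are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def scale(pixel):
    # An independent per-element calculation, like the work
    # one GPU core would do on one pixel of an image.
    return pixel * 2

pixels = [1, 2, 3, 4, 5, 6, 7, 8]

# CPU-style: a single loop handles one element after another.
sequential = [scale(p) for p in pixels]

# GPU-style: every element is handed to a worker at once.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(scale, pixels))

# Both routes give the same answer; the parallel route scales
# to millions of elements when real hardware does the work.
print(sequential == parallel)
```

Training a deep network involves exactly this kind of workload, millions of identical, independent arithmetic operations, which is why the GPU was such a pivotal enabler of the AI explosion.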