Can artificial intelligence be intelligent?
patterns with which it can then associate outputs based on its calibrated inputs (C 2018). This training knowledge can be stored and applied again, but only to a limited set of very similar unseen problems. Although this process incorporates learning and applying what has been learned, it lacks self-sufficiency, as the system must be exogenously trained by humans. However, a sub-branch of Machine Learning called Deep Learning allows for unsupervised training (Brownlee 2016). Deep Learning trains itself through layers of algorithms in an Artificial Neural Network, calibrating itself to a large volume of data. This allows a degree of calibrated supervision of the learning process, and the AI continuously improves its internal state. On the one hand, Deep Learning subsets seem to constitute intelligence: there is an acquisition of new knowledge, an application of what has been learnt to its algorithms and a degree of autonomy. However, Cambridge Professor Jon Crowford argues in his blog that deep machine learning is applied data science, not AI, and that ‘deep learning still isn't intelligent, though it sure is artificial’ (Crowford 2018). Sophisticated calibrated neural nets that can identify emotions in crowds, or that power driverless cars, are useful tools, but neither one could perform the other’s job. Ultimately, AI systems are just trained algorithms: they execute specific code dedicated to a specific task without knowing the purpose of the algorithm they are performing, why they are executing it, or even that they are executing code at all. In this way they lack understanding; they are behaving intelligently but are not thinking intelligently, or thinking, or even conscious.

Any current AI, whether it counts ants or beats humans at Go, is known as Weak AI. Weak AI performs a single task by simulating human cognition; it is a highly efficient, specialized tool (Nicholson n.d.). The key distinction between behaving intelligently and thinking intelligently is consciousness. Before machines can think and become genuinely intelligent, they must have consciousness. Consciousness is required to think, reason and understand, not just to perform instructions. In short, Weak AI acts intelligently but is actually narrow in scope, and until AI becomes conscious it will not exhibit genuine intelligence. But can consciousness be engineered?

When Alan Turing approached the question ‘can machines think?’, he theorized a machine consisting of an infinite strip of squares holding 1s and 0s, with a reading head that can read or write a 1 or 0 in a single square and then move one square to the left or right (Mullins 2012). All computers and computer software can be described in this framework, so whether you are running Word on a desktop or Minesweeper on a MacBook, you are fundamentally executing a number of algorithms on a so-called Turing Machine. Turing thought there might be a practical, necessary test for detecting artificial intelligence (Bansal n.d.). Turing’s Test involved two subjects, one human and the other a Turing Machine, as well as a human observer. The observer could ask as many questions as they liked, and the subjects would answer via an anonymous text-based interface. When the test ended, the observer would pick the subject they thought was human. The principle being that, if the machine could
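To make the strip-and-head model described above concrete, the following is a minimal sketch of a Turing Machine in Python. The sparse-dictionary representation of the infinite strip, the state names and the example transition table (a unary ‘add one’ program) are illustrative assumptions of this sketch rather than details taken from Turing or the sources cited above.

from collections import defaultdict

def run(transitions, initial_tape, start_state="scan", halt_state="halt",
        max_steps=10_000):
    # The infinite strip is stored sparsely: any square never written reads as 0.
    tape = defaultdict(int, enumerate(initial_tape))
    state, head = start_state, 0
    for _ in range(max_steps):      # guard against machines that never halt
        if state == halt_state:
            break
        symbol = tape[head]                                # read the current square
        write, move, state = transitions[(state, symbol)]  # look up one rule
        tape[head] = write                                 # write a 1 or 0 back
        head += move                                       # move one square left (-1) or right (+1)
    return [tape[i] for i in range(min(tape), max(tape) + 1)]

# Example program: a number n written as n consecutive 1s gains one more 1
# (unary increment). The head scans right over the 1s and writes a 1 on the
# first blank (0) square it meets, then halts.
increment = {
    ("scan", 1): (1, +1, "scan"),
    ("scan", 0): (1, +1, "halt"),
}

print(run(increment, [1, 1, 1]))   # prints [1, 1, 1, 1]: three becomes four

Even this toy machine illustrates the point made above: the program ‘knows’ nothing about numbers or incrementing; it only looks up one rule at a time and moves the head.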