The ethics of artificial intelligence
Gabriel Welham
Artificial intelligence is the field devoted to building artificial animals and people, or at least artificial creatures that 'appear' to be people. Owing to the moral problems this project raises, it is of considerable interest to many philosophers, as shown by the numerous attempts by philosophers to demonstrate that the goal is in fact unattainable. At the same time, many of the core formalisms and techniques used in AI remain in frequent use in present-day philosophy: intentional logics suitable for modelling doxastic attitudes and deontic reasoning; inductive logic, probability theory, and probabilistic reasoning; practical reasoning and planning; and so on. For this reason, some philosophers conduct AI research and development as philosophy.

Officially, artificial intelligence started in 1956, at a conference held at Dartmouth College in Hanover, New Hampshire, hosted by John McCarthy and Marvin Minsky. At this historic conference, McCarthy, imagining an extravagant collaborative effort, brought together top researchers from numerous fields for an open-ended discussion of artificial intelligence, a term he coined at the very event. Sadly, the conference fell short of McCarthy's expectations: people came and went as they pleased, and the participants failed to agree on standard methods for the field. Despite this, everyone wholeheartedly shared the sentiment that AI was achievable. The significance of the event can hardly be overstated, as it catalysed the next twenty years of AI research.

Although the term 'artificial intelligence' was coined in 1956 at the conference, the idea was in play well before then. For example, in his Mind paper of 1950, Alan Turing asks the question: 'Can a machine think?' Here Turing is talking about standard computing machines: machines capable of computing functions on the natural numbers.
Turing argues, however, that a better question is: 'Can a machine be linguistically indistinguishable from a human?' Specifically, he proposes a test, the 'Turing Test' (TT, as it is now known). In the TT, a woman and a computer are sequestered in sealed rooms, and a human judge, in the dark as to which of the two rooms contains which contestant, puts questions to the two by text (by teletype, in the original formulation). If, on the strength of the returned answers, the judge can do no better than 50/50 when delivering a verdict as to which room houses which player, we say that the computer has passed the TT. Passing in this sense operationalizes linguistic indistinguishability.

Another important aspect of the philosophy related to AI is computer ethics. Computer ethics originated in the 1940s and still bears much relevance in today's society, as this field concerns how one ought to act in situations involving computer technology (the 'one' here being a human being). Computer ethics is not to be confused with robot ethics, however, where one is confronted with such prospects as robots making autonomous and difficult decisions, including decisions that in some scenarios may not be morally permissible. If one could discover a way to engineer a robot with an aptitude for ethical reasoning and decision-making, one would also be engaging in philosophical artificial intelligence.
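As an aside, the Turing Test's 50/50 pass criterion, under which the computer passes when the judge's identifications are no better than chance, can be expressed as a small program. The following is a minimal illustrative sketch, not anything Turing himself proposed; the function name and the verdict representation are assumptions made for the example.

```python
def passes_turing_test(verdicts):
    """Decide whether the computer passes the TT.

    verdicts: a list of booleans, one per round of the game,
    True when the judge correctly identified which room held
    the computer. The computer passes when the judge's overall
    accuracy is no better than chance (50%).
    """
    if not verdicts:
        raise ValueError("need at least one round of verdicts")
    accuracy = sum(verdicts) / len(verdicts)
    return accuracy <= 0.5

# A judge who is right only half the time cannot tell the
# contestants apart, so the computer passes.
print(passes_turing_test([True, False, True, False]))  # True
# A judge who is reliably right defeats the computer.
print(passes_turing_test([True, True, True, False]))   # False
```

The point of the sketch is simply that 'passing' is a statistical notion: it is defined over many rounds of judging, not over any single exchange.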