
The ethics of artificial intelligence

There is a plethora of approaches towards Moral AI (as it is known). Wallach and Allen, for instance, make an eloquent and forceful case in their book Moral Machines: Teaching Robots Right from Wrong that we should seriously consider granting machines moral decision-making power. Their argument is that machines are already deployed in situations in which their decisions have a moral impact. Hence, we should endow them with sensitivity to the moral dimensions of the situations in which these increasingly autonomous machines will inevitably find themselves. The machines they refer to may be anything from software agents and softbots to physical robots, or combinations of these. Because such systems are interconnected and open, situations may arise that were neither desirable nor foreseeable when the systems were designed. Whether we can build such systems is still an open question, and if we were to engineer artificially moral systems, would they count as truly moral systems? Wallach and Allen conclude by noting that human and artificial morality will be different, but that there is no a priori reason to rule out the notion of artificial morality. Moreover, they argue that the very attempt to construct artificial morality will prove worthwhile for all involved.

The most general framework for building machines that can reason ethically consists in equipping the machines with a moral code. This requires that the formal framework the machine uses for reasoning be expressive enough to represent such codes. The field of Moral AI, for now, is not concerned with the source or provenance of such codes. The source could be humans, and the machine could receive the code directly (via explicit encoding) or indirectly (by reading it). Another possibility is that the code is inferred by the machine from a more basic set of laws. We assume that the robot has access to some such code, and we then try to engineer the robot to follow that code under all circumstances, while making sure that the moral code and its representation do not lead to unintended consequences.

Deontic logics are the class of formal logics that has been studied most for this purpose. Abstractly, such logics are concerned mainly with what follows from a given moral code. The engineering task is then to match a given deontic logic to a moral code (is the logic expressive enough?), which has to be balanced against the ease of automation. Bringsjord et al. (2006) provide a blueprint for using deontic logics to build systems that can perform actions in accordance with a moral code. The role deontic logics play in the framework offered by Bringsjord et al. can best be understood as striving towards Leibniz's dream of a universal moral calculus. Deontic logic-based frameworks can also be used in a fashion analogous to moral self-reflection. In this mode, logic-based verification of the robot's internal modules can be carried out before the robot ventures out into the real world. Govindarajulu and Bringsjord present an approach, drawing on formal program verification, in which a deontic logic-based system is used to verify that a robot acts in a certain ethically sanctioned manner under certain conditions. Since formal-verification approaches can assert statements about an infinite number of situations and conditions, they may be preferable to letting the robot roam around an ethically charged test environment and make a finite set of decisions that are then judged for their ethical correctness.
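To make the abstract talk of 'what follows from a given moral code' slightly more concrete, a minimal sketch in the notation of standard deontic logic (SDL) is given below; the particular propositions are invented for illustration and are not drawn from Bringsjord et al.

$$
\begin{aligned}
P\varphi \;&\equiv\; \neg O\neg\varphi && \text{(permission defined from the obligation operator } O\text{)}\\
O(\varphi \rightarrow \psi) \;&\rightarrow\; (O\varphi \rightarrow O\psi) && \text{(obligations are closed under logical consequence)}\\
O\varphi \;&\rightarrow\; P\varphi && \text{(whatever is obligatory is at least permitted)}
\end{aligned}
$$

Given a code containing $O(\textit{restrain patient} \rightarrow \textit{notify supervisor})$, in a situation where $O(\textit{restrain patient})$ holds the second schema licenses the inference to $O(\textit{notify supervisor})$: the machine's duty to notify follows formally from its code rather than from ad hoc programming, which is precisely the sense in which such logics capture what a moral code entails.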
Personally, I find the view of the American philosopher Daniel Dennett to be the most interesting and relevant to the ongoing debate about AI. He claims that philosophy and AI are not merely separate disciplines with certain parts bound to one another, but that AI is philosophy. Dennett says exactly this: ‘I want to claim that AI is better viewed as sharing with traditional epistemology the status of being a
