Should AI systems be given rights?
Taylor Lai
Rights are commonly understood as moral or legal entitlements to have or do something. Broadly speaking, rights are sets of rules that permit or forbid someone from doing something to someone or something. Examples include human rights (a broad term encompassing rights such as freedom of speech, freedom of movement and the right to life), animal rights, and natural rights.

The question of whether AI systems should be given rights is complex. However, as AI systems evolve to become more powerful in the future, as they most certainly will, this question may become unavoidable. Humans have rights because we are conscious and sentient, with the capacity to suffer, feel pain, and be aware of it. Rights are set out by humans to protect us from such suffering: the human rights system was created to prevent human suffering (Andorno and Baffone 2014: 183). Generally, the purpose of human rights is to ensure that people can live a life worthy of a human being, and they should be applied fairly and equally to every human, regardless of race, ethnicity, sex, and so on.

There are also arguments for giving certain types of animals a range of rights, such as the right not to be hunted or killed for food (BBC 2014). In most Western countries, animals such as horses, dogs and cats enjoy these rights. The fact that human beings confer rights on some animals illustrates our ability and willingness to confer rights on non-human entities.
How about AI systems? Should humans confer rights on AI systems as they do on certain animals? Does this depend on the intrinsic characteristics of AI systems, or on other factors? The following is an exploration of these questions.
Currently, much discussion of the ethics of AI is focussed on what AI should do (Gunkel 2018a: 87) and on whether AI systems can be moral agents. If AI systems can be held morally responsible for their actions, we must also grant them the necessary legal rights in a court of law. For example, if an AI system hurts a human and is prosecuted, that AI system must have the right to a fair trial, which means it must have the right to defend itself, the right to a lawyer, and the right to be presumed innocent until proven guilty.

If AI systems as moral agents should enjoy some rights, the question becomes whether AI systems should be considered moral agents in the first place. While AI systems can take autonomous actions, Hakli and Makela (2019: 269) argue that moral agency depends on both autonomy and authenticity. If an agent has all the 'attitudes, values and capacities' that a moral agent has, but did not acquire them by itself, this problem of 'authenticity' means it cannot be considered a moral agent or be held morally responsible. Robots are designed to have certain characteristics, and no matter how human-like they are, on this view they should not be held morally responsible. Following this line of argument, if an AI system cannot be considered a moral agent, there is little need to give it rights. However, this may change if AI systems attain artificial general intelligence and come to modify their own characteristics. Once AI systems can take autonomous actions and acquire values by themselves, would they then be considered moral agents?