
AI and rights

On the other hand, Bryson (2018) argues against giving AI rights, as AI is merely a tool, an artefact, created and maintained by humans. As such, humans can choose not to make AI a moral subject. On this view, it is desirable that humans do not build AI systems as moral agents or moral patients: AI should be built so that it does not desire rights and does not suffer, so as not to disrupt the current social and legal framework based on human ethics (Bryson 2018). This position will be challenged, however, if AI develops and evolves beyond humans’ initial intentions. Can we assume humans will have full control over the characteristics of AI systems? Will AI systems evolve to a stage at which they start demanding rights?

Even with AI systems in their current state, there is a problem in analysing whether AI should be given rights. This lies partly with ‘terminological complications’, or the ambiguity of terms such as consciousness, sentience, and pain (Gunkel 2018a: 92). No scientist, philosopher, or AI researcher can provide an explanation of what these properties are, let alone implement them in an AI system. Although dictionaries can give us a rough description of consciousness, ‘the faculty or capacity from which awareness of thought, feeling, and volition and of the external world arises’ (Oxford English Dictionary 2024), we have a limited understanding of what it is. Meanwhile, Bostrom and Yudkowsky (2014) outline two criteria – sentience and sapience – as fundamental to whether AI systems have moral status (i.e. whether they should be given rights). Sentience is the capacity for qualia, such as the capacity to feel pain. Sapience is the set of capacities (e.g. self-awareness, responsiveness) that higher intelligence is expected to have. However, even evaluating whether AI systems have sentience and sapience is difficult. At a high level, if an AI were programmed to have human emotions and feelings, we might be inclined to grant it basic rights; but what are those emotions and feelings? Some robots already have a basic sense of subjective good, as they are programmed to take actions that avoid negative sensory experiences, so functionally they are taking the same courses of action as humans avoiding pain (Marx and Tiefensee 2015: 85). Does this make these robots sentient? Humans are not able to fully understand fellow humans’ internal states of mind, and therefore cannot fully understand others’ pain and suffering. Would observing robots taking actions to avoid pain, or robots programmed to show reactions similar to human reactions when in distress, lead humans to believe that robots can feel pain?

Further, even before the advances in AI of the past decade, social robots, which are specifically designed to resemble the human body, behave autonomously, evoke emotional responses, and build attachments with humans, had already prompted discussion of whether they should be given legal rights (Darling 2012). This is separate from the question of whether robots are sentient, and is linked more to how humans interact with robots. It is along this line of thinking that Coeckelbergh (2018) promotes an approach that focusses on relations between humans and robots (a ‘phenomenological’ approach). In other words, the question ‘should I give rights to robots?’ is closely linked to a person’s social relations with robots, or to whether robots are part of a moral community with humans (Coeckelbergh 2018: 149).
Another way to present this is to invert the usual order in which ontology (the study of what something is) precedes ethics (whether or not we should treat it ethically by giving it rights). If ethics precedes ontology, ‘moral consideration’ is no longer ‘intrinsic’ (the entity’s characteristics do not decide whether or not it should have rights), but ‘extrinsic’ (the rights

