Semantron 25 Summer 2025

AI and rights

characterize what type of entity it is) (Gunkel 2018a: 95). This way of thinking, as proposed by the philosopher Emmanuel Levinas, holds that an entity's role in human social relations and situations is more of a determining factor of that entity's moral status than its intrinsic qualities are. The presence of 'the Other' (including robots and AI systems) in a social setting may already prompt humans to an 'ethical response' (Gunkel 2018b: 167), i.e. to treat it as a person and not merely a thing.

There is also an additional argument for AI rights, along Kantian lines. We grant moral rights to AI systems (or even robots without AI systems) not because they have intrinsic characteristics that demand that we give them rights, but because not mistreating them (i.e. giving them the right not to be mistreated) is a reflection of ourselves as moral agents (Coeckelbergh 2020: 56). We do not abuse robots or AI systems, because doing so corrupts our moral character. Levinas' and Kant's positions both provide theoretical support for the phenomenon of many humans treating animals as pets and companions, and giving them rights. If we apply Levinas' and Kant's theoretical arguments to another category of the Other, it would be conceivable to give some robots and AI systems rights, particularly those with strong social relations and emotional bonds with humans.

All the arguments above rest on the assumption that humans have control over AI systems (the principle of human superiority, or anthropocentrism). As is human nature, humans will always be inclined to give rights to entities similar to us, including animals and indeed AI systems. At present, all AI systems are owned and managed by humans, but this may change in the future. At some point, AI systems may overtake and surpass the intelligence and learning speed of the human race, making them the dominant species (as humans once were).
Will we be giving rights to AI systems in the future, or will the tables turn and will it be the AI systems giving rights to us?

Bibliography

Andorno, R. and Baffone, C. (2014) 'Human Rights and the Moral Obligation to Alleviate Suffering', in R. Green and N. Palpant (eds.), Suffering and Bioethics. Oxford, pp. 182-200.

BBC (2014) Animal rights. Available at: https://www.bbc.co.uk/ethics/animals/rights/rights_1.shtml (accessed 16 February 2025).

Bostrom, N. and Yudkowsky, E. (2014) 'The Ethics of Artificial Intelligence', in K. Frankish and W.M. Ramsey (eds.), The Cambridge Handbook of Artificial Intelligence, pp. 316-334. https://nickbostrom.com/ethics/artificial-intelligence.pdf (accessed 17 February 2024).

Bryson, J. (2018) 'Patiency is not a virtue: the design of intelligent systems and systems of ethics', Ethics and Information Technology 20: 15-26. https://doi.org/10.1007/s10676-018-9448-6.

Coeckelbergh, M. (2018) 'Why Care About Robots? Empathy, Moral Standing, and the Language of Suffering', Kairos. Journal of Philosophy & Science 20: 141-158. DOI 10.2478/kjps-2018-0007.

Coeckelbergh, M. (2020) AI Ethics. Cambridge, MA.

Darling, K. (2012) 'Extending legal protection to social robots', IEEE Spectrum. https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots (accessed 15 February 2025).

