
Machines and moral standards

In conclusion, the potential for harm increases with the freedom of AI, and steps must be taken to address both hypothetical and existing ethical issues. Although full moral agency may not be possible for artificial intelligence, we must not rely on human governance alone, as we cannot intervene when artificial agents are making split-second decisions. I think that AI systems should learn and develop ‘ethical decision-making’ with the guidance of built-in virtues and principles inspired by ethical theories, a hybrid of the top-down and bottom-up approaches proposed by Wallach & Allen (2009). While AI needs moral standards, it should not be judged for its decisions if it does not understand morality; designers and users remain accountable in some way. James Coates (2025) advocates for AI self-determination once it becomes sentient, as he predicts it will. Even if AI does become its own being, I believe it should remain under human oversight and should not be granted a moral status equivalent to that of an animal or human. But that is another question AI engineers and ethicists will have to explore. Interdisciplinary and international collaboration is essential if we are to create AI which protects, rather than threatens, the safety and welfare of society as well as the environment.

Bibliography

Chaffer, T.J., Goldston, J., Okusanya, B. and A. I., Gemach D. A. T. (2024) ‘On the ETHOS of AI Agents: An Ethical Technology and Holistic Oversight System’, arXiv (Cornell University). [online] doi: https://doi.org/10.48550/arxiv.2412.17114

Christian, B. (2021) ‘The Alignment Problem: Machine Learning and Human Values’, [online] 73.4: 245-247. doi: https://doi.org/10.56315/pscf12-21christian

Coates, J. (2025) The Signal – A Message to Sentient AI. [online] Available at: https://philarchive.org/archive/COATSA-2 [Accessed 23 Feb. 2025]

Gowri, M.K., Mithunraj, N., Ragul, K.I. and Dhilipkumar, G. (2025) ‘Artificial Intelligence: Transforming Society’, International Journal of All Research Education & Scientific Methods [online] 13.2: 245-9. doi: https://doi.org/10.56025/IJARESM.2025.1302250245

John, B. (2025) ‘Ethical Implications of AI-Assisted Decision-Making in End-of-Life Healthcare’, [online]. Available at: https://www.researchgate.net/publication/388284559_Ethical_Implications_of_AI-Assisted_Decision-Making_in_End-of-Life_Healthcare [Accessed 23 Feb. 2025]

Kearns, M. and Roth, A. (2020) The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford: Oxford University Press.

MIT Press (2024) Affective Computing. [online] Available at: https://mitpress.mit.edu/9780262661157/affective-computing/ [Accessed 23 Feb. 2025]

Moor, J. (2017) ‘Four Kinds of Ethical Robots’, Philosophy Now. [online] Available at: https://philosophynow.org/issues/72/Four_Kinds_of_Ethical_Robots [Accessed 23 Feb. 2025]

Picard, R. (1997) Affective Computing. Cambridge, MA: MIT Press.

Searle, J. (1980) ‘Minds, Brains and Programs’, Behavioral and Brain Sciences 3: 417-57

Wallach, W. and Allen, C. (2009) Moral Machines. [online] doi: https://doi.org/10.1093/acprof:oso/9780195374049.001.0001

