
Machines and moral standards

But to what extent should machines have moral standards? Machines lie on a matrix with two dimensions: autonomy and ethical sensitivity (Wallach and Allen, 2009). They can be fully controlled by designers and users (operational morality), able to choose their responses (functional morality), or even have full moral agency. For example, AI agents can be programmed with rules to follow, or with built-in virtues that steer them away from unethical outcomes; the philosopher James Moor (2009) calls these implicit ethical agents. Kearns & Roth (2020) suggest that ethical constraints should be embedded directly into algorithms — however, working out what those constraints should be, and how to embed them, is no simple task. Alternatively, machines can use information to predict outcomes and construct a response; this requires them to analyse possible scenarios, or to detect a neglect of duty, in order to behave within moral standards. When faced with a dilemma, they can ‘calculate’ the response that minimizes harm and maximizes good, or choose to follow their duty. These are called explicit ethical agents. Full ethical agents are those with metaphysical characteristics such as consciousness, intentionality and free will, the very possibility of which is debatable.

I believe that the ‘level of morality’ a machine requires depends on what it is used for and on its abilities, although all autonomous agents will need moral standards. An autonomous military drone can kill, so it will need to act within ethical boundaries; while it appears to violate Kantian ethics, its ‘duty’ can be defined as its intended purpose. A self-driving train, I think, should be an explicit ethical agent, so that it can ‘reason’ and come up with its own creative response rather than follow rules, which may be difficult to apply to every scenario. A customer service bot, on the other hand, has no direct control over life and death and does not need the same level of ethical sensitivity. An algorithm could be ‘taught’ examples of ‘fair’ and ‘unfair’ so that it forms its own definition and acts fairly (Christian, 2020). Rosalind Picard (1997) argues that machines need not just laws, but values and principles to guide them: virtues are permanent and stable, whereas rules easily come into conflict with one another.

Machine morality is desirable, but are true artificial moral agents (AMAs) even possible, and do we need them? For AI to have full moral agency, it must be able to understand ethics. John Searle, in his Chinese room thought experiment, argues that, no matter how intelligent and sentient they may seem, computers cannot truly ‘understand’ anything; they are only capable of following algorithmic instructions. He rejects the claim that the human brain is merely a computer without metaphysical qualities. While artificial intelligence can mimic human decision-making, it does not, and may never, have the consciousness that allows us to be moral beings. If AI needs moral standards, would a toaster not need them too? After all, AI is only a more complex machine, and a toaster can cause harm. Machines also lack emotions, which are responsible for much of human decision-making. Even though designers and engineers can use precisely specified algorithms that emulate human reasoning (Kearns & Roth, 2019), human thought processes and intuitions are complex and difficult to understand, much less to replicate in a machine. Moreover, machines with moral agency may not be necessary to prevent harm.
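Setting the metaphysical questions aside for a moment, the idea of an explicit ethical agent that ‘calculates’ a least-harm response can at least be sketched in code. The following is a minimal, purely illustrative sketch, not drawn from any of the authors cited above: the action names, the harm and good scores, and the ‘duty’ flag are all invented for the example.

```python
# Toy 'explicit ethical agent': score candidate actions and pick the one that
# minimizes expected harm, subject to a hard constraint embedded directly in
# the decision procedure. All values here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float   # lower is better
    expected_good: float   # higher is better
    violates_duty: bool    # hard deontological constraint

def choose_action(candidates: list[Action]) -> Action:
    # Embedded constraint: actions that violate a defined duty are excluded
    # outright, regardless of how good their consequences look.
    permissible = [a for a in candidates if not a.violates_duty]
    if not permissible:
        raise ValueError("No permissible action available; defer to a human.")
    # Consequentialist 'calculation': minimize harm first, then maximize good.
    return min(permissible, key=lambda a: (a.expected_harm, -a.expected_good))

if __name__ == "__main__":
    options = [
        Action("brake hard", expected_harm=0.2, expected_good=0.7, violates_duty=False),
        Action("swerve", expected_harm=0.6, expected_good=0.9, violates_duty=False),
        Action("ignore obstacle", expected_harm=0.9, expected_good=1.0, violates_duty=True),
    ]
    print(choose_action(options).name)  # -> "brake hard"
```

Even in this toy form, the scores and the definition of ‘duty’ are supplied by whoever writes the program, which leads directly to the question of responsibility below.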
Assuming they are tools that follow programmed instructions, the moral responsibility for a decision should belong to those who created the algorithm, since any mistakes would be due to flaws in the algorithm. As with any tool, human oversight is necessary to prevent harm. The EU Artificial Intelligence Act states that ‘High-risk AI systems shall be designed and developed in such a way […] that they can be effectively overseen by natural persons during the period in which they are in use.’ Therefore, AI does not need to be moral in nature, and human governance should be implemented to reduce harm.
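What such oversight might look like in practice can also be pictured with a small sketch. The risk threshold, the reviewer callback and the function names below are assumptions made for illustration; they are one possible reading of ‘effectively overseen by natural persons’, not an implementation prescribed by the Act.

```python
# Illustrative only: a simple 'human in the loop' gate. Low-risk actions run
# automatically; high-risk actions are routed to a natural person for approval.
# The threshold and reviewer interface are invented for this sketch.

from typing import Callable

def execute_with_oversight(action: str,
                           risk_score: float,
                           human_review: Callable[[str], bool],
                           risk_threshold: float = 0.5) -> bool:
    """Return True if the action is carried out, False if a human blocks it."""
    if risk_score >= risk_threshold:
        approved = human_review(action)   # a person makes the final call
        if not approved:
            return False                  # the system defers to human judgement
    # ... carry out the action here ...
    return True

if __name__ == "__main__":
    # In a real system this callback would present the decision to an operator
    # rather than approving everything automatically.
    approve_everything = lambda action: True
    print(execute_with_oversight("issue refund", risk_score=0.2, human_review=approve_everything))
    print(execute_with_oversight("deny loan", ris_score := 0.9, human_review=approve_everything))
```

The design choice here is simply that accountability sits with the person reviewing high-risk actions and with those who set the threshold, not with the machine itself.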
