‘The greater the freedom of a machine, the more it will need moral standards.’ (Rosalind Picard)

Kenneth Lai

As artificial intelligence develops, it gains freedom: control over its own decisions and actions. The question of whether it needs moral standards is therefore becoming increasingly relevant. Given the potential for harm, Picard's statement seems obvious, but the issue is more complex than it first appears. Can AI be moral, and does it need to be, if it is not conscious? In this essay, I will explore the theme of morality in AI and how, with increasing autonomy, there comes increasing potential for harm.

Automation is becoming more prevalent each day, from service bots that answer calls to algorithms that flag potential criminals. While this arguably makes our lives easier, there will be unintended consequences: AI is unpredictable, and its mistakes can be catastrophic. What if a driverless train were faced with the trolley problem? Or an autonomous weapon had to make a similar decision on a larger scale? AI making split-second moral decisions is a plausible dilemma. But AI does not have to face such conundrums to cause harm; any miscalculation can have the same outcome. In 2007, a South African robotic cannon malfunctioned, killing nine soldiers. Robots do not need to be superintelligent or sentient to be a threat to humanity: if they can make decisions that affect human lives, they are a threat. If we give a machine a goal and set it free, it will do whatever it takes to accomplish that goal, even at the expense of humans and the planet. The science-fiction writer Isaac Asimov devised his Three Laws of Robotics precisely to prevent the robots in his stories from harming humans. The greater the freedom we grant machines, the more important such ethical guidelines become.

The ethical impact of AI is not merely a concern for the future. Already, algorithms trained on biased data reinforce and exacerbate existing inequalities. In 2020, Robert Williams, a black man, was wrongfully arrested after a facial recognition algorithm misidentified him as a watch-theft suspect. A study from the National Institute of Standards and Technology (Grother, Ngan and Hanaoka, 2019) found that the false positive rates of facial recognition algorithms used in law enforcement were significantly higher for African American and Asian faces than for white faces. Similar algorithms perpetuate bias when deciding which applicants to shortlist for a job or which cases to prioritize in hospitals. The use of autonomous decision-making in end-of-life healthcare (John, 2025) is problematic if it prioritizes efficiency over patient preferences and lacks the emotional support and empathy that patients need. Additionally, AI trained on data scraped from the internet can expose personal and sensitive information, failing to protect the privacy of those online, and it can be manipulated to serve malicious ends, such as cyberattacks.

What we need, therefore, is a balance between technological capability and ethics. It is difficult to remove the biases an algorithm has absorbed from its training data and its programmer, so moral standards must be built in from the start. As AI evolves at an exponential rate, outpacing the development of legal and ethical frameworks, designers and engineers must consider not only how to optimize their algorithms to achieve their goals efficiently, but also the consequences of their decisions and actions, if they are to 'hold paramount the safety, health, and welfare of the public', as the National Society of Professional Engineers' Code of Ethics demands.
