Bias in AI
There is no shortage of cautionary tales highlighting how AI has gone wrong in the past. The best thing we can do is try to learn from these stories so as to stop them happening again.

APPLE CARD
The technology behind Apple Card faced allegations of gender bias in its AI-driven credit assessment system.

AMAZON HIRING TOOL
Amazon developed an AI recruitment tool. Trained on resumes submitted over a 10-year period, the model favoured male applicants because the data reflected the historical male dominance of the tech industry.

OPTUM HEALTHCARE
AI from Optum, used by health systems to spot high-risk patients for follow-up treatment, favoured white patients and discriminated against black patients, having been trained on historical data from over 100 million patients where black people had received less medical attention in the past.

Despite coming from different sectors, these cases provide a cautionary tale for accountants: they highlight the risks of getting something wrong even where there was no intention to be biased. In a sense, AI reflects what humans already do, or at least what humans have already done, and training it on historical data magnifies and reinforces that behaviour. So the use of historical data that is already biased leads to further bias. Proxy bias can arise when no direct dataset exists for the attribute you want to measure, so a proxy is used instead; if that proxy is correlated with a protected characteristic, the bias follows it into the model.
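To make the mechanism concrete, here is a minimal sketch in Python of how proxy bias can emerge. It uses entirely synthetic data and an off-the-shelf logistic regression; the group sizes, correlation strength, and approval rates are illustrative assumptions, not figures from the cases above. The point it demonstrates is that a model can reproduce historical discrimination even when the protected attribute itself is never given to it, because a correlated proxy carries the same signal.

    # Illustrative sketch of proxy bias on synthetic data.
    # All numbers here are assumptions chosen for demonstration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute (e.g. gender), deliberately excluded from training.
    protected = rng.integers(0, 2, size=n)

    # A proxy feature that matches the protected attribute 90% of the time
    # (e.g. a hobby or postcode that skews heavily towards one group).
    proxy = np.where(rng.random(n) < 0.9, protected, 1 - protected)

    # Historical outcomes encode past human bias: group 1 was approved
    # far more often than group 0, regardless of merit.
    label = (rng.random(n) < np.where(protected == 1, 0.7, 0.3)).astype(int)

    # Train only on the proxy -- the protected attribute is never an input.
    model = LogisticRegression().fit(proxy.reshape(-1, 1), label)

    # The model still reproduces the historical disparity via the proxy.
    approval = model.predict_proba(np.array([[0], [1]]))[:, 1]
    print(f"P(approve | proxy=0) = {approval[0]:.2f}")
    print(f"P(approve | proxy=1) = {approval[1]:.2f}")

Running this prints approval probabilities of roughly 0.34 for one group and 0.66 for the other, even though the model never saw the protected attribute: simply dropping a sensitive field from the data is not enough to remove the bias it left behind.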