Technical Briefing: AI and Ethics

PART 3: ASSESSING THE RISKS

ENSURING AI IS FIT FOR PURPOSE

To ensure that AI is fit for purpose, we need to understand the risks and try to predict what might go wrong. These risks pose challenges to our ethical principles, but there are things we can do to mitigate them.

BIAS AND DISCRIMINATION

Biased results can lead to discrimination and impinge on the requirement to be objective. Consider the training data used and whether it is, itself, biased. Even if it is not, there is a risk of data poisoning, where an attacker tampers with the data used to train an AI model so that it produces undesirable outcomes. Question the provider of the AI on how they have ensured that results are not biased.
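As an illustration only, the minimal Python sketch below checks a training set for obvious representation and label-rate imbalances across a sensitive attribute. The records, group names and labels are illustrative assumptions, not anything prescribed by this briefing.

from collections import Counter, defaultdict

# Hypothetical training records: (sensitive_group, outcome_label)
records = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

counts = Counter(group for group, _ in records)   # records per group
positives = defaultdict(int)                      # positive labels per group
for group, label in records:
    positives[group] += label

for group, n in counts.items():
    share = n / len(records)
    rate = positives[group] / n
    print(f"{group}: {n} records ({share:.0%} of data), positive-label rate {rate:.0%}")

# A large gap in either the share of records or the positive-label rate is a
# prompt to investigate the data further, not proof of bias on its own.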

Consider the output of the AI and whether it appears to be biased. Train staff in cognitive and unconscious bias so that they can more easily identify appropriate data sets for training AI, and are alert to instances where the AI's results may be unduly biased.
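To illustrate an output-side check, the sketch below compares the rate of favourable outcomes an AI system produces for different groups (sometimes described as a demographic parity gap). The predictions and group labels here are made-up placeholders; in practice they would come from the system under review.

# 1 = favourable outcome produced by the AI system under review
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

def favourable_rate(preds, grps, target):
    """Share of favourable outcomes the system gives to one group."""
    selected = [p for p, g in zip(preds, grps) if g == target]
    return sum(selected) / len(selected)

rate_a = favourable_rate(predictions, groups, "a")
rate_b = favourable_rate(predictions, groups, "b")
print(f"Favourable-outcome rate, group a: {rate_a:.0%}")
print(f"Favourable-outcome rate, group b: {rate_b:.0%}")
print(f"Gap between groups: {abs(rate_a - rate_b):.0%}")

# A persistent gap between groups is a reason to question the AI provider
# about how the model was trained and validated.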
