Technical Briefing: AI and Ethics

Transparency and explainability

Global AI standards often group transparency and explainability together. Transparency means being able to show what happened in an AI system, whilst explainability concerns how a particular decision was made. Transparency links closely with the accountant’s fundamental principle of integrity, which requires being straightforward and honest. If a business is using AI, being transparent includes disclosing that fact. The “Recommendation of the Council on Artificial Intelligence” from the OECD states that AI actors (i.e. those involved in the use of AI) should commit to transparency and responsible disclosure regarding AI systems.

They should provide end users with meaningful information, appropriate to the context and consistent with the state of the art, in order to:

• foster a general understanding of AI systems, including their capabilities and limitations;
• make stakeholders aware of their interactions with AI systems, including in the workplace;
• provide, where feasible, plain explanations of the input, logic and other relevant factors, so that those affected can understand the output (a simple sketch of what this might look like follows this list); and
• provide information that enables those adversely affected to challenge the system’s output.
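
Purely as an illustration of the third point, the sketch below shows one way a simple, interpretable model could surface a plain-language breakdown of how each input influenced a decision. This example is not drawn from the OECD Recommendation or this briefing: the loan-style feature names, the toy data and the explain helper are hypothetical, and the code assumes scikit-learn and NumPy are available.

# Illustrative sketch only: hypothetical loan-approval features and toy data.
from sklearn.linear_model import LogisticRegression
import numpy as np

feature_names = ["income", "existing_debt", "years_at_employer"]

# Toy, hypothetical training data: one row per past applicant, columns match feature_names.
X = np.array([[55, 5, 4], [30, 20, 1], [70, 2, 10], [25, 15, 0.5]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Return a plain-language breakdown of how each input pushed the decision."""
    contributions = model.coef_[0] * applicant  # per-feature contribution to the model's score
    decision = "approved" if model.predict([applicant])[0] == 1 else "declined"
    lines = ["Decision: " + decision]
    for name, value, contribution in zip(feature_names, applicant, contributions):
        direction = "supported approval" if contribution > 0 else "counted against approval"
        lines.append("- " + name + " = " + str(value) + ": " + direction)
    return "\n".join(lines)

print(explain(np.array([40, 10, 2])))

The same idea extends to more complex models through post-hoc explanation tools; the point here is simply that the inputs and logic behind an output can be restated in terms an affected person can follow and, where necessary, challenge.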
