As we find ourselves at the intersection of AI and ethics, we must make sure we harness AI's power responsibly while upholding the integrity and trust central to the accounting profession. This technical briefing will take you through what AI is, how to use it ethically, and the risks and governance of using artificial intelligence.
AI and Ethics
TECHNICAL BRIEFINGS
AI is transforming how we work, offering unprecedented efficiency and insights, but also significant ethical challenges. How do we ensure AI operates fairly, transparently, and in alignment with professional standards? As we find ourselves at the intersection of AI and ethics, we must make sure we harness AI’s power responsibly while upholding the integrity and trust central to the accounting profession.
About accountingcpd
We are accountingcpd. We help you satisfy your CPD requirements, no matter which professional body you belong to. Our CPD can qualify as verifiable and is recognised by all major accountancy bodies. It's our job to make this easy for you, so everything you do is automatically tracked on your CPD Log. What's more, there will be no sudden price increases, no extra costs, no hidden charges – just hassle-free CPD in one easy package. We cover:
- Technical updates.
- Business advisory.
- Professional skills.
- Practice compliance.
Find out more about how we can offer you and your firm access to effective, flexible, online CPD all year round.
Copyright © August 2024 by Nelson Croom, publisher of accountingcpd.net. All rights reserved. Used with permission of accountingcpd. Contact info@accountingcpd.net for permission to reproduce, store or transmit, or to make other similar uses of this document.
Contents

PART 1: WHY ETHICS IN AI MATTERS
What do we mean by AI?
Accounting ethics
Wider ethical issues
Frameworks and principles for AI

PART 2: USING AI IN AN ETHICAL WAY
When to use AI
Creating financial reports
Undertaking audit work
The danger of bias

PART 3: ENSURING AI IS FIT FOR PURPOSE
Assessing the risks
General Data Protection Regulation
Ensuring accuracy
Taking responsibility
Transparency and explainability

PART 4: HYBRID INTELLIGENCE
Incorporating ethics into AI tools
Bias in AI
Sources of bias
AI ethics committees
Summary
PART 1: WHY ETHICS IN AI MATTERS

What do we mean by AI?
Let’s start by defining what we mean by AI. The OECD definition of AI states: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
A more user-friendly definition comes from Deloitte’s Chief AI Officer, Sulabh Soral: “At its most basic, AI is software that mimics and generates human behaviours – planning, generating ideas, understanding speech and visuals. Its ability to scale human intellect will have a profound impact.” Critically, for this review of the ethical considerations, it is important to note that unlike traditional computer programmes, where you must specifically tell the programme what to do in certain situations, AI is “trained” on a set of data, learning in a manner which might be thought to be akin to how a human learns.
AI AND ETHICS | PART ONE: WHAT DO WE MEAN BY AI?
Accounting ethics
Accountants around the globe are required to comply with ethical codes of practice. These are almost all based on the IESBA Code, and so share the same five core principles, which are set out in paragraph 110.1 A1 of the IESBA Code. Each of these principles relates to the world of AI:

INTEGRITY
Integrity requires a professional accountant to be open and honest. So, for example, if you are using AI to answer a client's request or do their work, they should know about this.

OBJECTIVITY
The Code specifically highlights the need not to place undue reliance on technology. It explicitly requires the professional accountant to consider whether they are being objective in their consideration of the output of AI. There can be a tendency, called "automation bias", for humans to assume that the output from computers or automated programmes must be reliable, when there is no, or insufficient, evidence to support this.

PROFESSIONAL COMPETENCE AND DUE CARE
The Code requires that you are aware of the standards and relevant legislation related to any work you are doing. Clearly you would also need to be competent in using any AI tool to deliver output to your employer or client.

CONFIDENTIALITY
Open versions of AI might allow anyone to use the data and potentially see data that you have uploaded. Asking a service like ChatGPT for help with a client matter, for example, might involve an inadvertent breach of client confidentiality.

PROFESSIONAL BEHAVIOUR
If you used AI without checking the veracity of the results, you could discredit the profession. Additionally, the requirement to act in the public interest would also preclude careless or inappropriate use of AI.
Wider ethical issues
Alongside accounting-specific ethical principles, there are a number of more general concerns about AI that have been raised in recent years:

JOB DISPLACEMENT
While AI can enhance efficiency, it may also lead to significant shifts in employment, requiring professionals to adapt by acquiring new skills.

ENVIRONMENTAL IMPACT
The development and operation of AI systems, particularly those requiring extensive computational resources, contribute to significant energy consumption and carbon emissions.

PRIVACY AND SURVEILLANCE
Data privacy is a familiar concern to finance professionals, but there are additional concerns with AI because it's not always clear where the data will end up. AI can process vast amounts of personal data, potentially infringing on individual privacy rights if not properly regulated.

ALIGNMENT AND CONTROL
AI systems must be carefully aligned with human values, ethical principles, and specific business goals to ensure they act in the best interests of society. Misalignment, where AI systems operate in ways that diverge from intended objectives or ethical norms, can lead to unintended and potentially harmful consequences.
Frameworks and principles for AI
There are a number of frameworks and principles for AI.

OECD AI PRINCIPLES
The OECD AI Principles were first adopted in 2019 and updated in 2024 to reflect new technological and policy developments. They set out value-based principles as well as recommendations for policy makers, and are intended to help policymakers and AI actors create AI that is innovative, trustworthy, and respects human rights and democratic values. The value-based principles clearly have various ethical issues at their heart:
- Inclusive growth, sustainable development and well-being.
- Human rights and democratic values, including fairness and privacy.
- Transparency and explainability.
- Robustness, security and safety.
- Accountability.

And here are the recommendations for policy makers:
- Investing in AI research and development.
- Fostering an inclusive AI-enabling ecosystem.
- Shaping an enabling interoperable governance and policy environment for AI.
- Building human capacity to prepare for labour market transition.
- International co-operation for trustworthy AI.
The UK's proposed regulatory framework for AI identifies three broad categories of AI risks:
- Societal harms.
- Misuse risks.
- Autonomy risks.

It also sets out five cross-sectoral principles for regulators to interpret within their remits:
- Safety, security and robustness.
- Appropriate transparency and explainability.
- Fairness.
- Accountability and governance.
- Contestability and redress.

OTHER FRAMEWORKS
The EU Artificial Intelligence Act (2024) is relatively light-touch, attempting to balance the prohibition of high-risk activities with not wishing to stifle innovation or prevent the implementation of AI with economic and societal benefits. Key concerns listed in the legislation include:
- AI systems that deploy harmful manipulative "subliminal techniques".
- AI systems that exploit specific vulnerable groups.
- AI systems used by public authorities for social scoring purposes.
- Real-time remote biometric identification systems in public spaces (with some exceptions).
PART 2: USING AI IN AN ETHICAL WAY

When to use AI
There are a number of ways that you might already be leveraging AI in your role. These could include:
- Automating repetitive tasks like data entry, transaction categorisation, and reconciliation.
- Utilising enhanced data analysis, predictions and insights.
- Anomaly and error detection to improve accuracy and client confidence.
- Enhancing the client experience with AI through automated services, recommendation algorithms, or chatbots.
Whichever of these or others are in use, professional accountants need to retain a professionally sceptical mindset, and are required to consider whether they have been objective in fulfilling their role and not influenced inappropriately by technology. Think about the risks of accepting the output of AI as correct. Automation bias – the tendency to believe the results generated by technology – is one of the most common forms of unconscious bias.
Creating financial reports
In many cases, AI might be working alongside other tools, such as data analytics tools that don't use AI. What the AI does is bring an extra level of analysis and explanation to the results of those other tools. One of the significant tools that AI has made possible is the use of natural language processing and natural language generation. This means that AI tools can both understand normal language (i.e. it doesn't have to be structured in nature) and indeed write it.
These tools are already being embedded into standard software. Of these, the one that will make an impact on all of us is Copilot, which has been incorporated into Microsoft 365 and other Microsoft platforms. It will soon be helping you to write reports, create models, and prepare presentations. The key rule of thumb is to treat AI like a junior colleague: try it out on tasks, but make sure you double- and triple-check all of its answers – particularly to begin with – to make sure it hasn’t made any really unfortunate and obvious errors.
Undertaking audit work
In recent years, audit systems have been becoming more powerful. As AI is incorporated into these systems, they attempt to offer a hybrid intelligence, where the AI leaves key decisions to the human auditor. You must be the judge of whether they have got this balance right. Data-driven audit systems can help automate an increasingly wide range of audit tasks:
- Data acquisition and ingestion (i.e. getting the client's data into the audit system).
- Audit data analytics (ADA) and visualisations to understand the client's data better.
- Sample selection, especially with complex areas such as journals.
- Substantive testing using ADA.

The Irish audit regulator, IAASA, published a review of the use of data analytics in Ireland's statutory audit market. Whilst audit data analytics tools do not necessarily use AI, the report notes that AI tools continue to learn through observation of data in real time and therefore could lead to difficulties in evidencing the exact tests run on the data. It is currently a requirement for statutory audit evidence to include the details of a test run, so that work can be retested or checked if necessary. The study also notes that there is significant firm-wide programme development, with digital accelerators, academies and central support of journals-based technology as key areas of focus.
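The evidencing concern raised by IAASA can be addressed in practice by recording exactly what test was run, with what parameters, on what data, at the time it runs. The following is a minimal sketch, not the API of any real audit platform: the function names, record fields and the round-sum journal test are all illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def run_logged_test(test_name, params, data_rows, test_fn, log):
    """Run an analytic test and record exactly what was run, on what data,
    so the work can be retested or checked later."""
    # Fingerprint the input data: re-running the same test on the same data
    # later should produce the same hash, evidencing what was tested.
    data_hash = hashlib.sha256(
        json.dumps(data_rows, sort_keys=True).encode()
    ).hexdigest()
    result = test_fn(data_rows, **params)
    log.append({
        "test": test_name,
        "parameters": params,
        "data_sha256": data_hash,
        "run_at": datetime.now(timezone.utc).isoformat(),
        "result": result,
    })
    return result

# Illustrative test: flag round-sum journal entries at or above a threshold.
def round_sum_journals(rows, threshold):
    return [r["id"] for r in rows
            if r["amount"] >= threshold and r["amount"] % 1000 == 0]

audit_log = []
journals = [
    {"id": "J1", "amount": 50000},
    {"id": "J2", "amount": 12345},
    {"id": "J3", "amount": 9000},
]
flagged = run_logged_test(
    "round_sum_journals", {"threshold": 10000}, journals,
    round_sum_journals, audit_log,
)
print(flagged)  # ['J1']
```

The point of the sketch is the log entry, not the test itself: a continuously learning AI tool that cannot produce an equivalent record of its "test run" is hard to reconcile with the evidencing requirement described above.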
The danger of bias
AI is increasingly being used in HR and talent departments, as well as in payroll to automate tasks such as hiring decisions, recruitment and workforce management. The main issue in this area is the potential for entrenched or invisible bias. If the data the AI is trained on is historical, it could be biased against women in jobs where there have traditionally been more men, or vice versa. It may be biased against ethnic minorities, perhaps making judgements based on names. Or it might be using other filtering techniques that are not obvious to the end user. Individuals could challenge why they did not progress in a recruitment process and, if the company is unable to show that they made the decision fairly, penalties could result.
Using AI systems can provide the human with more, and better, information on which to make decisions. However, we should bear in mind that just as humans have cognitive biases, AI can also "learn" to be biased if it is trained on data which is itself biased.
PART 3: ENSURING AI IS FIT FOR PURPOSE

Assessing the risks
To ensure that AI is fit for purpose, we need to understand the risks and try to predict what might go wrong. These risks pose challenges to our ethical principles, but there are things we can do to mitigate them:

BIAS AND DISCRIMINATION
Biased results can lead to discrimination and impinge on the requirement to be objective.
- Consider the training data used and whether this is, itself, biased. (Even if it is not, there is a risk of data poisoning, where an attacker tampers with the data used to train an AI model in order to produce undesirable outcomes.)
- Question the provider of the AI on how they have ensured that results are not biased.
- Consider the output of the AI and whether it appears to be biased.
- Train staff in cognitive or unconscious bias to help them more easily identify appropriate data sets for training AI, and to be alert to instances where AI might be unduly biased.
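One simple way to consider whether output "appears to be biased" is to compare selection rates across groups, along the lines of the "four-fifths rule" used in employment contexts. This sketch uses invented figures purely for illustration; real checks would need legal and statistical input.

```python
def selection_rates(outcomes):
    """outcomes: mapping of group -> (selected, total). Returns rate per group."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.
    Under the 'four-fifths rule' heuristic, values below 0.8 are a red flag."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical output of an AI screening tool.
outcomes = {
    "group_a": (45, 100),  # 45% selected
    "group_b": (27, 100),  # 27% selected
}
ratio = disparate_impact_ratio(outcomes)
print(round(ratio, 2))  # 0.6: below 0.8, so this output warrants investigation
```

A check like this does not prove or disprove bias on its own, but it gives staff a concrete, repeatable trigger for the human review described above.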
HALLUCINATIONS AND FALSE RESULTS
Placing reliance on spurious "information" or "answers" provided by AI could breach the requirement for objectivity. It would also call into question whether you had used due care in completing the work, and compromise your integrity, especially if the client was not aware you were using AI.
- Check "facts" before using the information provided by AI, especially when it is general AI, such as ChatGPT.
- Consider whether an AI tool is appropriate for use in the circumstances: is a narrower tool focusing on organisational information more likely to give a correct answer, for example?

PRIVATE DATA
Release of private data would breach confidentiality. It may also breach the requirement for professional behaviour if you are not compliant with the law.
- Consider what privacy is built into the tool. For instance, is the tool specific to your organisation, where no data is permitted to be used by the AI supplier other than in the creation of the answer? Or is it a general AI tool where the policies mean that you are handing over the data for them to use as they wish?

LACK OF TRANSPARENCY
If your decisions are challenged and you cannot explain how they were made, you lay yourself open to a charge of lack of professional competence and lack of due care. If you give poor advice that turns out to have a negative impact, integrity, objectivity and professional competence would all be questioned.
- When building or acquiring an AI tool, consider how transparent the outputs are. For example, to what extent can you explain to a customer or client how the conclusions were reached?
- Build human intervention into your use of the tools so that your organisation doesn't always just do what the AI tool suggests, except perhaps in low-risk areas.
General Data Protection Regulation
Many of GDPR's points are relevant in a general sense, even if you are not covered by legislation of this type. The guidance is designed to help organisations comply with their data protection requirements. Individuals have:
- A right to be informed.
- A right of access.
- A right to rectification.
- A right to erasure.
- A right to restrict processing.
- A right to data portability.
- A right to object.
- Rights related to automated decision-making, including profiling.
Whether you are concerned with data privacy, rights, freedoms or other risks connected with AI, taking a risk-based approach is likely to provide the best response to the risks that you might face.
Ensuring accuracy
Accountants have always worked with, relied upon and trusted technology. The term “accuracy” sounds precise, but in fact it may have different meanings in different circumstances. In some cases there is not a single correct answer. If you are trying to establish the extent of a provision required for obsolete stock, for example, whether using AI to assist or not, there is not a single figure that is right. Instead, when we talk about accuracy in this context it is about the AI producing appropriate results in the circumstances. In other cases, such as the calculation of a tax liability, accuracy is intended to be a much more precise term. We should expect AI to come up with an appropriate answer in each situation.
AI is subject to the risk of hallucinations where the answer is completely fictitious. In addition to entirely fictitious results, which are perhaps more likely to occur with generative AI with wide data training sets, there is a risk that even AI used internally could provide a result that is less than ideal for the organisation. It is vital therefore that, just like any other computer system, the results are checked against expectations.
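The "checked against expectations" step can itself be partly automated. Here is a minimal sketch under stated assumptions: the provision figures are invented, the 20% tolerance is arbitrary, and in practice the expectation would come from an independent source such as the prior period or an analytical model.

```python
def check_against_expectation(ai_value, expected, tolerance=0.20):
    """Return True if an AI-produced figure deviates from an independently
    derived expectation by more than the tolerance, i.e. needs human review."""
    if expected == 0:
        # Any non-zero figure against a zero expectation needs a look.
        return ai_value != 0
    deviation = abs(ai_value - expected) / abs(expected)
    return deviation > tolerance

# Hypothetical example: obsolete stock provision.
prior_year_provision = 120_000    # independent expectation (prior period)
ai_suggested_provision = 190_000  # figure produced by an AI tool

needs_review = check_against_expectation(ai_suggested_provision, prior_year_provision)
print(needs_review)  # True: roughly 58% deviation exceeds the 20% tolerance
```

A check like this never decides that the AI is right; it only decides when a human must look, which is exactly the scepticism the text calls for.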
Taking responsibility
Organisations must take responsibility for ensuring compliance with relevant laws, but they also need to demonstrate effectively that they are doing so. Areas that are likely to require attention in terms of accountability include:
- Leadership and oversight.
- Policies and procedures.
- Training and awareness.
- Individuals' rights (where dealing with personal data or decisions affecting individuals).
- Transparency.
- Contracts and data sharing.
- Risks.
- Records management.
- Breach and issue responses and monitoring.

Governance around the use of AI is vital. You cannot just delegate the management of all risks arising from the use of AI to developers or the IT department. Accountability is something that must be embedded from Board level downwards.
Transparency and explainability
Global AI standards often group transparency and explainability together. Transparency means that you can show what happened in the AI system, whilst explainability looks at how a decision was made. Transparency links closely with the accountant’s need to have integrity, which in turn talks about being straightforward and honest. If a business is using AI, it would be transparent to explain that this is the case. The “Recommendation of the Council on Artificial Intelligence” from the OECD states that AI actors (i.e. those involved with using AI) should commit to transparency and responsible disclosures regarding AI systems.
They should give end users meaningful information, appropriate in the context and consistent with the state of the art, in order to:
- Foster a general understanding of AI systems, including their capabilities and limitations.
- Make stakeholders aware of their interactions with AI systems, including in the workplace.
- Provide, where feasible, plain explanations of the input, logic etc., so that those affected can understand the output.
- Provide information to enable those adversely affected to challenge its output.
PART 4: HYBRID INTELLIGENCE

Incorporating ethics into AI tools
Hybrid intelligence supports ethical practices by leveraging AI’s efficiency, while ensuring human oversight in critical areas, maintaining accountability, and addressing ethical concerns like bias and transparency. The EU’s “Ethics guidelines for trustworthy AI” include a pilot version of a “Trustworthy AI Assessment List” which provides more detail on the key requirements for ensuring AI systems can be relied upon. 1 The guidelines outline a number of important questions, such as whether the organisation has carried out a fundamental rights impact assessment, to avoid any negative impact on fundamental rights.
Other useful points in the guidelines include asking whether the organisation has considered the allocation of tasks between the human and the AI. If there is human review of the output of AI, this can be used to mitigate the risks of issues arising, albeit there is still the risk of automation bias, leading the human to believe the output from the AI regardless of its robustness.
1 https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Bias in AI
There are no small number of cautionary tales that highlight how AI has gone wrong in the past. The best thing we can do is to try to learn from these stories in order to stop them happening again.

APPLE CARD
The technology behind Apple Card faced allegations of gender bias in its AI-driven credit assessment system.

AMAZON HIRING TOOL
Amazon developed an AI recruitment tool. Trained on resumes submitted over a 10-year period, the model favoured male applicants because the historical male domination of the tech industry influenced the data.

OPTUM HEALTHCARE
AI from Optum, used by health systems to spot high-risk patients for follow-up treatment, favoured white patients and discriminated against black patients, having been trained on historical data from over 100 million patients where black people had received less medical attention in the past.

Despite being from different sectors, these cases provide a cautionary tale for accountants, as they highlight the risks of getting something wrong even where there has been no intention to be biased. In a sense, AI is reflecting what humans already do, or at least what humans have already done. Using AI trained on historical data magnifies and reinforces its impact: the use of historical data that is already biased leads to further bias. Proxy bias can result when you don't have direct datasets for something, so you use a proxy.
So, for instance, you can’t ask the age of someone applying for a job, but the AI could look at the year that they graduated as a proxy. This might then result in discriminatory decisions based on age, even though age wasn’t included in the parameters. Bias can be built into the system by the AI designers too. Ask one person to set criteria for selecting a non-executive director, and they would probably provide quite a different list from someone with a different background, gender or career path. To stay on the right track, try to ensure clear delineation of AI vs human roles in decision-making. And remember to implement continuous monitoring and feedback loops to refine AI’s performance and maintain human oversight.
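The graduation-year example can be made concrete. In the hypothetical screening rule below, age is never an input (it is recorded only so we can inspect the outcome), yet filtering on graduation year reproduces age discrimination almost exactly:

```python
# Hypothetical candidate pool, invented for illustration. The "age" field is
# kept only to audit the outcome; the screening rule itself never reads it.
candidates = [
    {"name": "A", "age": 24, "grad_year": 2022},
    {"name": "B", "age": 29, "grad_year": 2018},
    {"name": "C", "age": 41, "grad_year": 2006},
    {"name": "D", "age": 55, "grad_year": 1992},
    {"name": "E", "age": 26, "grad_year": 2021},
]

def passes_screen(candidate):
    # The rule uses only the proxy feature: graduation year.
    return candidate["grad_year"] >= 2015

shortlisted = [c["name"] for c in candidates if passes_screen(c)]
rejected_ages = [c["age"] for c in candidates if not passes_screen(c)]

print(shortlisted)    # ['A', 'B', 'E']: only the under-30s survive the screen
print(rejected_ages)  # [41, 55]: every rejected candidate is over 40
```

This is why "we never used age" is not, on its own, a defence: auditing the outcome against the protected attribute, as the last two lines do, is what reveals the proxy.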
Sources of bias
Algorithmic bias results in unfair outcomes due to skewed or limited input data, unfair algorithms, or exclusionary practices during AI development. As we know, bias in datasets can therefore result in biased output from the AI that was trained on those datasets.

AVAILABILITY BIAS
A tendency to place more weight on events or experiences that immediately come to mind or are readily available than on those that are not.

CONFIRMATION BIAS
A tendency to place more weight on information that corroborates an existing belief than on information that contradicts or casts doubt on that belief.

GROUPTHINK
A tendency to think or make decisions as a group in a way that discourages creativity or individual responsibility.

OVERCONFIDENCE BIAS
A tendency to overestimate one's own ability to make accurate assessments of risk or other judgements or decisions.

ANCHORING BIAS
A tendency to use an initial piece of information as an anchor against which subsequent information is inadequately assessed.

AUTOMATION BIAS
A tendency to favour output generated from automated systems, even when human reasoning or contradictory information raises questions as to whether such output is reliable or fit for purpose.

To tackle any cognitive, or indeed AI, bias, it is important to step back and question what you are seeing and the conclusions that you are reaching.
AI ethics committees
Larger organisations should consider establishing an ethics committee. Ethics committees allow human oversight to consider both what is to be implemented and what is currently operating, whether it is working as intended, and ensuring it is appropriate. Accounting decisions make a difference to people’s lives. AI informed decisions could result in a company or branch being closed down, incorrect tax being paid, or audit opinions that are not appropriate but which investors rely on. An AI ethics committee can be a key part of the governance structure of the organisation, considering the risks and potential mitigations, as well as the value that an AI system can bring. This can be done during development or procurement stages as well as once the system is in use.
To be effective, it must operate at the leadership level, with the Board and senior management fully involved. Remember, it is ultimately the Board that accepts (or not) the risks posed by AI, and therefore it needs to be involved. There are a few clear, practical steps you can take to get started:
- Define the committee's purpose and scope.
- Assemble a diverse team.
- Establish clear guidelines and policies.
- Implement regular review processes.
- Promote training and awareness.
Summary
AI is transforming how accountants and finance professionals work, offering unprecedented efficiency and insights. But with these advancements come significant ethical challenges that demand careful consideration. These challenges arise both at the development stage and as the AI "learns". All five of the core ethical principles are at stake, alongside a number of more general societal issues.

ACCURACY
When using an AI system, you must be confident that the outputs are correct and free from bias.

TRANSPARENCY
We must be open about the fact that AI has been used, and we must remain able to explain the way in which figures have been calculated and conclusions have been arrived at.

RESPONSIBILITY
The responsibility for data and conclusions, and for the governance structures around AI, always remains with the humans.
About the author
This technical briefing is based on material written by Julia Penny, taken from the accountingcpd.net course, AI and Ethics. Julia Penny is the Immediate Past President of ICAEW, having been a non-executive ICAEW Board member since 2017 and Chair of the Board during her year as President. Julia is also a member of ICAEW Council and a past chair of both the ICAEW Technical Advisory and Ethics Advisory Committees, a former member of the Technical Strategy and Financial Reporting Faculty Boards, and of the AAT's Council and audit committee.

Outside of this volunteer ICAEW involvement, Julia is a well-known speaker and writer on audit, financial reporting, anti-money laundering and wider issues impacting the profession. She is co-author of the Bloomsbury publication A Practical Guide to UK Accounting and Auditing Standards and author of their Accounting Principles for Tax Practitioners. She also uses her technical writing skills to provide content for software companies, publishers and firms or networks wishing to write and implement their own policies and procedures.