Responsible AI signifies the move from declarations and principles to operationalization of AI accountability at the individual, organizational and societal levels. Responsible AI applies to everyone involved in the AI process and requires defined methods and roles that operationalize AI principles. Major insurance entities such as AXA and State Farm are in the early stages of adopting responsible AI. Additionally, many data science platform vendors are starting to include explainable AI capabilities as part of a broader platform. Although financial services and insurance are considered early industry adopters, insurers in particular are struggling with critical talent shortages around data science and AI, which will hold back the pace of development. The skills needed to accelerate adoption of responsible AI are likely to reside with vendors rather than insurers, which is why this capability sits in the six- to eight-year range.
Mass: High
Mass is high because responsible AI will be a critical enabler for more widespread AI trust and adoption. AI is advancing in its ability to perform skilled tasks and automate processes, enabling a higher level of human-machine collaboration. However, AI assuming more responsibility necessitates trust and transparency. As the use of data and AI becomes more closely regulated, responsible AI will become critically important for organizational use. Regulators will increasingly insist that insurers ensure that AI-based pricing and rating, screening, preapprovals and fraud analytics do not incorporate bias based on banned rating factors such as gender, sexual orientation or race. Regulators are also focusing on eradicating practices that do not treat customers fairly, such as price walking or credit-score-based pricing. In time, all insurers will need to demonstrate that such bias is not coded into their AI models, while also maintaining guardrails and measures to prevent model drift. Though responsible AI will be revolutionary, it faces a number of development and organizational challenges. Fundamentally, AI (particularly DNNs) can never be truly explainable, as achieving 100% transparency would require simplifying AI models to the point of undercutting their functionality and usefulness. Another critical challenge is that model bias can be reduced and mitigated, but not fully eliminated. Moreover, companies focused on dedicated responsible AI tools will have to compete with vendors building explainable AI capabilities into broader platform offerings.
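To make the "guardrails and measures" point concrete, the following is a minimal illustrative sketch (not a Gartner-recommended method) of the kind of fairness and drift checks an insurer might run against a pricing or screening model's outputs. All variable names, data and thresholds are hypothetical; real programs typically rely on dedicated fairness and monitoring tooling rather than hand-rolled metrics.

```python
# Illustrative sketch only: a demographic parity gap check on model decisions
# and a population stability index (PSI) check for score drift.
import numpy as np


def demographic_parity_gap(favourable: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in favourable-outcome rates between two groups."""
    rate_a = favourable[group == 0].mean()
    rate_b = favourable[group == 1].mean()
    return abs(rate_a - rate_b)


def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between the score distribution at validation time and in production."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=5000)            # hypothetical protected attribute
    favourable = rng.random(5000) < 0.6              # hypothetical model decisions
    baseline_scores = rng.normal(0.50, 0.10, 5000)   # scores at model validation time
    current_scores = rng.normal(0.55, 0.12, 5000)    # scores observed in production

    print("Demographic parity gap:", demographic_parity_gap(favourable, group))
    print("PSI (drift):", population_stability_index(baseline_scores, current_scores))
```

In practice, checks like these would be run routinely against production data, with alert thresholds (for example, flagging a PSI above a chosen cutoff) feeding the governance and model-risk processes described above.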
Recommended Actions: