Emerging Tech Impact Radar: AI in Insurance

Demonstrate cost-benefit analysis by comparing the use of synthetic data combined with real data versus real data alone to train AI models for insurance use cases such as property inspection, damage assessment and post-catastrophe analysis (a minimal comparison sketch follows). ■
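As one minimal sketch of this kind of cost-benefit comparison (not a prescribed method), the Python example below trains the same classifier twice, once on real data alone and once on real data augmented with synthetic records, and evaluates both on the same held-out real data. The loader functions load_real_claims and load_synthetic_claims are hypothetical placeholders for an insurer's actual data and synthetic-data generator.

```python
# Hypothetical sketch: compare a model trained on real data alone
# versus real + synthetic data for an insurance damage-assessment task.
# load_real_claims / load_synthetic_claims are placeholder loaders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def load_real_claims():
    # Placeholder: substitute your real (features, labels) data here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))
    y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
    return X, y

def load_synthetic_claims(n=2000):
    # Placeholder: substitute the output of your synthetic-data generator.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(n, 8))
    y = (X[:, 0] + rng.normal(scale=0.7, size=n) > 0).astype(int)
    return X, y

X_real, y_real = load_real_claims()
X_train, X_test, y_train, y_test = train_test_split(
    X_real, y_real, test_size=0.3, random_state=42)

# Baseline: train on real data only.
base = GradientBoostingClassifier().fit(X_train, y_train)

# Augmented: train on real data plus synthetic records.
X_syn, y_syn = load_synthetic_claims()
aug = GradientBoostingClassifier().fit(
    np.vstack([X_train, X_syn]), np.concatenate([y_train, y_syn]))

# Always evaluate both models on the same held-out *real* data.
print("real only       :", accuracy_score(y_test, base.predict(X_test)))
print("real + synthetic:", accuracy_score(y_test, aug.predict(X_test)))
```

The benefit side of the comparison is the metric delta on real holdout data; the cost side would weigh the expense of generating synthetic records against acquiring and labeling additional real inspection imagery or claims records.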

Responsible AI

Analysis by: Moutusi Sau, Danielle Casey, Pieter den Hamer, Svetlana Sicular

Description: “Responsible AI” is an umbrella term for many aspects of making the right business and ethical choices when an organization adopts AI. Examples include:

Being transparent with the use of AI

Mitigating bias in algorithms

Securing models against subversion and abuse

Protecting the privacy of customer information and ensuring regulatory compliance ■

Responsible AI operationalizes organizational responsibilities and practices that ensure positive and accountable AI development and exploitation.

Sample Vendors: Dataiku; DreamQuark; EazyML; FICO; Google; H2O.ai; IBM; Microsoft; MOSTLY AI; SAS; TAZI.AI; TruEra

Range: Long (6 to 8 Years)

Responsible AI remains six to eight years from early majority adoption in insurance. Insurers may recognize responsible AI as a near-term priority, and demand for it is growing; however, the underlying technologies and the ability to implement responsible AI fully are still maturing. Product leaders delivering responsible AI focus on tools that address one or more of the following: bias mitigation and fairness, explainability, trust and transparency, and privacy and regulatory compliance. Responsible AI offerings include capabilities such as data leakage detection, model documentation, model drift and bias monitoring, and guardrails and other risk management functionality. These capabilities help organizations address security, privacy and auditability concerns.
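To make two of these monitoring capabilities concrete, the following is a minimal, illustrative sketch rather than any vendor's API: it computes a population stability index (PSI) as a simple drift signal and a demographic parity difference as a simple bias signal. The function names, thresholds and synthetic inputs are assumptions for illustration only.

```python
# Hypothetical sketch of two responsible-AI monitoring checks:
# (1) feature drift via population stability index (PSI), and
# (2) demographic parity difference as a simple bias signal.
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between a baseline and a live sample.
    Values outside the baseline range are ignored for simplicity."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def demographic_parity_diff(preds, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = preds[group == 0].mean()
    rate_b = preds[group == 1].mean()
    return float(abs(rate_a - rate_b))

# Synthetic stand-ins for training-time and production data.
rng = np.random.default_rng(7)
baseline = rng.normal(0.0, 1.0, 5000)   # feature values at training time
live = rng.normal(0.3, 1.1, 5000)       # feature values in production
preds = (rng.random(5000) < 0.4).astype(int)
group = rng.integers(0, 2, 5000)        # hypothetical protected attribute

print("PSI:", psi(baseline, live))
print("Parity diff:", demographic_parity_diff(preds, group))
```

In practice, a PSI above roughly 0.2 is a common rule of thumb for flagging drift, and parity differences are typically tracked per protected attribute with thresholds set by the insurer's own fairness policy.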
