Where and how model monitoring factors into responsible AI performance
[Figure: the ML lifecycle, from business understanding and hypothesis testing, input data, EDA, pre-processing, and feature engineering and selection, through model development, evaluation, and selection, to deployment, prediction, and monitoring, annotated with the responsible AI concerns at each stage: RAI definition, data privacy, data bias, XAI and privacy, model bias and privacy, model management, model accountability, prediction bias, and drift.]
Drift and model accuracy

Drift refers to the phenomenon where the performance of an ML model degrades over time due to changes in the input data distribution. It occurs when the patterns or characteristics of the data the model encounters in the real world differ from those of the data it was trained on. This distribution shift can compromise the model's accuracy and effectiveness, hindering its ability to make precise predictions on unfamiliar data.
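As a concrete illustration, input drift on a single feature can be flagged by comparing a window of production values against the training baseline with a two-sample statistical test. The following is a minimal sketch using SciPy's Kolmogorov-Smirnov test; the synthetic arrays, window sizes, and 0.05 significance threshold are illustrative assumptions, not a prescribed configuration.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(seed=42)

    # Illustrative data: training baseline vs. a shifted production window.
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
    live_feature = rng.normal(loc=0.5, scale=1.2, size=1000)  # distribution has moved

    # Two-sample KS test: a small p-value suggests the live values no longer
    # follow the training distribution for this feature.
    result = ks_2samp(train_feature, live_feature)
    if result.pvalue < 0.05:  # assumed significance threshold
        print(f"Drift suspected (KS={result.statistic:.3f}, p={result.pvalue:.4f})")
    else:
        print(f"No drift detected (KS={result.statistic:.3f}, p={result.pvalue:.4f})")

In practice such a test would run per feature on a schedule, with thresholds tuned to tolerate benign seasonal variation.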
There are four main categories of drift, each with its own causes and challenges.
Concept drift

This occurs when the statistical characteristics of the variable being predicted by the model undergo changes over time. Real-world scenarios, market trends, and customer behaviors are dynamic and can vary. When the patterns a model learns during training no longer apply, the model's accuracy can decrease.
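One pragmatic way to surface concept drift is to track model accuracy over a sliding window of labeled production data and alert when it falls a set margin below the validation baseline. The sketch below assumes a generic fitted classifier with a predict() method and a stream of labeled feedback; the window size and tolerance are illustrative, not recommended values.

    from collections import deque

    def monitor_accuracy(stream, model, baseline_acc, window=500, tolerance=0.05):
        """Flag possible concept drift when rolling accuracy drops more than
        `tolerance` below `baseline_acc`. `stream` yields (features, label)
        pairs and `model` exposes predict(); all names here are assumptions."""
        recent = deque(maxlen=window)  # 1 = correct prediction, 0 = incorrect
        for i, (x, y) in enumerate(stream):
            recent.append(int(model.predict([x])[0] == y))
            if len(recent) == window:
                rolling_acc = sum(recent) / window
                if rolling_acc < baseline_acc - tolerance:
                    print(f"Possible concept drift at sample {i}: "
                          f"rolling accuracy {rolling_acc:.3f} "
                          f"vs baseline {baseline_acc:.3f}")

Because true labels often arrive with a delay, a monitor like this is usually paired with the label-free distribution tests shown earlier.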
Anomalies and outliers

Identifying anomalies or outliers in data or model predictions can indicate problems such as pipeline bugs, data distribution shifts, or model issues. Detecting these anomalies is challenging, especially in complex data or when the model's normal behavior is unclear.
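One common way to operationalize this (an example choice here, not a method prescribed above) is to fit an isolation forest on the training data and score incoming records against it. A minimal sketch with scikit-learn, where the synthetic data and the assumed 1% contamination rate are illustrative:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=0)

    # Illustrative data: a clean training set, then new records with a few
    # injected outliers far from the training distribution.
    X_train = rng.normal(loc=0.0, scale=1.0, size=(2000, 3))
    X_new = np.vstack([rng.normal(size=(50, 3)),
                       rng.normal(loc=8.0, size=(5, 3))])

    # contamination encodes an assumed prior on the expected outlier fraction.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)
    labels = detector.predict(X_new)  # -1 = anomaly, 1 = normal
    print(f"{(labels == -1).sum()} anomalous records out of {len(X_new)}")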