BIAS AND FAIRNESS
ML models are shaped by the data they learn from. If the training data contains biases, the models may replicate and perpetuate them. Ensuring fairness in model predictions is essential to prevent unfair treatment of certain groups, which can have unethical or illegal consequences. Fairness monitoring observes model predictions to prevent disproportionate disadvantages for specific groups or individuals. Bias detection identifies and explains inherent biases within the model or its training data by analyzing input features, model decisions, and outcomes for systematic bias or discrimination. It is crucial to recognize that bias may appear subtly, for example through seemingly innocuous features that act as proxies for prohibited attributes.
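As a minimal sketch of fairness monitoring, assuming binary predictions are logged alongside a sensitive attribute in a pandas DataFrame, a simple check is the gap in positive-prediction rates between groups (a demographic parity difference). The column names, sample records, and alert threshold below are illustrative assumptions, not a prescribed setup.

import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           pred_col: str = "prediction") -> float:
    """Return the largest gap in positive-prediction rates between groups.

    Assumes pred_col holds binary predictions (0/1) and group_col identifies
    the sensitive attribute; both column names are hypothetical.
    """
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical logged predictions for one monitoring window.
batch = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A"],
    "prediction": [1, 0, 1, 1, 1, 0],
})

gap = demographic_parity_gap(batch)
if gap > 0.2:  # the alert threshold is a policy choice, shown only for illustration
    print(f"Fairness alert: demographic parity gap = {gap:.2f}")

Running such a check per scoring window, rather than once at training time, is what turns bias detection into ongoing fairness monitoring.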
WHEN DRIFT IS DETECTED
Ensuring AI models' ongoing performance and reliability in real-world scenarios requires diligent monitoring and handling of drift. Unfortunately, there is no silver-bullet solution.
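One common way to surface drift in the first place, offered here as a hedged illustration rather than the only approach, is a two-sample Kolmogorov-Smirnov test comparing a feature's recent production values against its training-time reference distribution. The synthetic data and significance threshold below are assumptions for demonstration.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Hypothetical data: reference values from training, recent production values.
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
production = rng.normal(loc=0.4, scale=1.0, size=1_000)  # shifted mean simulates drift

statistic, p_value = ks_2samp(reference, production)
if p_value < 0.01:  # the significance threshold is a judgment call
    print(f"Possible drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")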
There are several methods for monitoring model accuracy; some common ones are:
Retraining with new data
The model is retrained periodically using recent data, and its predictions are compared to the actual outcomes to gauge accuracy. This process often includes re-tuning hyperparameters.
Online evaluation
In this real-time evaluation, the model's predictions are compared to actual outcomes as they occur. This method requires a mechanism for capturing predictions and outcomes in real time (see the sketch after this list).
Offline evaluation
This method involves maintaining a holdout data set on which the model is evaluated periodically. This data set should reflect recent changes in the data distribution.
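As a minimal sketch of online evaluation, assuming predictions and their later-arriving ground-truth outcomes can be paired as they stream in, a rolling accuracy over the most recent pairs can be compared against an alert threshold. The class name, window size, and threshold are illustrative assumptions.

from collections import deque

class RollingAccuracyMonitor:
    """Track accuracy over the most recent prediction/outcome pairs."""

    def __init__(self, window_size: int = 500, alert_threshold: float = 0.85):
        self.window = deque(maxlen=window_size)  # stores 1 for a correct prediction, else 0
        self.alert_threshold = alert_threshold

    def record(self, prediction, actual) -> None:
        self.window.append(1 if prediction == actual else 0)

    @property
    def accuracy(self) -> float:
        return sum(self.window) / len(self.window) if self.window else float("nan")

    def is_degraded(self) -> bool:
        # Only alert once the window is full, to avoid noisy early readings.
        return len(self.window) == self.window.maxlen and self.accuracy < self.alert_threshold

# Hypothetical stream of (prediction, actual) pairs captured in production.
monitor = RollingAccuracyMonitor(window_size=3, alert_threshold=0.7)
for pred, actual in [(1, 1), (0, 1), (1, 1), (0, 0)]:
    monitor.record(pred, actual)
    if monitor.is_degraded():
        print(f"Accuracy dropped to {monitor.accuracy:.2f}; consider retraining")

The same rolling comparison can feed either path above: a sustained drop can trigger retraining with new data, while the captured pairs double as a refreshed holdout set for offline evaluation.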