
3. FAIRNESS AND EQUITY

Biases exist not only in the data we use but also in how we interpret that data. For instance, GAI models may exhibit stereotypes, such as generating images of only female nurses when prompted for images of nurses in a hospital setting. Fairness and bias in AI, including generative AI, can stem from several factors, among them unrepresentative training data, algorithmic design choices, and the composition of the teams that build these systems.

POTENTIAL SOLUTION

To address biases effectively, we must promote diverse and representative training data, establish clear guidelines for algorithmic design, incorporate fairness metrics, foster inclusive development teams, and continuously monitor AI systems.
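As one illustration of what "incorporating fairness metrics" can look like in practice, the sketch below computes a demographic parity gap: the spread in favorable-outcome rates across groups. The function name and the toy data are hypothetical, not part of any particular toolkit.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest favorable-outcome rate
    across groups; 0.0 indicates demographic parity."""
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy data: 1 = favorable outcome, grouped by a (hypothetical)
# demographic attribute "A" or "B".
preds = [1, 1, 1, 0, 1, 0, 1, 1]
groups = ["A", "A", "B", "B", "A", "B", "A", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5 for this toy data

Tracking a metric like this over time, rather than once at release, is what turns fairness measurement into the continuous monitoring the paragraph above calls for.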

4. ROBUSTNESS AND STABILITY

Maintaining consistency is a significant challenge when working with intricate and diverse data sets; ensuring consistent patterns and distributions across samples can be difficult. The practical application of generative models can also be limited by their tendency to produce unrealistic, implausible, or inadequate outputs that fail to capture the desired attributes. Moreover, generative models can produce outputs incorporating information or features absent from the training data, a failure mode known as hallucination. A model may likewise produce varying responses to the same prompt, whether issued concurrently or at different times.
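A simple way to quantify this kind of instability is to sample the same prompt repeatedly and score how similar the responses are. The sketch below is a minimal illustration under stated assumptions: generate is a hypothetical stand-in for whatever model call is being audited, and difflib's character-level sequence matching is just one cheap similarity choice.

import difflib
import random
from itertools import combinations

def consistency_score(generate, prompt, n_samples=5):
    """Mean pairwise text similarity of repeated responses to one
    prompt; values near 1.0 suggest stable, repeatable behavior."""
    responses = [generate(prompt) for _ in range(n_samples)]
    sims = [difflib.SequenceMatcher(None, a, b).ratio()
            for a, b in combinations(responses, 2)]
    return sum(sims) / len(sims)

# Stub standing in for a real model call; it simulates a system
# that sometimes drifts between answers to the same question.
def generate(prompt):
    return random.choice([
        "Paris is the capital of France.",
        "The capital of France is Paris.",
        "Lyon is the capital of France.",
    ])

print(consistency_score(generate, "What is the capital of France?"))

In a production audit, an embedding-based similarity would be more robust than character-level matching, but the monitoring pattern, repeated sampling plus a drift score, is the same.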
