What methodologies do you use to evaluate the fairness and effectiveness of an AI model?
How do you measure the impact of bias mitigation strategies on AI model performance?
What role do explainability and transparency play in your approach to AI bias mitigation?
Discuss any tools (external and internal) you use for bias detection and mitigation.
Discuss the trade-offs you believe exist between model accuracy and fairness. How do you navigate these in your work?
How do you address small sample sizes or imbalanced datasets when testing for bias?
How do you deal with situations where the employer cannot collect, has not collected, or has not been able to collect demographic data about applicants/employees?
Can your methodology detect bias that emerges after deployment (e.g., feedback loops, drift)?
How does your approach to AI bias auditing and mitigation differ from that of other consultants?
Do you document the limitations of your methodology, and how are these communicated to clients?
What specific information or resources do you need before starting an audit?
Does your methodology test for bias or disparate impact related to all protected categories for which data is available and testable, meaning all categories/classes protected under federal law or any state law, not just the categories currently required to be tested for bias under NYC Local Law 144 or any other AI anti-discrimination law? In California, at a minimum, this includes race, ethnicity, national origin, sex, gender, sexual orientation, gender identity, religious or philosophical beliefs, age, physical or mental disability, medical condition, veteran or military status, familial status, language, and union membership.
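For illustration only, and not part of the questionnaire: the disparate-impact testing referenced in the questions above (for example, the impact ratios reported in NYC Local Law 144 style audits) is commonly computed as each group's selection rate divided by the selection rate of the most-selected group. The sketch below is a minimal example of that calculation; the column names ("group", "selected") and the data are hypothetical, and this does not represent any particular auditor's methodology.

```python
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate of each group divided by the highest group's selection rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical applicant-level data: 1 = selected to advance, 0 = not selected.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0],
})

print(impact_ratios(df, "group", "selected"))
# Groups with a ratio below roughly 0.80 are often flagged for further review
# (the "four-fifths rule"), though small samples and imbalanced data need the
# additional care raised in the questions above.
```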