How do societal biases get reflected in AI models?
Explain the difference between data bias and algorithmic bias.
Explain demographic parity and its importance in AI fairness.
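To make demographic parity concrete, here is a minimal sketch that computes the gap in positive-prediction rates between two groups. It assumes binary predictions and a binary group attribute; the function and variable names (`preds`, `groups`, `demographic_parity_gap`) are illustrative, not from any particular library.

```python
# Demographic parity sketch: a model satisfies demographic parity when the
# rate of positive predictions is the same for every group.

def positive_rate(preds, groups, group):
    """Fraction of positive predictions among members of `group`."""
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive rates between the two groups present."""
    a, b = sorted(set(groups))
    return abs(positive_rate(preds, groups, a) - positive_rate(preds, groups, b))

# Toy example: group "A" receives a positive outcome 3/4 of the time,
# group "B" only 1/4 of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap of 0 means parity holds exactly; in practice a small tolerance is usually allowed.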
What challenges do organizations face in implementing fairness in AI models?
Can AI systems ever be completely free of bias? Why or why not?
Explain the risks of adversarial attacks on AI models.
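A fast-gradient-sign-style perturbation can be sketched on a toy logistic model: nudge each input feature in the direction that increases the loss. The weights, input, and epsilon below are illustrative stand-ins, not a real trained model.

```python
import math

# FGSM-style sketch: perturb x by eps * sign(d loss / d x) to degrade the
# model's prediction. Toy logistic model; all numbers are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w, x, y, eps):
    """Perturb x against cross-entropy loss with true label y."""
    p = predict(w, x)
    # For logistic cross-entropy: d(loss)/d(x_i) = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w = [2.0, -1.0]
x = [1.0, 1.0]            # clean input, true label y = 1
print(predict(w, x))      # confident, correct prediction
x_adv = fgsm(w, x, y=1, eps=0.5)
print(predict(w, x_adv))  # confidence drops after a small perturbation
```

The point of the sketch is that a tiny, targeted input change can flip or weaken a prediction, which is the core risk behind adversarial attacks.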
What measures can improve the robustness of AI systems?
How can preprocessing techniques reduce bias in datasets?
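One widely discussed preprocessing technique is reweighing: assign each training example a weight so that, in the weighted data, group membership is statistically independent of the label. The sketch below assumes a binary label and a categorical group attribute; names are illustrative.

```python
from collections import Counter

# Reweighing sketch: weight = expected frequency / observed frequency for
# each (group, label) pair, so the weighted data shows no group-label
# association. Variable names are illustrative.

def reweigh(groups, labels):
    n = len(labels)
    g_count = Counter(groups)
    y_count = Counter(labels)
    gy_count = Counter(zip(groups, labels))
    return [
        (g_count[g] * y_count[y]) / (n * gy_count[(g, y)])
        for g, y in zip(groups, labels)
    ]

groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
print(reweigh(groups, labels))  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

Over-represented combinations (here, positives in group "A") get down-weighted, while under-represented ones get up-weighted, before any model is trained.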
What is in-processing bias mitigation, and how does it work?
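In-processing mitigation changes the training objective itself, for example by adding a fairness penalty to the loss so the optimizer trades accuracy against group parity. The sketch below combines log loss with a demographic-parity-style penalty; the weighting `lam` and the penalty form are illustrative choices, not a specific library's API.

```python
import math

# In-processing sketch: total loss = task loss + lam * fairness penalty.
# Minimizing this jointly pushes the model toward both accuracy and parity.

def log_loss(probs, labels):
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels)) / len(labels)

def parity_penalty(probs, groups):
    """Gap in mean predicted score between the two groups present."""
    a, b = sorted(set(groups))
    mean = lambda g: (sum(p for p, gr in zip(probs, groups) if gr == g)
                      / groups.count(g))
    return abs(mean(a) - mean(b))

def fair_loss(probs, labels, groups, lam=1.0):
    return log_loss(probs, labels) + lam * parity_penalty(probs, groups)

probs  = [0.9, 0.8, 0.3, 0.2]
labels = [1, 1, 0, 0]
groups = ["A", "A", "B", "B"]
print(fair_loss(probs, labels, groups))  # larger lam penalizes the group gap more
```

With `lam = 0` this reduces to ordinary training; increasing `lam` forces the optimizer to close the gap between groups even at some cost to raw accuracy.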
How do you measure fairness in an AI model?
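Beyond raw positive rates, a common measurement is equal opportunity: compare true-positive rates across groups, i.e. P(pred = 1 | y = 1, group). The sketch below assumes binary predictions, labels, and groups; names are illustrative.

```python
# Equal-opportunity sketch: a fairness metric that compares how often each
# group's genuinely positive cases are correctly predicted positive.

def rate(preds, mask):
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

def equal_opportunity_gap(preds, labels, groups):
    """TPR difference between the two groups present."""
    a, b = sorted(set(groups))
    tpr = lambda g: rate(preds, [gr == g and y == 1
                                 for gr, y in zip(groups, labels)])
    return abs(tpr(a) - tpr(b))

preds  = [1, 0, 1, 1, 0, 0]
labels = [1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print(equal_opportunity_gap(preds, labels, groups))  # 0.5
```

Here group "A" has its positives caught half the time while group "B" has all of its positives caught, so the gap is 0.5; different metrics (parity, equal opportunity, predictive parity) can disagree, so the choice of metric is itself a fairness decision.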
How do biases in AI models amplify existing inequalities?
Provide examples of industries where fairness in AI is particularly critical.
What techniques can improve the explainability of AI models?
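One model-agnostic explainability technique is permutation importance: permute one feature column and measure how much accuracy drops. The sketch below uses a fixed permutation (reversal) for reproducibility, where real implementations shuffle randomly; the model and data are toy stand-ins.

```python
# Permutation-importance sketch: if scrambling a feature destroys accuracy,
# the model relies on that feature; if accuracy is unchanged, it does not.

def accuracy(model, X, y):
    return sum(model(x) == yi for x, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature):
    col = [x[feature] for x in X][::-1]  # fixed permutation: reverse the column
    X_perm = [x[:feature] + [c] + x[feature + 1:] for x, c in zip(X, col)]
    return accuracy(model, X, y) - accuracy(model, X_perm, y)

model = lambda x: int(x[0] > 0.5)  # toy model: only feature 0 matters
X = [[0.9, 0.1], [0.8, 0.9], [0.1, 0.2], [0.2, 0.8]]
y = [1, 1, 0, 0]
print(permutation_importance(model, X, y, feature=0))  # 1.0: feature 0 is decisive
print(permutation_importance(model, X, y, feature=1))  # 0.0: feature 1 is ignored
```

The output ranks features by how much the model depends on them, which gives a first answer to "why did the model decide this?" without inspecting its internals.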
What ethical concerns arise when AI models are treated as "black boxes"?
What are the societal benefits of explainable AI?