AI Bias

Module: ethics

What it is

AI bias refers to systematic errors in AI systems that produce unfair outcomes—favouring or disadvantaging particular groups of people. Bias can emerge from the training data, the algorithm's design, or the context in which the system is deployed. It is rarely intentional, but its effects can cause real-world harm.
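One common way to make bias concrete is to compare outcome rates across groups. The sketch below (with entirely invented toy data, and a hypothetical `selection_rates` helper) checks one simple measure, the demographic parity gap: the difference between the highest and lowest positive-outcome rates across groups. This is only one of many fairness metrics, and a large gap is a signal to investigate, not proof of discrimination.

```python
# Toy illustration with invented data: each record pairs a group label
# with a binary decision (1 = positive outcome, e.g. loan approved).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def selection_rates(records):
    """Return the fraction of positive decisions for each group."""
    totals, positives = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # demographic parity gap: 0.50
```

In this toy data, group_a receives positive outcomes three times as often as group_b, so the gap is 0.50. Real audits use richer metrics (equalised odds, calibration) and much larger samples, but the basic move, disaggregating outcomes by group, is the same.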

Why it matters

Biased AI can perpetuate and amplify existing inequalities. When AI is used for hiring, lending, healthcare, or criminal justice, biased outputs affect real lives. Understanding AI bias helps you recognise when outputs might be unfair, trace where the unfairness comes from, and see why diverse perspectives in AI development matter.