Label Bias

Systematic errors in data annotations caused by the subjective judgments, cultural backgrounds, or incentive structures of human annotators.

In Plain Language

The humans labelling data bring their own biases to the task. If labellers consistently rate certain accents as "less professional," a speech AI trained on those labels will learn that bias.

Why This Matters

Label bias is a data governance risk that directly affects AI fairness. Your governance framework should include quality assurance processes for data labelling, including diverse annotator teams and inter-rater reliability checks.
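One common inter-rater reliability check is Cohen's kappa, which measures how much two annotators agree beyond what chance alone would produce. The sketch below is a minimal illustration; the annotator labels are hypothetical.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled identically.
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    pe = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical example: two annotators rating the same 8 voice clips.
a = ["professional", "professional", "casual", "casual",
     "professional", "casual", "professional", "casual"]
b = ["professional", "casual", "casual", "casual",
     "professional", "casual", "professional", "professional"]
print(round(cohens_kappa(a, b), 2))  # → 0.5
```

Values near 1.0 indicate strong agreement; values near 0 suggest the labels are little better than chance, a signal that guidelines are ambiguous or annotator bias is creeping in.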