Content Moderation (AI)

The use of AI systems to review, filter, and manage user-generated content on platforms, enforcing community guidelines and preventing the spread of harmful material.

In Plain Language

Using AI to review user-generated content at scale: flagging hate speech, violence, misinformation, and similar material. No platform can manually review billions of posts, so AI does the first pass and escalates uncertain cases to human moderators.
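A minimal sketch of what that first pass might look like, assuming a classifier that returns a harm score between 0 and 1 (the `classify` stub and both thresholds below are hypothetical; a real system would call a trained model or moderation API):

```python
def classify(text: str) -> float:
    """Stub scorer returning a harm probability in [0, 1].
    Illustrative only: a real deployment would call a trained model."""
    flagged_terms = {"spam", "scam"}  # hypothetical term list
    words = text.lower().split()
    hits = sum(w in flagged_terms for w in words)
    return min(1.0, hits / max(len(words), 1) * 5)

def first_pass(text: str,
               remove_threshold: float = 0.9,
               review_threshold: float = 0.5) -> str:
    """Route a post based on the model's harm score."""
    score = classify(text)
    if score >= remove_threshold:
        return "auto_remove"    # high confidence: act automatically
    if score >= review_threshold:
        return "human_review"   # uncertain: escalate to a moderator
    return "allow"

print(first_pass("check out this scam scam offer"))  # auto_remove
print(first_pass("lovely weather today"))            # allow
```

The three-way routing is the key design choice: automated action only at high confidence, with the ambiguous middle band sent to humans. That middle band is also where the governance concerns below (error rates, appeals) concentrate.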

Why This Matters

AI-powered content moderation creates its own governance challenges around accuracy, bias, and transparency. If your organisation uses AI for moderation, your governance framework should address acceptable error rates, appeals processes for affected users, and regular audits of moderation decisions.