Guardrails

Software mechanisms and constraints implemented around AI models to prevent harmful, biased, or policy-violating outputs and to ensure safe, responsible behaviour.

In Plain Language

Safety barriers built around an AI system to stop it from doing harmful things. Like guardrails on a highway: the AI can operate freely within bounds but can't veer into dangerous territory.

Why This Matters

Guardrails are a practical implementation of AI governance policies. They translate high-level principles into enforceable technical controls that prevent AI systems from producing harmful or non-compliant outputs.
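As one illustration of translating policy into a technical control, the sketch below shows a minimal output guardrail that screens model text against blocked patterns before it reaches the user. The patterns, function name, and refusal message are all hypothetical placeholders, not a real policy or any specific library's API.

```python
import re

# Illustrative blocked patterns only; a real deployment would use
# policy-derived rules, classifiers, or a dedicated moderation service.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like number (PII leak)
    re.compile(r"(?i)\bignore previous instructions\b"),  # prompt-injection echo
]

def apply_guardrail(model_output: str) -> str:
    """Return the output unchanged if it passes all checks,
    otherwise substitute a safe refusal message."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return "[Blocked by guardrail: output violated content policy]"
    return model_output
```

In practice such checks sit at both the input side (filtering prompts) and the output side (filtering responses), so a single policy can be enforced regardless of how the model was prompted.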