Trust Calibration
The process of aligning a user's trust in an AI system with the system's actual capabilities and reliability, avoiding both over-trust and under-trust.
In Plain Language
Helping users trust AI the right amount: not too much, not too little. Users should know when the AI is likely reliable and when they should double-check its work.
Why This Matters
Trust calibration is a governance objective. Users who over-trust an AI system accept flawed outputs without checking them, while users who under-trust it override reliable outputs and forfeit the system's value. Governance frameworks should support appropriately calibrated trust through transparency, clear communication of AI limitations, and regular performance reporting.
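As an illustration of how the "regular performance reporting" piece could be grounded in data, here is a minimal Python sketch (all names are hypothetical, not from any specific framework) that compares logged user reliance against actual AI correctness, surfacing over-trust and under-trust rates:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One logged decision: did the user rely on the AI, and was the AI right?"""
    user_relied: bool   # user accepted the AI's output without override
    ai_correct: bool    # the AI's output was later judged correct

def trust_calibration_report(log: list[Interaction]) -> dict[str, float]:
    """Summarise how well user reliance tracked actual AI reliability.

    over_trust:  fraction of wrong AI outputs the user relied on anyway.
    under_trust: fraction of correct AI outputs the user overrode.
    """
    wrong = [i for i in log if not i.ai_correct]
    right = [i for i in log if i.ai_correct]
    return {
        "ai_accuracy": len(right) / len(log) if log else 0.0,
        "over_trust": sum(i.user_relied for i in wrong) / len(wrong) if wrong else 0.0,
        "under_trust": sum(not i.user_relied for i in right) / len(right) if right else 0.0,
    }

if __name__ == "__main__":
    sample = [
        Interaction(user_relied=True, ai_correct=True),
        Interaction(user_relied=True, ai_correct=False),   # over-trust
        Interaction(user_relied=False, ai_correct=True),   # under-trust
        Interaction(user_relied=True, ai_correct=True),
    ]
    print(trust_calibration_report(sample))
```

Tracked over time, a falling over-trust rate and a falling under-trust rate would both indicate that user trust is converging on the system's actual reliability.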
