Interpretability

The degree to which a human can understand the internal mechanics and cause-and-effect relationships within an AI model's decision-making process.

In Plain Language

How easy it is to look "under the hood" of an AI and understand its logic. A simple decision tree is very interpretable: you can follow the yes/no branches from input to answer (see the sketch below). A massive neural network, whose behaviour emerges from millions of weighted connections, is much harder to interpret.
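
To make the contrast concrete, here is a minimal Python sketch using scikit-learn (the dataset, model and parameters are illustrative choices, not anything specified by this entry). A fitted decision tree can be printed as plain if/else rules that a human can read end to end; there is no comparably readable dump of a large neural network's weights.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Fit a small tree; the depth cap keeps the rule set human-sized.
    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(iris.data, iris.target)

    # export_text renders the learned branches as readable yes/no splits,
    # which is exactly the "follow the branches" property described above.
    print(export_text(tree, feature_names=list(iris.feature_names)))

Running this prints the full decision logic as nested threshold tests (e.g. "petal width <= 0.8"), so an auditor can verify every path the model can take.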

Why This Matters

From a governance perspective, interpretability determines how effectively your organisation can audit, validate and oversee AI systems. More interpretable models are easier to govern because errors and biases can be traced to specific decision logic rather than remaining hidden in opaque weights.