LIME
Local Interpretable Model-agnostic Explanations. A technique that explains individual predictions by approximating the model locally with an interpretable model.
In Plain Language
A technique that explains a single AI decision by building a simpler, understandable model around it. It's like asking, "For this one customer, what were the top reasons the AI said 'no'?"
Why This Matters
LIME is a practical tool for meeting explainability requirements in your governance framework. It enables your organisation to provide decision-level explanations to affected individuals, supporting compliance with right-to-explanation obligations such as those arising under the GDPR.
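The core idea behind LIME can be sketched in a few lines: sample perturbed points around the instance, weight them by proximity, and fit a simple linear surrogate whose coefficients serve as the explanation. The helper below (`lime_explain` is a hypothetical name, not the official `lime` library API) is a minimal illustration assuming numeric tabular features and a regression-style black box.

```python
import numpy as np

def lime_explain(predict, x, num_samples=500, kernel_width=0.75, seed=0):
    """Locally approximate predict() around x with a weighted linear model.

    Hypothetical sketch of the LIME idea: perturb x with Gaussian noise,
    weight samples by proximity to x, and solve weighted least squares.
    Returns one coefficient per feature; larger magnitude means greater
    local influence on the prediction.
    """
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=0.5, size=(num_samples, x.size))
    y = np.array([predict(row) for row in X])
    dist = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)     # proximity kernel
    A = np.hstack([X, np.ones((num_samples, 1))])    # add intercept column
    sw = np.sqrt(w)[:, None]                         # weighted least squares
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]                                 # drop the intercept

# Toy black box: locally, only feature 0 matters much
black_box = lambda row: 3.0 * row[0] + 0.1 * np.sin(row[1])
x = np.array([1.0, 2.0])
weights = lime_explain(black_box, x)
```

For the instance above, the surrogate recovers a coefficient near 3.0 for feature 0 and a near-zero coefficient for feature 1, which is exactly the kind of "top reasons for this one decision" output described earlier. Production use would rely on the `lime` package itself, which adds sampling strategies for categorical, text, and image data.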
