Regulation

Canada Directive on Automated Decision-Making

Requires federal departments to assess, mitigate and publish impacts of automated decision systems. Uses Algorithmic Impact Assessment (AIA) tool with four impact levels.

April 1, 2020

Our take on this

Canada was first out of the gate with mandatory rules for government AI, and it got the structure right. The Directive on Automated Decision-Making shows what happens when you move beyond principles to actual accountability: every automated decision system used by a federal department must go through an Algorithmic Impact Assessment, which determines the required safeguards based on one of four impact levels.

What makes this interesting for Australian organisations is the model it provides. The AIA tool is open-source and freely available, and many organisations worldwide use it as a starting point for their own AI assessments. It asks the right questions: What decisions is this system making? Who is affected? What happens when it gets a decision wrong? It then categorises the risk and tells you which controls you need.
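To make the tiered idea concrete, here is a minimal sketch of how a question-to-level-to-controls assessment could be encoded. The questions, scoring weights, and control names are illustrative assumptions for this sketch only; they are not the actual AIA questionnaire or its scoring.

```python
# Hypothetical tiered impact assessment, loosely modelled on the idea of
# four impact levels. All questions, weights, and controls below are
# illustrative assumptions, not the real AIA tool.
from dataclasses import dataclass

@dataclass
class Assessment:
    affects_rights: bool   # does the decision affect legal rights or benefits?
    reversible: bool       # can a wrong decision be easily reversed?
    people_affected: int   # rough number of individuals affected per year

def impact_level(a: Assessment) -> int:
    """Map answers to an impact level from 1 (low) to 4 (very high)."""
    score = 0
    if a.affects_rights:
        score += 2
    if not a.reversible:
        score += 1
    if a.people_affected > 10_000:
        score += 1
    return min(4, max(1, score))

# Controls accumulate as the impact level rises (illustrative names).
CONTROLS = {
    1: ["plain-language notice"],
    2: ["plain-language notice", "peer review"],
    3: ["plain-language notice", "peer review", "human review of decisions"],
    4: ["plain-language notice", "peer review", "human review of decisions",
        "published assessment results"],
}

level = impact_level(Assessment(affects_rights=True, reversible=False,
                                people_affected=50_000))
print(level, CONTROLS[level])  # → 4 with all four controls required
```

The point of the structure, which the real AIA shares, is that the assessment output is not a pass/fail verdict but a level that mechanically determines which safeguards apply.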

For you, this matters in two ways. First, if you're selling AI solutions to government, understanding this framework helps you speak their language and anticipate their requirements. Second, even if you're in the private sector, this is a tested approach to AI risk assessment that's more practical than many corporate frameworks. The Canadian model shows that transparency and accountability aren't just nice-to-haves—they're achievable with the right structure. We often recommend starting with their AIA tool when clients need a quick way to categorise AI risks.