
Most AI risk inside an organisation was never built there. It arrived in a contract. Yet most AI governance programmes apply rigorous oversight to internal systems while vendor AI runs largely unchecked. This post makes the case for treating third-party AI tools as part of your AI estate and shows what genuine AI vendor governance looks like in practice.
Part 7 of 10 — AI Governance at Pace
The SaaS platform with an embedded language model. The HR tool that scores candidates. The fraud detection engine your payments team adopted last quarter. The analytics suite that now shapes pricing decisions. Each of these carries AI behaviour, model assumptions and data handling choices that belong to someone else, but the consequences of getting them wrong sit squarely with your organisation.
This is the quiet risk gap in most AI governance programmes: internal systems get registered, assessed and monitored while vendor AI runs largely unchecked. If your programme governs what you build but not what you buy, you are only governing half your AI estate.
The instinct to treat vendor AI differently is understandable. You didn't build it, you can't inspect its weights and your procurement team already ran a security questionnaire. That feels like due diligence.
It isn't.
When a vendor's AI model produces a biased output, flags a legitimate customer as fraudulent or makes a decision that cannot be explained to a regulator, the legal and reputational exposure belongs to the deployer. And the deployer is your organisation. The EU AI Act is explicit on this point: organisations that deploy AI systems bear accountability for how those systems operate, regardless of who built the underlying model. Australia's regulatory trajectory, shaped in part by APRA's CPS 230 on operational resilience and the government's ongoing AI policy development, is heading the same way.
Even when vendors supply the models, data and tools, organisations remain accountable for outcomes, compliance and trust. The fact that a third party made the choices doesn't transfer the liability.
Traditional third-party risk management was built to evaluate financial viability, data security and contractual compliance. A biased vendor AI model can trigger the same fallout as a data breach, but most TPRM frameworks weren't designed to catch it.
A security questionnaire is not an AI governance assessment. So what does a genuine assessment of an AI vendor look like?
Model behaviour and explainability. Can the vendor demonstrate how their model makes decisions? Can they show what training data was used and whether bias testing has been conducted? For high-stakes use cases (e.g. credit, hiring, clinical triage) the answer to these questions is not optional.
Data handling and jurisdiction. Where is data processed? Where are models trained or fine-tuned? Does training involve your data or aggregated customer data from multiple clients? Australian organisations dealing with personal information need to understand whether vendor AI processing is consistent with the Privacy Act and any applicable sector requirements.
Model drift and version management. AI models change. Vendors update them, retrain them and sometimes change behaviour materially without formal notice. Your assessment needs to understand how the vendor manages model versioning, what change notification commitments exist and what testing occurs before updates are pushed to production.
Reliability, availability and support. AI failures are not always binary. A model that degrades gradually, producing increasingly unreliable outputs without obvious errors, is harder to detect than a system outage. What SLAs exist? How does the vendor monitor and report on model performance?
Alignment with frameworks. ISO 42001 broadens the scope of third-party risk management by introducing specific controls for AI systems managed by vendors, suppliers and partners. Asking vendors to demonstrate alignment with ISO 42001 or the NIST AI Risk Management Framework is increasingly a reasonable expectation, particularly for higher-risk use cases. Microsoft and Google have already achieved ISO 42001 certification and increasingly, organisations are passing that expectation downstream to their own vendors.
The most effective approach is not a separate vendor AI programme running in parallel to your internal governance. It is an extension of the same one.
This means vendor AI systems should sit in your AI register alongside internally built systems. They should be assessed using the same risk dimensions (e.g. legal, privacy, security, operational) with adjustments for the limited visibility you have into a third-party system. They should be assigned owners. They should be monitored on an ongoing basis, not reviewed once and filed.
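To make this concrete, a register entry for a vendor system might carry the same fields as an internal one, plus an origin flag. This is an illustrative sketch only: the field names, the hypothetical vendor tool and the schema are assumptions, not a prescribed format.

```python
from dataclasses import dataclass

# Illustrative sketch of a shared AI register entry.
# Field names and values are hypothetical, not a prescribed schema.

@dataclass
class AIRegisterEntry:
    name: str
    origin: str             # "internal" or "vendor" - same register either way
    owner: str              # accountable business or technical owner
    use_case: str
    risk_dimensions: dict   # same dimensions as internal systems
    monitoring_cadence: str

# Hypothetical vendor tool registered alongside internal builds
entry = AIRegisterEntry(
    name="CandidateScreen Pro",          # hypothetical product name
    origin="vendor",
    owner="Head of Talent Acquisition",  # the use-case owner, not procurement
    use_case="CV screening and candidate ranking",
    risk_dimensions={"legal": "high", "privacy": "high",
                     "security": "medium", "operational": "medium"},
    monitoring_cadence="quarterly",
)
print(entry.origin)  # vendor systems carry the same structure as internal ones
```

The point of the shared structure is that a vendor system cannot fall through the cracks simply because it entered via procurement rather than a build pipeline.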
Traditional risk management models are no longer sufficient for identifying third-party AI risk. AI introduces new risks: biases, hallucinations, model drift and changes in the supply chain.
The practical implication is that your vendor risk tiering needs to reflect AI-specific factors, not just the traditional criteria of spend value or data access. Key factors worth weighting in your tiering model include:
How consequential the decisions the vendor's AI makes or informs are.
The sensitivity and jurisdiction of the data the vendor processes.
The degree of autonomy the system has versus human review.
How explainable the vendor's model is, and whether bias testing evidence exists.
How frequently the vendor changes the model, and what change notification commitments exist.
Vendors that score high across these dimensions warrant a detailed assessment, contractual protections and a defined monitoring cadence. Lower-risk vendors can be handled through lighter-touch reviews with periodic reassessment triggers.
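A tiering model of this kind can be sketched as a simple weighted score. The factor names, weights and tier thresholds below are hypothetical placeholders to show the mechanics, not a prescribed scoring scheme.

```python
# Illustrative sketch only: factor names, weights and tier thresholds
# are hypothetical, not a prescribed scheme.

WEIGHTS = {
    "decision_impact": 0.30,     # how consequential the AI-informed decisions are
    "data_sensitivity": 0.25,    # personal or regulated data involved
    "autonomy": 0.20,            # fully automated vs human-in-the-loop
    "explainability_gap": 0.15,  # how opaque the vendor's model is
    "change_frequency": 0.10,    # how often the vendor updates the model
}

def tier_vendor(scores: dict) -> str:
    """Scores run 1 (low) to 5 (high) per factor; returns a review tier."""
    total = sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)
    if total >= 4.0:
        return "Tier 1: detailed assessment, contractual protections, monitoring cadence"
    if total >= 2.5:
        return "Tier 2: standard assessment with periodic reassessment triggers"
    return "Tier 3: lighter-touch review"

# Example: a hypothetical HR screening tool handling sensitive data
# with opaque candidate scoring
print(tier_vendor({
    "decision_impact": 5,
    "data_sensitivity": 5,
    "autonomy": 3,
    "explainability_gap": 4,
    "change_frequency": 3,
}))
```

Whatever the exact weights, the design point is that the tier is driven by what the AI does and to whom, not by contract value.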
This is where most programmes break down. Due diligence at onboarding is necessary but not sufficient. A vendor's AI practices at the time of procurement may look nothing like their practices eighteen months later.
Continuous vendor AI governance requires three things.
Contractual protections that create ongoing obligations. Your contracts should require vendors to notify you of material changes to AI models or training data, prohibit the use of your data to train models without explicit consent, provide audit rights or documentation on request for high-risk systems and include specific commitments around bias mitigation and output reliability. Many standard SaaS agreements say nothing about any of this. Closing that gap is a governance task, not just a legal one.
Structured monitoring triggers. You do not need to reassess every vendor every quarter. You do need a system for identifying when reassessment is warranted: a model update, a regulatory change, a public AI incident involving the vendor, a change in how your organisation is using the tool or a performance signal that suggests model drift. Watch for headlines involving AI failures, lawsuits or ethical violations just as you would data breaches or regulatory actions.
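The trigger set itself can be as simple as a named list checked against observed events. This is a minimal sketch; the trigger names are examples drawn from the categories above, not a canonical taxonomy.

```python
# Illustrative sketch: trigger names are examples, not a canonical taxonomy.

REASSESSMENT_TRIGGERS = {
    "model_update",        # vendor ships a material model or version change
    "regulatory_change",   # new obligation touching the use case
    "public_incident",     # AI failure, lawsuit or ethics issue at the vendor
    "usage_change",        # your organisation starts using the tool differently
    "performance_signal",  # output quality metrics suggest model drift
}

def needs_reassessment(observed_events: set) -> bool:
    """True if any observed event matches a defined reassessment trigger."""
    return bool(observed_events & REASSESSMENT_TRIGGERS)

print(needs_reassessment({"model_update"}))     # a defined trigger fires
print(needs_reassessment({"routine_invoice"}))  # routine events do not
```

The value of writing triggers down is that reassessment becomes an event-driven obligation rather than a calendar item that slips.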
Defined ownership. Someone in your organisation needs to own the vendor AI relationship from a governance perspective. This is not the procurement manager who ran the original tender. It is the business or technical owner of the use case the vendor's AI is supporting, with clear accountability to escalate concerns and trigger reassessments when required.
The worst version of AI vendor governance is a questionnaire sent at onboarding and never revisited. This creates a false sense of control while leaving you genuinely exposed.
Existing oversight mechanisms such as SOC 2 reports or generalised risk questionnaires often lack the specificity needed for organisations to assess how the vendor is using AI, what data it relies on and whether adequate controls exist.
The solution is not more paperwork. It is a working system: a register where vendor AI sits alongside internal AI, risk scores reviewed against defined triggers and monitoring proportionate to the risk each vendor represents. This does not require a large team. It requires a consistent process and the discipline to apply it.
For Australian organisations navigating AI vendor governance, ISO 42001 provides a credible and practical reference point. Its third-party AI controls are not prescriptive in a way that creates bureaucracy; they provide a structured basis for scoping what you assess, how you assess it and how often you review it. Aligning your vendor governance approach to ISO 42001 also strengthens your posture under CPS 230 and positions you well for whatever regulatory requirements follow Australia's continuing AI policy development.
We work with organisations that have invested significantly in governing internal AI, only to discover, when they look closely, that the majority of their active AI risk sits with vendors they assessed once and haven't revisited.
The principle that should guide vendor AI governance is simple: if a system is making or informing decisions in your organisation, it needs to be governed. It does not matter who built it. The business owns the outcome.
Building vendor AI governance as an extension of your internal AI estate (e.g. same register, same risk framework, same monitoring discipline) is both more efficient and more effective than treating it as a separate compliance programme. It closes the gap between the AI you can see and the AI you are actually accountable for.
In Part 8, we tackle a problem that becomes unavoidable once AI use case volumes grow: if you are governing AI with purely manual effort, you will always lag behind AI adoption. We explore how AI-assisted governance works in practice: from automated pre-screening of use cases and suggested risk scores based on patterns, to auto-tagging of risk types and framework requirements. We also examine the critical design principle of keeping humans in the loop for oversight and judgement, not replacing them, and how AI support in governance frees your experts to focus on the decisions that actually matter.
AI Governance — Trusenta's AI Governance product allows you to register vendor AI systems alongside internal builds in a centralised register, conduct structured multi-dimensional risk assessments and monitor your full AI portfolio, not just the systems your team built. If your current governance programme stops at the edge of what you control, this closes that gap.
Risk Management — Managing vendor AI risk requires an AI-native risk taxonomy that covers model drift, output reliability and third-party data risks, not just the generic operational and IT risk categories most TPRM frameworks use. Trusenta's Risk Management product provides exactly that, with control linkage and treatment tracking that makes vendor risk visible and manageable.
AI Governance Enterprise — For organisations with significant vendor AI exposure across multiple business units or jurisdictions, Trusenta's AI Governance Enterprise service includes a third-party AI governance framework as a core deliverable, designed to integrate vendor oversight into your broader governance operating model from the start.
Governing the AI your organisation builds while leaving vendor AI on a static checklist is not a governance programme; it is a gap with paperwork around it. The accountability for AI outcomes sits with the deploying organisation, full stop. That accountability demands the same structured oversight for vendor systems as it does for internal ones: a register entry, a risk assessment, a defined owner and a monitoring cadence proportionate to the risk. Organisations that extend their AI estate governance to cover everything they deploy, regardless of who built it, will be far better placed when regulators, auditors and customers start asking the questions that matter.
Share this with your procurement or vendor risk colleagues. The sooner AI vendor governance is treated as an enterprise risk problem rather than a procurement one, the better.
#VendorRisk #ThirdPartyRisk #AIGovernance #TechStack #TRUSENTA.IO
