
Karen Hao's third critique of AI empires is that they monopolise knowledge production, capturing the researchers who would otherwise evaluate the technology independently and filtering public understanding of what AI can and cannot do. Inside most enterprises, a parallel information problem plays out as shadow AI: ungoverned tools making consequential decisions that nobody in the governance function can see, evaluate or account for.

Karen Hao's third critique of AI empires, after data extraction and labour exploitation, is the monopolisation of knowledge production. AI companies, she argues in the 26 March 2026 Diary Of A CEO episode, employ or fund the majority of scientists who would otherwise independently evaluate AI capabilities and limitations. The information available to governments, regulators, civil society and enterprises about what AI systems can and cannot do is not independent. It is filtered through the institutions that built the technology and have a commercial interest in controlling how it is understood.
This is a specific and serious governance problem at the industry level. It is also a pattern that has a precise internal equivalent inside most Australian enterprises. The ungoverned AI tools operating across business units, which nobody in the governance function can see, classify or evaluate, create exactly the same information asymmetry inside your organisation that Hao documents at the industry level. Decisions are being made. The people who should be accountable for those decisions do not have the information they need to govern them.
This is Part 3 of our series on what Karen Hao got right. Part 1 examined the AGI definition problem and the case for building internal governance capability. Part 2 addressed the vendor governance risks that flow from Hao's documentation of AI's hidden supply chains. This post addresses what the knowledge monopoly problem means for shadow AI governance inside Australian enterprises.
Hao describes three mechanisms through which AI companies control what is known about the technology they build.
They employ most of the researchers. The scientists with the deepest understanding of AI system capabilities, failure modes and risks are mostly employed by the companies building those systems. Independent academic AI research exists but is vastly underfunded relative to the research budgets of the major AI companies. This creates a structural filter on what knowledge is produced and what questions are asked.
They control access to models and data. Research on AI systems requires access to those systems and to the data they were trained on. AI companies grant or withhold that access. The research that gets done is, to a significant extent, the research the companies permit to be done.
They shape regulatory conversations. Hao documents how AI company executives present to Congress, to the European Parliament and to Australian government bodies with narratives calibrated to shape the regulatory outcome they prefer. The expertise asymmetry between legislators and AI companies gives those companies enormous influence over the frameworks designed to govern them.
The practical consequence for enterprise governance is that the information your organisation receives from AI vendors about what their systems do, how reliably they do it and what the failure modes are is not independently verified. It is vendor-curated. Governance frameworks built on vendor claims, without independent risk assessment, are built on a foundation that Hao's reporting gives substantial reason to question.
The knowledge monopoly Hao describes at the industry level has an almost identical structure inside most enterprises. AI tools are being deployed by business units without the knowledge of governance teams. Decisions are being made by systems that the people responsible for risk management cannot see, evaluate or account for.
This is the shadow AI problem. And a January 2026 survey found it in 98% of organisations. Three in four CISOs have already discovered unsanctioned AI tools running in their environments, many with embedded credentials and elevated system access that nobody in the governance function was tracking.
The information asymmetry inside your organisation is, in practice, the same problem Hao is describing at the industry level. The people who should be accountable for AI decisions do not have the information they need, because the decisions are being made outside the visibility of governance systems. The people producing those decisions (business unit leaders and individual employees using unsanctioned tools) are not withholding information maliciously. The governance infrastructure that would capture it simply does not exist.
Hao's prescription for the industry-level knowledge problem is external pressure and bottom-up governance from those affected by AI decisions. The prescription for the enterprise-level knowledge problem is internal: build the visibility infrastructure that makes shadow AI governable.
The information asymmetry created by shadow AI is not just an operational risk. It is a compliance problem under frameworks Australian organisations are already accountable for.
The Privacy Act automated decision-making obligations arriving in December 2026 require disclosure about AI systems making or influencing decisions about individuals. You cannot disclose what you do not know exists. An AI tool operating in a business unit without governance visibility is creating disclosure obligations that cannot be met, because the governance team does not have the information needed to meet them.
APRA-regulated entities have governance expectations that include understanding and documenting the AI systems operating within their risk perimeter. The governance expectation is not limited to IT-approved systems. It extends to the AI operating environment of the institution. Shadow AI inside a regulated entity is a governance gap regardless of whether IT knew about it.
The Guidance for AI Adoption published by the National AI Centre applies a whole-of-lifecycle principle: governance from intake through deployment through ongoing monitoring. Shadow AI bypasses the intake stage entirely and has no ongoing monitoring because no governance structure knows it is there.
The response to the shadow AI information asymmetry is the same as Hao's response to the industry knowledge monopoly: create independent visibility rather than relying on the self-reporting of those with an interest in limited disclosure.
In the enterprise context, that means four things your organisation needs to build as a standing capability, not a one-off project. A minimal sketch of how the four fit together follows them.
An AI inventory that goes beyond what IT approved. That means surveying business units directly, auditing SaaS platforms for embedded AI features and asking vendors to disclose what AI capabilities exist in contracted services. The inventory is the prerequisite for everything else. You cannot govern what you cannot see, and you cannot see what you have not systematically looked for.
A risk classification applied to every use case in the inventory. The classification determines what oversight, documentation and monitoring each system requires. A tool that summarises internal meeting notes needs different treatment from one influencing credit decisions or employment outcomes. The classification also identifies which systems create the most significant Privacy Act ADM disclosure obligations, so compliance preparation can be prioritised.
A structured intake process that intercepts new AI initiatives before deployment. This is the control that prevents the shadow AI inventory from regenerating. Without it, the discovery exercise becomes an annual catch-up rather than an ongoing governance control. The intake process does not need to be burdensome. It needs to be real and consistently followed.
Ongoing monitoring of deployed systems. The Netskope 2026 data found that the average enterprise experiences 223 data policy violations per month related to AI usage. Monitoring converts governance from a paper exercise into something that actually manages risk, catching problems before they become incidents.
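To make those four capabilities concrete, here is a minimal sketch of how an inventory entry, a risk classification and an intake gate can hang together. It is illustrative only: the field names, the three-tier taxonomy and the classification rule are assumptions made for the example, not drawn from any specific regulatory framework or from Trusenta's product.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers only; a real taxonomy will be richer."""
    LOW = "low"        # e.g. internal meeting-note summarisation
    MEDIUM = "medium"  # e.g. customer-facing drafting tools
    HIGH = "high"      # e.g. credit, employment or pricing decisions


@dataclass
class AIUseCase:
    """One row in the AI inventory: enough to see, classify and monitor a tool."""
    name: str
    business_unit: str
    vendor: str
    description: str
    affects_individuals: bool    # drives Privacy Act ADM disclosure priority
    influences_decisions: bool   # summarisation vs. decision influence
    it_approved: bool = False    # False = discovered shadow AI
    risk_tier: RiskTier | None = None
    intake_completed: bool = False
    last_reviewed: date | None = None


def classify(use_case: AIUseCase) -> RiskTier:
    """Toy classification rule: decisions about individuals rank highest."""
    if use_case.influences_decisions and use_case.affects_individuals:
        return RiskTier.HIGH
    if use_case.influences_decisions or use_case.affects_individuals:
        return RiskTier.MEDIUM
    return RiskTier.LOW


def intake_gate(use_case: AIUseCase) -> bool:
    """The control that stops the shadow inventory regenerating:
    nothing deploys without classification and a completed intake record."""
    use_case.risk_tier = classify(use_case)
    return use_case.intake_completed and use_case.it_approved


# A discovered shadow AI tool: classified, then routed back through intake.
discovered = AIUseCase(
    name="Contract clause summariser",
    business_unit="Procurement",
    vendor="Unknown SaaS plug-in",
    description="Summarises supplier contracts in the browser",
    affects_individuals=False,
    influences_decisions=True,
)

if not intake_gate(discovered):
    print(f"{discovered.name}: tier={discovered.risk_tier.value}, blocked pending intake")
```

The design point is that discovery, classification and intake all write to the same record, so the register stays the single source of the visibility the governance team lacks today rather than splintering into separate spreadsheets.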
Hao's core argument is that the public should not have to rely on AI companies to understand AI. That knowledge should be independently produced, independently verified and accessible to the communities affected by AI decisions. At the enterprise level, the equivalent argument is that governance teams should not have to rely on business units voluntarily disclosing what AI they are using. The governance infrastructure should create visibility systematically, as an organisational capability, rather than depending on information flows that are inconsistent at best and absent at worst.
The organisations that treat shadow AI governance as a core internal capability, one that runs continuously and improves over time, will have the information they need to govern AI decisions when it matters. The organisations that do not will be in the position Hao describes regulators occupying: making important decisions about AI with information filtered through the institutions they are supposed to be governing.
AI Governance: Trusenta's AI Governance platform provides the use-case intake and inventory infrastructure that creates systematic visibility across an organisation's AI landscape, replacing the information asymmetry of shadow AI with a governed, documented and continuously maintained register of what AI is running and how it is classified.
Risk Management: Once visible, AI systems need risk classification with documented treatment plans. Trusenta's Risk Management module provides the AI-specific taxonomy and scoring methodology to do this consistently across the portfolio, including the vendor supply chain and data provenance dimensions Hao's reporting highlights.
AI Governance Foundations: For organisations that need to build governance infrastructure from the ground up, this 10-day engagement establishes the intake processes, risk classification framework and accountability structures that make shadow AI governable from the outset rather than after problems surface.
