The public interest intermediary that AI governance is missing
Globally federated accountants have an enforceable public interest obligation. AI governance doesn't know it yet.
In January and February 2026 I was accepted into two intensive courses with BlueDot Impact: AGI Strategy, then Frontier AI Governance. Both have selective intakes and require a personal action plan - a specific intervention, the case for why it matters, and concrete next steps. I'm publishing summaries of both action plans here because I think the argument they make is one the accounting profession needs to hear, and because making the plans public invites others to help follow through.

I'm living and building at Network School, a community of techno-optimists designing the social, economic and governance infrastructure for internet-first countries (Network States). We all want AI to go well. Nobody here is trying to accidentally build Skynet. But wanting AI to go well and helping to make sure it does are two different things, which is why I did these courses.
The AGI Strategy plan starts from documented present failures - Australia's Robodebt scandal and overseas equivalents - and works forward: what minimum controls must exist before AI is allowed to touch a person's access to money in a tax-transfer or UBI context, at AGI speed and scale?
Every serious AI governance proposal assumes independent, trustworthy intermediaries exist somewhere in the system. None of them explain where those intermediaries come from or what enforceable obligation binds them. My Frontier AI Governance plan answers that question.
These action plans belong together as downstream and upstream interventions to help prevent predictable social harms. Please read them in order.
Action Plan 1: AGI Strategy
Summary
This plan builds defensive infrastructure for Australia's tax-transfer and welfare system - and a future UBI context - against the specific risk of AI being deployed in decision and enforcement workflows at machine speed and scale, before the safeguards exist to contain it.
The first output is a one-page minimum safeguards checklist for any automated system that can pause payments, reroute payments, raise debts, or recover money - written for the people who build these systems, approve them, and sign off on the controls.
I'm well placed to do this: 25 years as a public practitioner working directly with government agencies' systems, controls, and evidence, currently serving on a member advisory committee of a professional accounting body. That gives me both the technical and social vocabularies to write controls that engineers can implement, and a professional platform to put them in front of the practitioners who will deal with the consequences of these systems.
The first draft is underway. The goal is a published v0.2 with clear evidence requirements and a defined role for non-government professional oversight, tested with at least one blunt expert reviewer before it goes out.
This is downstream AI safety and AI Controls: it makes accountability and human authority verifiable in high-stakes money systems, so agentic automation cannot trigger a legitimacy crisis at scale when failures or abuse occur.
Strategic Assessment
Public trust in digital systems collapses when AI is capable enough to automate fraud and administrative enforcement at scale, faster than humans can review or contest it.
The specific threat I'm defending against is "Robodebt/Indue on steroids" - applied to Australia's current tax-transfer system, and eventually a UBI context: AI-driven government and contractor systems that can restrict, redirect, delay, or claw back people's access to money using automated decisions built on low-quality, mismatched, or adversarial inputs, with weak appeal paths and opaque accountability.
If there is no mandated human intervention and restoration plan, we get machine-speed exclusions and a legitimacy crisis when people realise there is no clear "who did this", no auditable chain of authority, and no fast way to restore access to funds relied upon for daily life. As AI systems become more capable, this becomes a balance-of-power problem: control over money access can lock people out of society entirely.
This is why my intervention sits in AI Controls and infrastructure defence rather than AI Alignment. The practical focus is two things: Proof-of-Controls with tamper-evident logging and continuous monitoring so automated actions can be verified after the fact; and Personhood and authorised delegation, meaning verified human sign-off before any agent takes a consequential action. Both must be unavoidable conditions, not optional features.
This is a distributed governance design - no institution should be expected to independently verify its own automated decisions, and government agencies are no exception. The gap is a practical threshold: a baseline and audit rubric that requires cryptographically verifiable control and verified human authority before any agent takes consequential actions with public funds.
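As a minimal sketch of what those two unavoidable conditions could look like in code - assuming a simple hash-chained log and a hard gate on human sign-off; the names `ActionLog` and `execute_consequential_action` are illustrative, not an existing standard:

```python
import hashlib
import json
import time


class ActionLog:
    """Tamper-evident log: each entry commits to the hash of the entry before it."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def append(self, record: dict) -> str:
        entry = {"prev": self.last_hash, "ts": time.time(), "record": record}
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append((digest, entry))
        self.last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the whole chain; any edited or deleted entry breaks it."""
        prev = "0" * 64
        for digest, entry in self.entries:
            recomputed = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != digest:
                return False
            prev = digest
        return True


def execute_consequential_action(log: ActionLog, action: dict, approver: str | None):
    """Gate: no verified human approver, no action - a condition, not a feature."""
    if not approver:
        raise PermissionError("blocked: consequential action has no verified human approver")
    log.append({"action": action, "approved_by": approver})
    # ...only now dispatch the action to the payment system...
```

Altering or deleting any past entry breaks `verify()`, which is what makes the log tamper-evident rather than merely logged, and the gate makes human authority a precondition rather than a setting someone can switch off.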
Downstream, defensive infrastructure turns "opaque government systems that target specific populations" into harder-to-abuse systems by limiting scale and making harmful automation reversible and attributable to humans. This will matter as AI capabilities rise.
Theory of Change
The deliverable is a short checklist for any automated system that can affect someone's access to government payments or tax-transfer money, with a clear evidence list: who approved the action, what data was used, what the system did, and how it can be checked. Tested with two or three people who have worked in government payments, audit, or procurement, it becomes a practical "no go-live unless this is in place" standard, one that independent professionals such as tax agents can also apply from outside the agency when the system affects their clients.
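As a sketch of what that evidence list could look like in practice - one field per question, and a go-live check that fails if any answer is missing; the schema and names are illustrative assumptions, not a published standard:

```python
from dataclasses import dataclass, fields


@dataclass
class EvidenceRecord:
    approved_by: str       # who approved the action (a named, verified human)
    data_sources: list     # what data was used, with provenance
    system_action: str     # what the system actually did
    check_reference: str   # how it can be checked (e.g. a tamper-evident log id)


def go_live_check(record: EvidenceRecord) -> bool:
    """'No go-live unless this is in place': every question must have an answer."""
    return all(getattr(record, f.name) for f in fields(record))
```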
The goal is traceable actions, fast challenges, and a clear process to restore access when the system is wrong. That is what reduces the Robodebt failure mode as systems become more agentic.
The riskiest assumption is whether people in governance roles will treat a concrete go-live requirement as more serious than general AI safety talk. I will test it by running those reviews and asking one question: would you use this as a go-live condition in your context? Then publish v1.1 with their edits.
This is designed as two-way accountability. Governments and vendors will keep pushing automation. Tax agents and public practice professionals deal with the fallout for real people; they understand the evidence trails and control failures. Giving them a defined role to check evidence and escalate problems early spreads accountability beyond the agency and protects the public trust that digital government systems depend on to function.
Concrete Commitments
The first output is a 500-word public post: minimum safeguards for automated decisions that affect access to public money, in the Australian context. One blunt expert review before it goes out, with a direct brief: what's unclear, what's unrealistic, what's missing. Then a revised version, for targeted distribution.
That is the test. If the people who build, approve, and audit these systems wonât use it as a go-live condition, it needs more work. If they will, it becomes the foundation for a broader standard.
Action Plan 2: Frontier AI Governance
Summary
I am working to establish global public accountants - bound by a codified, enforceable public interest obligation - as a missing accountability layer in AI systems that handle financial data, transactions, distributions and reporting across private enterprise and public services.
Every serious governance proposal examined in this course implicitly assumes trustworthy intermediaries exist. Almost none explain where they come from or what obligation binds them. The International Federation of Accountants (IFAC) Code of Ethics already does this: it requires accountants of member bodies globally to act in the public interest, not merely for employer or client. That obligation is enforceable, carries reputational and disciplinary consequences, and travels across jurisdictions. This is structurally different from voluntary AI ethics, corporate ESG, or government oversight that lacks technical capacity.
I am building this from a specific position: professional accounting credentials with a global outlook, technical fluency in AI and decentralised systems, and an embedded residency at Network School where the academy and lab launch in June 2026. The commitment for Q2 2026 is the founding document for the Accounting Technologist pathway at CREDU Academy, and the AI Safety and Governance Lab alongside it.
As an AI Safety Action Plan, this addresses the "constrain and withstand" defence layers as we begin building institutional capacity to verify, assure, and intervene before harm scales.
What is the public interest obligation?
The International Federation of Accountants currently represents over 188 member organisations across 143 jurisdictions. The accountancy profession is primarily self-regulating: it sets its own standards, enforces its own ethics, and disciplines its own members, with the degree of external government oversight varying by jurisdiction.
At the centre of that self-regulation is a defined public interest obligation.
IFAC's Policy Position 5 defines the public interest as the net benefits derived for, and procedural rigor employed on behalf of, all society - not the client, not the employer, not the state. Every qualified accountant operating under an IFAC member body carries that obligation as a condition of professional standing, across borders and across the systems they work in. It carries reputational, disciplinary and licensing consequences.
That is what makes it structurally different from a voluntary ethics pledge or a market commitment to AI safety.
Strategic Assessment
The failure mode I am defending against is designer-operator-judge collapse: where the same entity builds and configures the AI, evaluates whether it is safe and fit for purpose, deploys it in decisions affecting people's money, identity and rights, and judges complaints when it goes wrong, with no independent professional verification at any stage.
Large firms are already recreating this pattern by selling AI consulting to clients they audit, the structural conflict that produced textbook ethics case studies. More recently, the same conflict was exposed when a top-tier firm contracting with the Australian Taxation Office misused sensitive government data. Without intervention, AI governance in financial systems defaults to commercially conflicted self-certification.
The pattern of harm when this verification layer is absent is already documented across jurisdictions. Australia's Robodebt Royal Commission showed exactly what scales from this failure: automated decisions without qualified oversight, contractors without public-interest duties, no independent body to verify and stop harm, and government designing, running, and judging complaints about its own system.
The Indue cashless debit card showed the same pattern: private contractors with closed systems, opaque decision rules, fragmented appeals, no portable accountability, and slow freedom-of-information processes.
The Netherlands' toeslagenaffaire wrongly flagged tens of thousands of families as welfare fraudsters through automated systems, causing financial ruin with no fast correction pathway. The UK Post Office Horizon scandal saw a flawed accounting system generate false evidence leading to hundreds of wrongful prosecutions over two decades - because no independent professional verification layer existed to challenge the system's outputs. These are not edge cases. They are the documented baseline for what automated systems do without independent assurance, and before AI multiplies the speed, scale, and cross-border reach.
Extend that to AI systems deciding who gets paid, who gets credit, who receives benefits, who can seek remedies and who is protected. The accounting profession already has 3.5 million qualified practitioners bound by the strongest public-interest obligation in any profession, and nobody is yet deploying that mandate in AI governance.
Consider a scenario where millions have lost their jobs and a Universal Basic Income is administered through AI systems. Eligibility is not just processed but judged, with payment decisions made by systems that adapt their own logic faster than any normal audit process can check. Exceptions are handled by a system that learns from challenges and closes them off. Millions of people depend on those payments for rent, food, and daily life.
Consider what happens when those systems make compounding errors for thousands, then millions, of people. The government agency responsible cannot explain the decision logic because the system has moved ahead of it. The non-government contractor who delivered the system, and the lab behind the model, are not obliged to clean up the mess. And there is no independent professional with both citizen-side standing and the duty to say: this decision is wrong and I can prove it - and here is who should have governed its execution in the first place.
In the absence of Jedi knights and Time Lords, global public accountants are just what we need here. The obligation already exists, the network already spans jurisdictions, and the enforcement mechanism is already in place.
Nobody has deployed it yet.
The window is 2-4 years before entrenched path dependency makes reform significantly harder.
If a system cannot reform fast enough, something purpose-built will make it obsolete.
Theory of Change
The goal is a proof of concept for a new professional class: technically fluent, ethically bound, structurally independent, and publicly trusted. I can drive this at Network School through the Accounting Technologist Faculty and AI Safety Lab, with the founding framework published via credu.academy and a prototype Frontier AI Assurance Framework that refers to IFAC's existing standards.
If that proof of concept is credible, Proof-of-Control standards and IFAC-anchored assurance frameworks can enter professional practice before the norm-shaping window closes. That fills the trusted intermediary gap every governance proposal assumes but none builds.
Three things have to be true: Network School formalises the lab arrangement, at least one IFAC member body engages with the prototype, and a first cohort of qualified accountants can be recruited.
The riskiest assumption is whether the profession moves fast enough. The answer to that is to build outside existing institutions first and make the proof of concept impossible to ignore. Institutions adopt what they cannot afford to dismiss.
Concrete Commitments - Q2 2026
Publish founding document (3,000-5,000 words): the case for global public accountants as a missing AI governance actor, the IFAC Code as a distributed, jurisdiction-spanning human obligation network that AI governance currently lacks, and Network School as the prototype environment. Via credu.academy.
Launch the AI Safety Lab: formalise the Network School presence, establish a weekly builders group, integrated with the CREDU accounting technologists' allotment.
Begin targeted outreach with priority conversations, including but not limited to:
IPA Global - the natural champion for an accounting technologist pathway to its Global Certificate of Public Accounting (GCPA), with recognition of prior learning; and to position this initiative within IPA's policy and advocacy agenda, not just its education agenda.
Senior practitioners working at the intersection of digital identity, verifiable credentials, and government procurement reform in Australia - people whose work on citizen-owned identity and accountable automated systems connects directly to the problems this initiative addresses.
Advanced AI Society - to propose the working relationship between the Proof-of-Control technical infrastructure they are leading and the professional assurance system this initiative introduces.
Third-party AI evaluators focused on independent model assessment - to explore how professional assurance standards can be mapped onto technical model evaluation, and how the two fields can strengthen each other.
Invite ten accountants to the first CREDU cohort, launching June 2026.
The Leverage Point
The technical infrastructure for accountable AI is proof-of-control, proof-of-personhood, and tamper-evident agent registries. Advanced AI Society and their blockchain and cryptography partners are building exactly this, because traceability, immutability, and verification are problems the crypto and decentralised governance community has already solved and can apply to AI.
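To make the registry idea concrete, here is a minimal sketch of a delegation check - using a standard-library HMAC where a real registry would use public-key signatures and tamper-evident storage; the registry layout and names are my illustrative assumptions, not Advanced AI Society's actual design:

```python
import hashlib
import hmac

# The registry binds each agent to a registered, accountable human principal.
AGENT_REGISTRY = {
    "payments-agent-01": {"principal": "a.practitioner@example.org", "key": b"delegation-key"},
}


def verify_delegation(agent_id: str, action: bytes, signature: bytes) -> str:
    """Return the accountable human behind this action, or refuse it."""
    entry = AGENT_REGISTRY.get(agent_id)
    if entry is None:
        raise PermissionError(f"unregistered agent: {agent_id}")
    expected = hmac.new(entry["key"], action, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("invalid delegation signature: action not authorised")
    return entry["principal"]  # a named human answers for what the agent did
```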
The accounting profession's role is not to build it. It is to advance the adoption of appropriate internet architecture for safe AI, hold those who don't use it accountable, and be the human verification layer that creates legal imperative where voluntary compliance fails. Human professional controls are more expensive than programmable compliance. That cost is the incentive to get the technical infrastructure right.
Global public accountants, graduated as accounting technologists, are the Big Citizen advocates: the professional class that holds a public trust mandate alongside, but not above, government. They give insurers independent measurability, regulators an assurance standard to point to, and the public a named, accountable professional who answers for what an AI system did, in financial institutions, government procurement, and regulated service delivery.
Instead of relying on AI to self-align or governments and labs to do the right thing, this puts professional obligation and liability in the gap.
Electra Frost is an Australian international tax adviser and accountant with over 20 years in professional practice. She is a co-founder and director of Digital Playhouse Foundation Ltd and a long-term resident at Network School, Malaysia, where she is building CREDU - an accounting and tax technologist faculty and AI Safety Lab. If you are a technically fluent public accountant interested in the first CREDU cohort, or if you work in AI evaluation, digital identity, or government procurement and want to have this conversation, reach out to electra@electrafrost.com - or for CREDU apply here.