AI Regulating AI: Are we pouring fuel on the fire?
Raimund Laqua, P.Eng., PMP
Note: A link to my strategy briefing document is at the end of this post.

About a year ago, I heard an AI expert suggest that we might need AI to control AI. My immediate reaction? That's nonsense.
Why would you control something uncertain with more uncertainty? It seemed like doubling down on the problem rather than solving it.
Turns out I was wrong. Or at least, I was asking the wrong question.
The Problem That Won't Go Away
I'm an engineer. I think about systems.
And when you look at AI systems through that lens, you run into a problem that won't go away no matter how you approach it: AI systems can generate millions of outputs with infinite variety across contexts that change faster than any human can track, let alone review.
This isn't something you fix by hiring more compliance people. The variety of states an AI system can occupy—all the possible outputs it could generate across all possible inputs—grows combinatorially.
A compliance officer reviewing dozens of interactions per day simply cannot match an AI system generating millions of interactions per day.
We're trying to regulate infinite variety with finite methods. The math doesn't work.
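To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The volumes are illustrative assumptions, not measurements from any particular deployment:

```python
# Illustrative numbers only -- assumptions, not measurements.
ai_interactions_per_day = 1_000_000   # outputs the AI system generates daily
reviews_per_officer_per_day = 50      # interactions one officer can actually review
compliance_officers = 20              # size of the review team

human_capacity = reviews_per_officer_per_day * compliance_officers
coverage = human_capacity / ai_interactions_per_day

print(f"Interactions reviewed: {human_capacity}")                              # 1000
print(f"Coverage: {coverage:.2%}")                                             # 0.10%
print(f"Unreviewed interactions: {ai_interactions_per_day - human_capacity}")  # 999000
```

Even with a generous review team, coverage is a tenth of a percent. The other 999,000 interactions ship unexamined, and growing the team linearly never closes a gap that grows combinatorially.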
What I Missed About That AI Expert
That expert was actually right, though he probably didn't explain it in these terms. W. Ross Ashby figured this out decades ago with his Law of Requisite Variety: if you want to control a system, your regulator needs variety equal to or greater than what you're trying to control.
If your AI system has variety X, your regulatory system needs variety ≥ X. Humans don't have that variety. We're finite. AI regulators can potentially match it.
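For the mathematically inclined, the law has a standard quantitative form (this is the usual information-theoretic reading of Ashby, not something specific to AI):

$$
H(O) \;\geq\; H(D) - H(R)
$$

Here H(D) is the variety (entropy) of the disturbances you face, H(R) is the variety of your regulator, and H(O) is the variety of outcomes that still slips through. The only way to drive H(O) down is to push H(R) up: only variety can absorb variety.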
But—and this is important—my initial skepticism wasn't completely off base. We absolutely should not hand over value judgments and ethical decisions to AI systems.
The real question isn't "should AI control AI instead of humans?" It's "where do humans exercise judgment in a control system that needs to operate at AI speeds?"
The Answer Is Both Yes and No
This is what the briefing document I've written gets into. Do we need AI to regulate AI? Yes and no, depending on what you mean by "regulate."
Cybernetic theory breaks regulation into three orders:
First-order is the operational stuff—watching outputs, catching violations, stopping bad things in real time. This is where AI has to regulate AI because humans lack the requisite variety. We just can't keep up.
Second-order is watching the watchers—making sure those first-order controls are actually working, adjusting them when things change. Both AI and humans work here, with humans providing oversight.
Third-order is the values and ethics layer—deciding what we want, what tradeoffs we'll accept, what "good" even means. This is where human judgment isn't optional. These are value judgments that only humans can legitimately make.
So yes, we need AI to regulate AI where speed and scale matter. And no, we don't give up human authority—we put it where it belongs, at the values level, not trying to manually review every output or insert deterministic validators in the AI stream.
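The briefing goes into the architecture in more detail, but a minimal sketch of how the three orders might be separated in software looks something like this. The class and function names, and the checks themselves, are hypothetical placeholders rather than a reference implementation:

```python
from dataclasses import dataclass

# Third order: values and tradeoffs. Humans write this; machines only read it.
@dataclass
class Policy:
    blocked_topics: set      # what "unacceptable output" means here
    max_block_rate: float    # the escalation tradeoff the organization accepts

# First order: an automated check that runs on every output, at AI speed.
def first_order_check(output: str, policy: Policy) -> bool:
    """Return True if the output may be released."""
    return not any(topic in output.lower() for topic in policy.blocked_topics)

# Second order: watching the watcher. Is the first-order control behaving
# within the bounds the policy allows, or does it need human attention?
def second_order_review(decisions: list, policy: Policy) -> bool:
    """Return True if the control itself looks healthy."""
    block_rate = decisions.count(False) / max(len(decisions), 1)
    return block_rate <= policy.max_block_rate

# Humans set the policy; the layers below enforce it and monitor the enforcer.
policy = Policy(blocked_topics={"patient record"}, max_block_rate=0.05)
outputs = ["here is your summary", "patient record 4411 shows ..."]
decisions = [first_order_check(o, policy) for o in outputs]
print(second_order_review(decisions, policy))   # False -> escalate to humans
```

The structural point is the separation: the fast loop enforces, the middle loop checks the enforcer, and only the top layer, owned by humans, decides what counts as a violation and what rates are acceptable.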
Why This Actually Matters
This isn't theoretical. Organizations deploying AI systems have a duty of care to protect people from harm. When your control systems can't match the variety of what you're controlling, you can't fulfill that duty. There's a gap between your accountability and your capability.
Right now, most organizations are doing manual oversight—reviewing samples, running periodic audits, fixing things after problems happen. Meanwhile, thousands of interactions are happening that nobody sees. Problems spread before anyone notices.
We're creating documentation of our inability to regulate, not actual regulation.
The briefing lays out why AI regulating AI isn't a nice-to-have—it's the only way to get the variety you need to actually exercise duty of care. But it also explains why human governance over values can't be negotiated away. Technical systems can implement controls. They can't decide what values those controls should serve.
What I've Learned
I'm still skeptical when people claim AI will solve everything. But I'm not skeptical anymore about needing AI to regulate AI. That turns out to be grounded in cybernetic theory that's older than modern AI.
What matters is how we architect these control systems. AI providing the variety at operational speeds. Humans maintaining authority over values and ethics. Both doing what they're actually capable of doing.
If you're trying to figure out how to govern AI systems responsibly—how to meet your duty of care when AI operates faster and bigger than human oversight can match—my strategy briefing document explains the cybernetic principles and practical approaches you can use.
The Law of Requisite Variety isn't a suggestion. It's a constraint. We can acknowledge it and design accordingly, or we can keep pretending that manual oversight will somehow catch up. It won't.
Download my strategy briefing document here:
About the Author: Raimund Laqua, P.Eng., PMP, has over 30 years of experience in highly regulated industries including oil & gas, medical devices, pharmaceuticals, and others. He serves on OSPE's AI in Engineering committee, and is the AI Committee Chair for E4P. He is also co-founder of ProfessionalEngineers.AI.


