
Governing Large Language Models - A Cybernetic Approach to AI Compliance

I've been thinking a lot about promises lately.


Not the kind we make at year-end meetings, but the deeper promises organizations make when they deploy AI systems. Promises about safety, fairness, and accountability. Promises that become very real when something goes wrong.


The challenge with Large Language Models is that traditional compliance approaches assume you can audit the decision-making process. You write procedures, train people, create controls around logical steps you can inspect and verify.


But LLMs don't work that way. The "thinking" happens in a mathematical space we can't directly examine. You can't audit billions of neural weights the way you'd review a checklist.


This has led me back to some foundational work in cybernetics—ideas that help us think about governing systems we can't fully understand or predict.


A Cybernetic Approach to AI Compliance

Two insights have been particularly valuable:


First, trying to control a complex, adaptive system with rigid rules is like trying to hold water in your hands. The system will always find ways around static controls. Your governance needs to learn and adapt, or it becomes irrelevant quickly.


Second, there are different kinds of regulation happening at different levels. Some decisions can be automated effectively—checking inputs, classifying outputs, monitoring for drift. But the deeper questions about what outcomes we should permit, what risks we're willing to accept—those require human judgment. Not because the technology isn't advanced enough, but because those are fundamentally human choices about values and priorities.


Current regulatory frameworks seem to understand this intuitively, even if they don't say so explicitly. They assume technical controls operating under human oversight—automated compliance within human-defined boundaries.
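

To make that idea a little more concrete, here is a minimal sketch, assuming a plain-Python setup, of what "automated compliance within human-defined boundaries" might look like. Every name, threshold, and category below is an illustrative assumption, not a reference to any particular framework or product.

```python
# A minimal sketch: automated checks operate only inside boundaries that
# humans have set in advance. All fields, thresholds, and categories are
# illustrative assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Policy:
    """Human-defined boundaries; the automation reads these, never rewrites them."""
    blocked_topics: List[str] = field(default_factory=lambda: ["medical advice"])
    min_confidence: float = 0.6   # below this, a person decides
    max_drift: float = 0.2        # tolerated shift in output behaviour


@dataclass
class Decision:
    allowed: bool
    reason: str
    needs_human_review: bool = False


def check_input(prompt: str, policy: Policy) -> Decision:
    """Automated regulation of what goes in."""
    for topic in policy.blocked_topics:
        if topic in prompt.lower():
            return Decision(False, f"input touches blocked topic: {topic}")
    return Decision(True, "input within policy")


def check_output(confidence: float, policy: Policy) -> Decision:
    """Automated classification of what comes out; uncertainty escalates to people."""
    if confidence < policy.min_confidence:
        return Decision(True, "classifier unsure", needs_human_review=True)
    return Decision(True, "output within policy")


def check_drift(drift_score: float, policy: Policy) -> Decision:
    """Automated monitoring of the model's behaviour over time."""
    if drift_score > policy.max_drift:
        return Decision(True, "behaviour drifting beyond tolerance",
                        needs_human_review=True)
    return Decision(True, "drift within tolerance")
```

In this picture the automation only ever answers "is this inside the boundary?" Deciding where the boundary sits stays a human act: changing the Policy is a governance decision, not a code change.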


This changes how I think about AI governance. Instead of trying to make the black box transparent, we focus on governing what we can actually control: what goes in, which models we choose, what comes out. We build learning systems around the opacity rather than trying to eliminate it.
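

Here is a rough sketch of what building a learning system around the opacity could look like: the model itself stays a black box, the controls sit on its inputs and outputs, and human review outcomes feed back into the automated boundary. The model_fn, output_risk, human_review names and the crude threshold adjustment rule are all hypothetical, just to show the shape of the feedback loop.

```python
# A rough sketch of governing around the opacity: we never inspect the model's
# weights, only what goes in and what comes out, and human review decisions
# feed back into the automated boundary. All names and rules are hypothetical.

from typing import Callable, List, Tuple


def governed_call(prompt: str,
                  model_fn: Callable[[str], str],
                  output_risk: Callable[[str], float],
                  human_review: Callable[[str, str], bool],
                  review_log: List[bool],
                  risk_threshold: float) -> Tuple[str, float]:
    """Run one governed interaction; return the response and an updated threshold."""
    response = model_fn(prompt)      # the opaque part we do not try to open
    risk = output_risk(response)     # we regulate what we can actually observe

    if risk > risk_threshold:
        approved = human_review(prompt, response)   # human judgment at the boundary
        review_log.append(approved)

        # Feedback: if reviewers keep approving flagged outputs the boundary was
        # too tight, so relax it slightly; if they keep rejecting, tighten it.
        approval_rate = sum(review_log) / len(review_log)
        risk_threshold += 0.01 if approval_rate > 0.8 else -0.01

        if not approved:
            response = "Response withheld pending review."

    return response, risk_threshold
```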


For those of us working in regulated environments, this offers a more realistic path forward than waiting for "explainable AI" to solve our governance problems.


I've been working through these ideas in more detail—how cybernetic principles apply to AI governance, what this means for compliance frameworks, and how to implement these approaches in practice.


You can read more in my latest briefing note, which you can download here:



