Regulating the Unregulatable: Applying Cybernetic Principles to AI Governance


As artificial intelligence systems reshape entire industries and societal structures, we face an unprecedented regulatory challenge: how do you effectively govern systems that often exceed human comprehension in their complexity and decision-making processes?


Traditional compliance frameworks, designed for predictable industrial processes and human-operated systems, are proving inadequate for the dynamic, emergent behaviors of modern AI. The rapid proliferation of AI across critical sectors—from healthcare diagnostics to financial trading, autonomous vehicles to criminal justice algorithms—demands a fundamental rethinking of how we approach regulatory design.


Yet most current AI governance efforts remain trapped in conventional compliance paradigms: reactive rule-making, checklist-driven assessments, and oversight mechanisms that struggle to keep pace with technological innovation.


This regulatory lag isn't merely a matter of bureaucratic inertia. It reflects a deeper challenge rooted in the nature of AI systems themselves. Unlike traditional engineered systems with predictable inputs and outputs, AI systems exhibit emergent properties, adapt through learning, and often operate through decision pathways that remain opaque even to their creators.


The answer lies in applying cybernetic principles, the science of governance and control, to create regulatory frameworks that can match the complexity and adaptability of the systems they oversee. By understanding regulation as a cybernetic function, one requiring requisite variety, accurate models of the systems being regulated, and ethical accountability, we can design AI governance systems that are both effective and ethical.
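The requisite variety principle (Ashby's Law) has a simple counting form: a regulator cannot reduce the variety of outcomes below the variety of disturbances divided by the variety of its own responses. The toy sketch below, which uses hypothetical numbers purely for illustration, shows why a checklist-style regulator with only a handful of fixed responses cannot fully control a system with many distinct failure modes:

```python
from math import ceil

def min_outcome_variety(n_disturbances: int, n_responses: int) -> int:
    """Counting form of Ashby's Law of Requisite Variety:
    even a perfectly informed regulator cannot reduce the number of
    distinct outcomes below n_disturbances / n_responses.
    Fewer available responses means more uncontrolled outcomes."""
    return ceil(n_disturbances / n_responses)

# Hypothetical example: a checklist regime with 3 fixed responses
# facing 12 distinct AI failure modes still admits at least 4
# different outcomes, no matter how well the checklist is applied.
print(min_outcome_variety(12, 3))   # 4

# Only matching variety (12 responses for 12 disturbances) can,
# in principle, pin the system to a single desired outcome.
print(min_outcome_variety(12, 12))  # 1
```

The point is not the arithmetic but the design implication: governance frameworks need at least as much adaptive range as the AI systems they oversee.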


The stakes couldn't be higher. Without deliberately designing ethical requirements into our AI regulatory systems, we risk creating governance frameworks that optimize for efficiency, innovation, or economic advantage while systematically eroding the safety, fairness, and human values we seek to protect.


What regulatory approaches have you seen that effectively address AI's unique challenges?



Ray Laqua, P.Eng., PMP, is Chair of the AI Committee for Engineers for the Profession (E4P), Co-founder of ProfessionalEngineers.AI, and Founder of Lean Compliance.


