
  • First Principles of Design: Necessary Variation

    If you work in quality or lean, you have been trained to treat variation as the enemy. Deming, Taguchi, Six Sigma — the entire discipline is built on reducing, controlling, and eliminating variation. And that discipline is not wrong. But it is incomplete. Without variation, you cannot have two of anything. If no variation were permitted — if every instance of a thing had to be absolutely identical in every respect — production would be impossible. Every piece of raw material is slightly different. Every cut, every weld, every assembly happens under slightly different conditions. Variation is not a defect in the manufacturing process. It is the precondition for manufacturing to exist at all. It is what makes multiplicity — multiple instances of the same thing — possible.

    The question was never whether to have variation. The question is which variation is necessary and which is not. And answering that question requires something that comes before any control chart or process capability study: you have to decide what the thing *is* — and what it is not.

    Identity: Deciding to Build This and Not That

    Before a single sketch is drawn, someone decides that the world needs a hammer and not a spoon. This is an ontological commitment — a decision about what will exist and what won't. It establishes the boundary between what you are building and what you are not building. That commitment carries a second obligation: defining what is essential for this thing to be this thing. A hammer requires a handle, a weighted head, a striking surface. Remove any of these and you no longer have a hammer. You have a stick, a paperweight, something else entirely. These are the characteristics without which the thing ceases to be what it was committed to be. Everything that follows depends on these choices.

    Multiplicity: Designing What This Is and What This Is Not

    With the essentials established, the engineer faces a design decision: what must be allowed to differ so that you can build more than one? You cannot use the same piece of steel twice. You cannot use the same piece of wood twice. Every unit requires its own instance of material, its own act of assembly, its own moment in time — and no two instances are identical. Head weight within a given range, handle length within a given tolerance, surface finish within acceptable limits. These are not concessions to imperfect manufacturing. They are what makes multiplicity possible. Without designed variation, you can build one thing on paper. You cannot produce it in the world. Without both — without defining the identity and the acceptable variation — you cannot produce a single unit, let alone a thousand.

    The Rub

    Here is where engineering demands expertise. Specify too precisely and you cannot build the thing. Real materials vary. Real processes drift. Real conditions fluctuate. If every tolerance is pushed to its theoretical limit, you have designed something that can only exist on paper — the variation inherent in parts, materials, and assembly will exceed what the specification allows. You will reject everything. You will build nothing. Specify too loosely and you build things that are not the thing. Units come off the line that technically pass inspection but fail in the field. You have non-conformances that you cannot call non-conformances, because the specification never drew the line clearly enough to say what conforms and what does not.

    The engineer's expertise lives in this tension: defining identity tightly enough that the thing remains itself, and defining variation broadly enough that it can actually be made. Every tolerance, every acceptance criterion, every specification range is a negotiation between the ideal and the achievable. Get it wrong in either direction and you lose. Over-constrain and production stops. Under-constrain and quality disappears.

    Why This Matters for Compliance

    In a previous post — Compliance and the Problem of Evil — I argued that every compliance failure is an absence: the privation of a good that ought to be present. But I left a question hanging: where does that positive definition come from? The design. The design is the positive definition. It declares both what something is and what it is not — the identity and the acceptable variation, the boundaries within which a thing remains itself and beyond which it becomes something else. Without both declarations, the concepts of defect, failure, and non-compliance have no anchor. A defect is not "something that looks wrong." It is variation outside the boundaries the design established. A safety failure is not "something bad happened." It is the absence of a capability the design required to be present.

    This is the bridge between engineering and compliance. The engineer designs the good — the identity *and* the necessary variation — and compliance is the discipline of sustaining both through production, operation, and change. Quality, safety, security, sustainability — each is a dimension of that design, a promise about what the thing will be, what it will not be, and what it will continue to be. No design, no identity. No identity, no boundaries. No boundaries, no way to tell necessary variation from unwanted variation — just randomness wearing a label.

    First Principles

    Engineering is about building things. But building always starts with a design — the act of defining what something is and what it is not, what must remain the same and what must be allowed to differ. This is what makes it possible to know what is a defect and what is not. What is safe and what is not. What is secure and what is not. What is compliant and what is not. Without the design — without defined identity and defined variation — none of these judgments have a foundation. They are opinions, not assessments.

    The first principle of design is knowing which variation to control and which to permit. Get that right, and every downstream judgment — quality, safety, security, sustainability — has a basis. Get it wrong, and you are either unable to build or unable to know what you have built. When it comes to design, you have to do more than decide between this and that. You have to decide what this is — and what it is not.
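The identity-plus-variation idea above can be made concrete in code. A minimal sketch, with entirely hypothetical specification values: essential characteristics define what the thing is, and tolerance bands define the variation it is permitted.

```python
# Hypothetical hammer specification: each essential characteristic gets
# an allowed band of necessary variation (values invented for illustration).
SPEC = {
    "head_mass_g":      (450.0, 470.0),   # weighted head
    "handle_length_mm": (325.0, 335.0),   # handle
    "face_flatness_um": (0.0,   50.0),    # striking surface
}

def conforms(unit: dict) -> bool:
    """A unit is 'this thing' only if every essential characteristic
    is present (identity) and within its band (designed variation)."""
    for name, (lo, hi) in SPEC.items():
        if name not in unit:              # missing essential: not a hammer at all
            return False
        if not (lo <= unit[name] <= hi):  # outside designed variation: a defect
            return False
    return True

unit_a = {"head_mass_g": 458.2, "handle_length_mm": 331.0, "face_flatness_um": 12.0}
unit_b = {"head_mass_g": 458.2, "handle_length_mm": 331.0}  # no striking surface

print(conforms(unit_a))  # True: within every band
print(conforms(unit_b))  # False: an essential characteristic is absent
```

Note how the two failure modes in the text map directly: shrink every band to a single point and no real unit ever passes (over-constrained); drop entries from `SPEC` and non-hammers pass inspection (under-constrained).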

  • You can't turn lagging into leading indicators no matter how hard you try

    Lagging versus Leading Indicators

    The Challenge

    Counting near misses, incidents, defects, violations, and other non-conformances is valuable and necessary as part of prescriptive compliance: regulations, industry standards, and internal policies. However, when it comes to complying with performance- and outcome-based commitments, where the goal is to achieve zero fatalities, zero explosions, zero violations, and zero defects, you need a risk-based process that uses proactive actions informed by both lagging and leading indicators.

    While many companies are rich in lagging indicators, they are poor in leading indicators. To address this, many attempt to turn lagging indicators into leading indicators, which is not possible no matter how hard you try. However, with proactive oversight you can turn lagging indicators into leading actions (more on this later).

    Many organizations try to use measures of conformance to predict and possibly prevent future occurrences. However, lagging indicators of this kind can never distinguish whether your risk controls are effective or you were just "lucky". They are also too late to prevent what has already occurred, and for those looking to improve safety, quality, environmental, or regulatory outcomes, this is a big deal.

    Lagging Indicators and Actions

    Lagging indicators measure what has already happened, specifically after a risk event has occurred. Lagging indicators are always retrospective, too late, and of no value with respect to past events. They are still beneficial, as they help to identify failure modes or vulnerabilities, albeit after the fact. This data can in turn be used to initiate actions to mitigate the effects of the adverse event; this is a corrective, lagging action. Lagging indicators can also be used to strengthen control processes to prevent recurrence of the unwanted event or mitigate its effects. This is a preventive action, and leading with respect to future risk.

    Leading Indicators and Actions

    Leading indicators, on the other hand, are derived from the control processes that are in place to prevent unwanted events before they happen. They sit on the left side of the bowtie diagram, before the risk event. Leading indicators include measures of effectiveness of the preventive controls, which are predictive in terms of the likelihood of a given risk event. Leading indicators must have predictive power to be considered effective. The effectiveness of controls contributes to the probability of occurrence of the risk event. Leading actions are steps taken to improve the effectiveness of both preventive and mitigative controls, raising the level of protection to achieve an acceptable level of risk. This is the purpose of risk management and the standard for overall compliance effectiveness.

    Bottom Line

    Lagging indicators can never be leading, as they measure things after the risk event. They may have some utility in predicting future risk events, but this is limited because they often measure symptoms, not root causes. The best leading indicators are those that have predictive utility and are connected to preventive controls. This information provides advance warning of a possible risk event and an opportunity to do something about it.
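The distinction above can be sketched in code. A toy bowtie model, with all names and values hypothetical: leading indicators are measures of preventive-control effectiveness taken before the risk event, while lagging data can only trigger leading (preventive) actions after the fact.

```python
# Toy bowtie sketch (all control names and numbers are invented):
# leading indicators measure preventive controls *before* the event;
# lagging indicators count what has already happened *after* it.

preventive_controls = {              # left side of the bowtie
    "gas_detector_uptime": 0.93,     # fraction of time operational
    "permit_to_work_compliance": 0.88,
}
incident_counts = {"near_misses": 7, "releases": 1}  # right side: lagging

def leading_indicator(controls: dict) -> float:
    """Aggregate control effectiveness, predictive of event likelihood.
    Here the weakest control dominates the level of protection."""
    return min(controls.values())

def lagging_to_leading_actions(counts: dict) -> list:
    """Lagging data cannot become a leading indicator, but it can
    initiate leading (preventive) actions that strengthen controls."""
    return [f"strengthen controls implicated in {k}"
            for k, v in counts.items() if v > 0]

print(leading_indicator(preventive_controls))   # 0.88 -> advance warning
print(lagging_to_leading_actions(incident_counts))
```

The design choice worth noting: the leading indicator is computed from the controls themselves, not from incident counts, which is exactly why it can provide warning before anything has gone wrong.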

  • Promise Agents: Autonomous Policy Fulfillment in Security Architecture

    The systems that run our world make implicit promises — to route traffic, to process transactions, to keep data where it belongs. Most of those promises are never explicitly declared, never monitored against, and never reported on until something breaks. Promise Theory, the framework Mark Burgess developed to model autonomous commitment, sits at the heart of the Lean Compliance methodology. This briefing extends it further, asking what becomes possible when security infrastructure is designed to keep its promises the way we expect people to keep theirs.

    Most current thinking places AI at the monitoring or response layer: detecting anomalies, flagging incidents, accelerating analyst workflows. That is useful, but it still treats the underlying security equipment as passive infrastructure, governed by static rules and assessed from outside.

    Mark Burgess, who developed Promise Theory and built CFEngine on its principles, had a different intuition — one rooted in a security problem he identified before most of us were thinking about it. His observation was that the command-and-control model of managing devices was itself producing vulnerabilities. A device designed to receive and execute external commands is a device that can be exploited by anyone who can issue those commands. His response was to model a different design principle: devices that govern themselves from within by declaring what they will do, rather than waiting to be told. Autonomy, in his framework, is not just an architectural preference. It is a security property.

    He found a concrete example of this already operating in live infrastructure: BGP — the Border Gateway Protocol that governs routing between the large independent networks that make up the internet. BGP routers do not wait for a central controller. They declare their routing promises to neighboring routers and cooperate through voluntary exchange of those declarations. Burgess states this directly: "BGP is a promise-based system." Each router is already a promising agent, governing itself from within, building trust through its history of kept promises.

    That is the design principle. The question worth exploring is what it would mean to apply it to security obligations — not routing tables, but the high-level commitments an organization makes about what its infrastructure will and will not allow.

    I have written a briefing note that develops this as a formal proposal: Promise Agents — security equipment with embedded, fine-tuned AI models that receive obligations, assess what they can genuinely commit to, declare those commitments as promises, fulfill them autonomously, and monitor their own performance against them continuously.

    The briefing covers the theoretical foundation in Promise Theory, the BGP precedent Burgess himself identifies, the problem that makes this direction worth considering, the architecture it implies, and the prerequisites that would need to be in place before it becomes buildable. It is offered as a starting point for discussion — not a finished design, but a direction worth examining for security architects, compliance practitioners, AI engineers, and equipment vendors who may see potential in it.

    The full briefing note is linked below. I would welcome responses from anyone working in these areas.

    Raimund (Ray) Laqua, P.Eng., PMP is the founder of Lean Compliance Consulting, helping organizations build compliance as operational capability rather than procedural overhead. He serves on ISO's ESG working group and OSPE's AI in Engineering committee, and chairs the AI Committee for Engineers for the Profession (E4P), where he advocates for federal licensing of digital engineering disciplines in Canada.
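The promise-agent lifecycle described above (receive an obligation, commit only to what is within capability, then self-monitor against the declared promise) can be sketched as a toy model. Everything here is hypothetical, a structural illustration rather than anything from the briefing note itself:

```python
# Toy Promise Agent: a device that governs itself from within.
# It is never commanded; it voluntarily declares promises it can keep,
# and its trustworthiness is its history of kept promises.
from dataclasses import dataclass, field

@dataclass
class PromiseAgent:
    name: str
    capabilities: set                       # what this device can actually do
    promises: list = field(default_factory=list)
    kept: int = 0
    broken: int = 0

    def receive_obligation(self, obligation: str) -> bool:
        """Declare a promise only if the obligation is within capability;
        otherwise decline, because it cannot genuinely commit."""
        if obligation in self.capabilities:
            self.promises.append(obligation)
            return True
        return False

    def record(self, obligation: str, fulfilled: bool) -> None:
        """Self-monitoring: track performance against declared promises."""
        if fulfilled:
            self.kept += 1
        else:
            self.broken += 1

    def trust(self) -> float:
        """Trust as the fraction of promises kept so far."""
        total = self.kept + self.broken
        return self.kept / total if total else 1.0

fw = PromiseAgent("edge-firewall", {"deny egress to unapproved hosts"})
print(fw.receive_obligation("deny egress to unapproved hosts"))  # True: declared
print(fw.receive_obligation("encrypt data at rest"))             # False: declined
fw.record("deny egress to unapproved hosts", fulfilled=True)
print(fw.trust())                                                # 1.0
```

The structurally important line is the `return False`: in a promise-based design, declining an obligation the agent cannot keep is itself a security property, exactly the inversion of the command-and-control model described above.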

  • The Great Software Reset

    How Enshittification, the Collapse of the Abstraction Stack, and AI Are Rewriting the Rules — and Why Governance Will Determine What Comes Next

    Raimund (Ray) Laqua, P.Eng., PMP

    Something is breaking, and something else is being born. I think we need to talk about both.

    If you work in technology, or if your business depends on technology — which is to say, if you run a business — you’re caught between two forces that are about to reshape everything. One is tearing down the model we’ve relied on for decades. The other is building something we don’t fully understand yet. And the space between those two forces is where the most important decisions of the next decade will be made.

    I want to walk through what I’m seeing. Not as a futurist making predictions, but as a computer engineer with over thirty years in heavily regulated industries — someone who has spent a career at the intersection of technology and operational governance. What I see concerns me. Not because change is coming, but because the implications are moving faster than our ability to manage them.

    The Diagnosis: Enshittification

    Cory Doctorow gave us the word, and it stuck because it’s accurate. Enshittification describes the lifecycle that digital platforms follow with remarkable consistency. First, they’re good to users — generous, useful, even delightful — because they need to attract them. Then, once the users are locked in, the platform shifts value to business customers — advertisers, vendors, enterprise clients — because they need to attract them too. Then, once both sides are captive, the platform begins extracting all remaining value for itself. Features degrade. Prices rise. The experience hollows out. And everyone stays because the switching costs are too high. We’ve watched this happen with Amazon, Facebook, Google, and countless SaaS platforms. It’s not a bug in the system. It’s the system working exactly as designed. The incentive structure of platform capitalism leads here inevitably.

    If you’re a business leader, you already feel this. You’re paying more for software that does less for you. You’re locked into ecosystems that serve the vendor’s roadmap, not yours. You’re managing integrations between platforms that were designed to be sticky, not interoperable. And every year, the value you extract from these relationships diminishes while the cost increases. That’s the diagnosis. The current model is failing. Not catastrophically, not all at once, but steadily and predictably. The question is what replaces it.

    The Mechanism: The Collapse of the Abstraction Stack

    For seventy years, software development has been built on layers of abstraction. Machine code gave way to assembly language. Assembly gave way to high-level languages. Those gave way to frameworks, platforms, orchestration layers, and cloud services. Each layer made it easier for humans to tell machines what to do, but each layer also added distance between the intent and the execution — and each layer became a place where someone could extract rent.

    AI is now collapsing those layers. We’re watching AI move toward writing directly for machine-level execution, skipping the programming language step entirely. AI is creating solutions directly — not writing code that a developer then compiles, tests, debugs, and deploys, but generating functional outcomes from specifications. The orchestration layers that human developers have built and maintained are becoming unnecessary, at least from a human software development perspective.

    Think about what that means for the platform model. Every layer of the abstraction stack is a layer where a vendor can insert themselves, charge a fee, and create lock-in. The programming language ecosystem. The framework. The cloud platform. The CI/CD pipeline. The monitoring service. The SaaS application sitting on top. Each is a tollbooth.

    If AI collapses those layers — if it can go from intent to execution directly — then it doesn’t just change how software is built. It removes the structural foundation that platform enshittification depends on. You can’t extract rent from a layer that no longer exists.

    The Reset

    This is where the two forces converge, and this is why I believe we’re looking at a genuine reset in software application development.

    Enshittification creates the demand for a reset. Users and businesses are fed up, locked in, overcharged, and underserved. The existing model has exhausted its goodwill. People are ready for something different — they just haven’t had a viable alternative.

    The collapse of the abstraction stack creates the supply. AI-driven bespoke generation means you no longer need the platform to get the solution. The intermediary layer — the SaaS vendors, the platform ecosystems, the app stores, the enterprise software companies extracting rent from captive customers — gets compressed or bypassed entirely.

    We’re moving from mass-produced software toward bespoke, personal solutions generated on demand. The cloud, which was supposed to be the great centralizer, becomes instead the great personalizer — raw compute and capability that AI draws from to build whatever is needed, whenever it’s needed. Every business, potentially every person, runs on systems tailored precisely to their context.

    The SaaS model — the dominant business model in technology for the past two decades — starts to look like a transitional artifact. Something we did because we hadn’t figured out something better yet. And the enshittification that Doctorow described wasn’t a corruption of that model. It was its natural endpoint.

    The Danger: Trading One Problem for Another

    Now here’s where my concern deepens. Because a reset doesn’t mean things get better automatically. It means the rules are being rewritten. And if we’re not thoughtful about how they’re rewritten, we could end up somewhere worse.

    In a world where every business runs on bespoke AI-generated systems, you gain extraordinary customization. Every solution fits like a glove. Every workflow is optimized for the specific context it serves. But you also lose something critical: standardization, interoperability, and the ability to look under the hood. If no two systems are alike, how do they talk to each other? If the “code” was never written in a human-readable language, how do you audit it? If the AI generated a solution directly from a specification, and that solution is running your financial transactions, or monitoring your pipeline integrity, or managing your patient records — who verifies that it’s actually doing what it’s supposed to do?

    Traditional software validation frameworks were built on a fundamental assumption: that there is human-readable code to inspect. Remove that assumption, and those frameworks collapse.

    And here’s the deeper risk: if AI model providers become the new platforms, the same enshittification cycle could repeat at a more fundamental layer. Instead of being locked into a SaaS vendor’s ecosystem, you’re locked into a model provider’s infrastructure. Instead of opaque algorithms deciding what you see on social media, opaque AI systems are running your core business operations. The extraction doesn’t happen at the application layer anymore — it happens at the generation layer. And with even less transparency.

    We don’t escape enshittification by collapsing the abstraction stack. We escape it by governing what comes next.

    The Questions You Will Eventually Ask

    This brings me to the practical reality that every business leader will face, whether they’re ready for it or not: Are all your AI agents, AI systems, and AI-powered applications actually operating between the lines? Are they aligned to your business goals — or just running and consuming power and money? How much are you really spending, and what’s the expected return? Will your business even be viable going forward, and what reengineering is needed to compete in an AI-powered world?

    These aren’t hypothetical questions. They’re operational ones. And most organizations don’t have a framework to answer them — not because they’re negligent, but because the frameworks haven’t been built yet for this new reality.

    You Can’t Govern Code That Doesn’t Exist

    This is the insight I keep coming back to. If AI is generating solutions directly — bypassing human-readable code, bypassing traditional development pipelines — then you cannot govern these systems the way we’ve governed technology for the past half-century. Code review doesn’t work when there is no code to review. Static analysis doesn’t work when there is nothing static to analyze.

    What does work is operational governance. Governing the behavior. Governing the outcomes. Governing the promises.

    This is the discipline I’ve spent my career building. In the most heavily regulated industries — pharmaceuticals, medical devices, oil and gas, chemical processing, financial services, government — I’ve learned that compliance at its best is never about inspecting artifacts after the fact. It’s about building operational systems that ensure promises are kept in real time. Promises to regulators. Promises to customers. Promises to every stakeholder who depends on you doing what you said you’d do. That same operational discipline is exactly what AI governance demands. And as the abstraction stack collapses and the reset unfolds, it may be the only governance that works.

    Engineering Discipline in a Post-Code World

    I’m a computer engineer by training. I understand the technology at a fundamental level — not just how to use it, but how it works, where it fails, and what it takes to make it reliable. I’m also a licensed Professional Engineer and a certified Project Management Professional, which means I bring engineering discipline and systems thinking to problems that many approach from either a pure technology perspective or a pure policy perspective.

    That combination matters more now than it ever has. In a world where anyone can generate a “solution” but nobody can inspect the internals, the question shifts from “does it work?” to “is it safe, reliable, and fit for purpose?” That is an engineering question, not a programming question. It requires the same rigour we apply to bridges, medical devices, and process plants — systems where failure has consequences.

    This is why I serve on ISO’s ESG working group, sit on OSPE’s AI in Engineering committee, and chair the AI Committee for Engineers for the Profession, where we’re advocating for professional engineering standards in digital disciplines across Canada. The licensing and governance structures that protect the public in traditional engineering need to extend into the digital domain — and that need is becoming urgent.

    The Energy and Economics Question

    There’s another dimension to this reset that doesn’t get enough attention: the economics. Traditional software follows a “build once, deploy many” model. You invest in development, and that investment scales across users and deployments. The marginal cost of serving one more customer is relatively low. This is the economic engine that made SaaS so attractive — and so profitable for vendors, even as it became less valuable for customers.

    Bespoke, AI-generated solutions invert that model. Every solution consumes compute every time it’s generated. There is no “build once” efficiency. The economics shift from capital expenditure on software development to continuous operational expenditure on AI generation and execution.

    The question I posed earlier — are your AI systems aligned to your goals, or just running and consuming power and money? — isn’t rhetorical. In this emerging model, it becomes the central business question. Without operational visibility into what your AI systems are actually doing, what they’re costing, and what value they’re returning, you’re flying blind in an increasingly expensive sky.

    Cybernetics Comes Full Circle

    For those who know my work, you’ll recognize the influence of W. Ross Ashby and the principles of cybernetics in how I approach governance. Ashby’s Law of Requisite Variety tells us that a system’s regulator must have at least as much variety as the system it governs. Simple rules cannot govern complex systems. As AI systems become more complex, more dynamic, and more opaque, the governance mechanisms must match that complexity. You govern through constraints, feedback loops, and measured outcomes — not through reading source code or ticking compliance checklists. The cybernetic approach to governance, which might have seemed theoretical a few years ago, is becoming the practical necessity.

    And in the context of the reset, cybernetics offers something else: a way to prevent the next round of enshittification before it starts. If governance is built into the operational fabric from the beginning — if feedback loops and accountability mechanisms are structural, not afterthoughts — then the extraction playbook becomes much harder to run. This is operational governance. This is what I do.

    Building It Right This Time

    The reset is real. Enshittification broke the trust. The collapse of the abstraction stack is providing the escape route. AI is rewriting the rules of how software is built, deployed, and consumed. But a reset is not a guarantee of something better. It’s an opportunity. And opportunities are only as good as the discipline we bring to them.

    The businesses that thrive in this new environment will be the ones that don’t just adopt AI, but govern it. That build operational visibility into their AI systems. That treat compliance not as a checkbox exercise but as a living discipline of keeping promises to the people who depend on them. That demand accountability from their AI infrastructure the same way they demand it from their physical infrastructure.

    If you’re already asking the hard questions about your AI systems — about alignment, about spend, about viability, about what reengineering is needed to compete — then you’re ahead of most. And I’d welcome the conversation.

    We have a rare chance to build something better. Let’s not waste it by repeating the same mistakes at a deeper layer. The window between “we should figure this out” and “we should have figured this out” is closing. Let’s not wait.

    About the Author

    Raimund (Ray) Laqua, P.Eng., PMP, is a computer engineer and the founder of Lean Compliance Consulting and co-founder of ProfessionalEngineers.AI. With over 30 years of experience across highly regulated industries, Ray specializes in operational AI governance and compliance. He serves on ISO’s ESG working group, OSPE’s AI in Engineering committee, and chairs the AI Committee for Engineers for the Profession (E4P), advocating for professional engineering standards in digital disciplines across Canada.
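Ashby's Law of Requisite Variety, invoked above, can be illustrated with a deliberately simple toy model. This is my own hedged sketch, not anything from the article: assume, in the best case, that each distinct regulator response can neutralize exactly one distinct disturbance, so any disturbances beyond the regulator's repertoire leak through as distinct, unregulated outcomes.

```python
# Toy illustration of Ashby's Law of Requisite Variety.
# Assumption (stated, not from Ashby's formalism): best case, each
# distinct regulator response neutralizes one distinct disturbance;
# the remainder pass through as distinct outcomes.

def surviving_outcomes(n_disturbances: int, n_responses: int) -> int:
    """Minimum number of distinct outcomes the regulator cannot
    collapse into the one desired outcome."""
    return max(1, n_disturbances - n_responses + 1)

print(surviving_outcomes(10, 10))  # 1: full regulation is possible
print(surviving_outcomes(10, 3))   # 8: simple rules, complex system
```

The point of the toy matches the argument: a governance mechanism with less variety than the AI systems it oversees cannot hold outcomes where you want them, no matter how diligently it is applied.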

  • Compliance and the Problem of Evil

    Raimund Laqua, P.Eng., PMP

    When we speak of safety failures, quality defects, security breaches, or sustainability shortfalls, we are always speaking of absences. Something that should have been present was not. A capability that ought to have existed was missing. A promise that was made went unkept. But an absence only makes sense in relation to a presence. You cannot miss what was never defined. You cannot fall short of a standard that was never articulated. And here lies the fundamental error at the heart of most compliance frameworks: they begin with what has gone wrong and attempt to work backwards to what should be. This gets the order of reality exactly backwards.

    Two Kinds of Absence

    Not all absences are equal. Negation is simple logical denial — the contradictory of a thing. A product meets quality standards or it does not. A workplace is safe or not-safe. This binary framing is clean, auditable, and almost entirely useless for building real capability. Privation is richer. It is the absence of something that ought to be present given the nature and purpose of the thing in question. A bridge that cannot bear its rated load does not merely "lack safety" in some abstract logical sense — it is deprived of a quality proper to its function as a bridge. The privation tells us not only that something is wrong, but what is missing and why it matters.

    Both negation and privation are real and consequential. But here is the crucial point: neither is intelligible without first defining the positive reality from which they depart. You cannot know what is unsafe without first defining what safe means. You cannot identify a quality defect without first defining what quality is for this product, in this context. You cannot declare a security breach without first establishing what a secured state looks like. The negative has no content of its own — it borrows all of its meaning from the positive it denies or falls short of. Define the good, and the nature of its absence becomes clear. Skip that step, and you are left cataloguing symptoms with no diagnosis.

    The Problem of Evil

    This is, in essence, the ancient question of good and evil restated in operational terms. The word evil may seem out of place in business discourse — we prefer the antiseptic language of "risk events," "non-conformances," and "control failures." But if that language makes us comfortable while people are harmed by the absence of what ought to have been present, then the comfortable language is part of the problem. The moral structure does not change because we have found softer words for it.

    In the classical tradition, evil is not a thing in itself — it is the privation of good. Blindness is not a substance; it is the absence of sight in a being that ought to see. Cruelty is not a positive force; it is the absence of the justice and compassion that ought to govern human action. Evil is parasitic on good. It can only be understood — can only exist — as a deficiency in something that should otherwise be whole.

    The question is not whether organisations that fail at safety are evil. The question is whether the structure of that failure — the absence of a good that ought to be present — is any different from what the classical tradition calls evil. If the structure is the same, perhaps the moral weight deserves more attention than we have been giving it.

    The Hard Problem of Positive Definition

    This logic applies across every compliance domain — quality, safety, security, sustainability, ethics, AI safety. But the moment we try to apply it, we encounter a discomforting discovery: the positive definitions do not exist. Not in any rigorous sense. What we have instead are glossary entries that are themselves negations dressed up as definitions.

    Consider safety. ISO 45001 defines it as "freedom from unacceptable risk." That is a negation — safety is defined as the absence of something else. But what is safety positively? What is present when safety is present? The instinct is to reach for mechanisms: controls, redundancies, protective barriers, safe behaviours. But these are means by which safety is achieved or maintained, not safety itself.

    A beam bears its load. It is whole, doing what it was made to do. It is safe — not because something was added to it, but because safety is what it is when it is intact. A worker stands on solid ground. No harness, no procedure, no signage. She is safe — not because of controls, but because there is nothing she has been deprived of. Safety is the default condition from which danger is the departure.

    The Latin salvus — from which we derive "safe," "salvation," and "salvage" — means whole, intact, uninjured. Safety, at its root, is wholeness: the condition of a thing being as it ought to be, undiminished, undamaged, complete in its nature and purpose. A bridge is safe when it is whole — when it possesses the structural integrity proper to a bridge. A person is safe when they are whole — unharmed, unthreatened, able to be what they are. Safety is not something added on top. It is the baseline condition of things being as they should be.

    Even here, wholeness must be partly described by what it is not — undiminished, undamaged, uninjured. The positive and the negative are genuinely intertwined. But the wholeness comes first. We only know what "undamaged" means because we already know what the intact thing looks like. And there is a further difficulty. To define safety as wholeness, we had to invoke "as it ought to be" — which demands a prior understanding of a thing's nature and purpose. We are doing philosophy whether we intended to or not.

    This is genuinely hard. And the same difficulty awaits every domain:

    What is quality? Not the absence of defects — but what is present when quality exists? Is it conformance to purpose, excellence of execution, coherence of design?

    What is security? Not the absence of breaches — but what exists when something is truly secure? Is it trustworthiness, integrity of boundaries, inviolability of what has been entrusted?

    What is sustainability? Not the avoidance of depletion — but what is present when an operation is sustainable? Is it stewardship, regenerative capacity, fidelity to obligations that extend beyond the present?

    What is ethics? Not the avoidance of wrongdoing — but what is present when action is ethical? Is it integrity, justice, care, accountability?

    What is AI safety? Not the absence of misalignment or harm — but what is present when artificial intelligence is safe? Is it alignment, transparency, bounded purpose, controllability?

    And who defines these qualities — and on what grounds? These are not rhetorical questions. They are the questions that every compliance framework implicitly answers but rarely confronts. And the difficulty of answering them does not excuse the failure to ask. Without a positive definition — however hard-won — negation tells us nothing and privation has no reference point. We are left managing the absence of things we have never defined.

    The Practical Consequence

    When organisations begin with hazard registers, threat models, risk matrices, and failure modes, they are starting with evil and trying to infer good. The result is compliance that is inherently reactive — catalogues of bad things that might happen, with no coherent vision of the good state they are trying to achieve or sustain. This leads to familiar pathologies: risk registers that grow without limit because there is no defined "enough"; controls that address symptoms rather than root capabilities; audit regimes that verify the presence of paperwork rather than the presence of capability; and a pervasive sense that compliance is burden rather than benefit. The corrective is simple in principle, though demanding in practice: define the positive first.
Once defined, negation and privation become powerful diagnostic tools. Negation gives you the binary check: is the quality present or not? Privation gives you the gap analysis: what specific qualities are missing, relative to what should be there? Compliance as the Pursuit of the Good If the argument of this piece holds, then compliance is not fundamentally about avoiding bad outcomes. It is about defining and pursuing good ones — about doing the hard work of establishing what quality, safety, security, sustainability, ethics, and AI safety actually are  before attempting to manage their absence. The positive definitions resist easy formulation. They demand engagement with purpose, nature, obligation — questions that most compliance frameworks are not designed to ask. But the difficulty does not change the logical order. The good is still prior to its privation. Wholeness is still prior to damage. The intact thing is still prior to the defect. The promise an organisation makes — whether to regulators, to customers, to the public, or to future generations — is not "we will avoid harm." It is "we will be this ." We will possess these qualities. We will sustain these commitments. We will deliver these outcomes. Harm avoidance follows from that positive commitment. It is a consequence, not a substitute. This is why compliance, properly understood, is not overhead. It is the operational pursuit of the good — the ongoing work of defining what wholeness looks like for this  organisation, in this  context, and then building and sustaining the qualities to achieve it. When that work is neglected, what follows is not merely a regulatory gap. It is a privation — the absence of something that ought to have been present. And the moral weight of that absence does not diminish because we have learned to call it something else. Evil is the privation of good. Risk is the privation of certainty. Non-compliance is the privation of commitment. 
In every case, you must define what something is before you know what is missing. Raimund Laqua is the founder of Lean Compliance Consulting and co-founder of ProfessionalEngineers.AI . His work focuses on transforming compliance from procedural overhead into operational capability through the principles of Promise Theory and cybernetic governance.

  • The Foundations of Lean Compliance

    Lean Compliance rests on foundational principles drawn from promise theory, cybernetic regulation, and value chain analysis. This article presents the logical progression that connects these principles and demonstrates why they necessarily lead to a different understanding of compliance itself. Understanding Obligations and Promises Promise Theory & Operational Compliance Compliance is fundamentally about meeting obligations. For compliance to be successful, these obligations must be operationalized through the fulfillment of promises associated with each obligation. This connection is grounded in Promise Theory, which recognizes that organizations make voluntary commitments to maintain cooperative relationships with their stakeholders. Regulatory obligations come in four distinct types based on what they require and at what level they operate. The four types determine whether compliance requires procedural adherence (means) or outcome achievement (ends), at either specific (micro) or systemic (macro) levels. To meet these obligations, organizations must develop operational capabilities to fulfill their commitments—to keep their promises. Compliance as Regulation Compliance fulfills promises through regulation—regulating organizational effort to achieve targeted outcomes. This includes static controls, but more importantly, dynamic cybernetic systems that adapt and respond through feedback and feedforward controls. The Foundation: Lean Thinking Lean is about creating value by eliminating waste in operations. Waste is the manifestation of risk that has become reality. The root cause of both waste and risk is uncertainty, which lean practitioners call variation or variability. The Core Insight: Regulation Reduces Variation The act of regulation—through feedback and feedforward controls—reduces variation and variability. 
This is the fundamental principle underlying both Lean Six Sigma in operations and compliance functions like quality management and safety programs. Both regulate processes to reduce uncertainty. Expanding Value: From Shareholder to Stakeholder Expanded Value Chain Analysis (VCA) Traditional Value Chain Analysis measures value as financial margin, optimized for shareholder value. Michael Porter developed VCA as a tool for achieving competitive advantage through superior margin creation. However, modern organizations must create value for all stakeholders: customers, employees, communities, regulators, the environment, and shareholders. This expansion requires redefining value beyond margin to include quality, safety, security, sustainability, ethics, and trust. These aren't optional extras—they are obligations and promises to stakeholders. By extending VCA to encompass these dimensions, we create a more comprehensive model that affords management better decision-making tools for achieving competitive advantage in today's stakeholder-driven environment. From Productivity to Total Value Value Chain Analysis reveals secondary activities designed to improve productivity—the traditional domain of lean and operational excellence, focused on margin creation. Achieving Total Value requires more. Total Value includes financial margin plus quality, safety, security, sustainability, and ethics, plus value as perceived through the eyes of all stakeholders. This requires activities that improve certainty rather than just productivity. This is the domain of certainty programs and the practice of Lean Compliance. The Integrative Force: Certainty + Productivity Programs Productivity programs use regulation to reduce variation and improve margins. Certainty programs use regulation to reduce uncertainty and ensure Total Value is created. Together, these programs serve as an integrative force within the value chain, ensuring both shareholder and stakeholder value. 
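The core insight, that feedback regulation destroys variation, can be sketched numerically. The simulation below is a minimal illustration, not drawn from the article; the `process` function, its gain parameter, and the noise model are all invented for this sketch. An unregulated noisy process drifts as a random walk, while a simple proportional feedback correction holds it near its target.

```python
import random

def process(setpoint, steps=10_000, gain=0.0, seed=42):
    """Simulate a noisy process and return the variance of its output.
    `gain` is the strength of the feedback correction: 0.0 means
    unregulated (disturbances accumulate), while 0.8 corrects 80% of
    the observed deviation at every step."""
    rng = random.Random(seed)
    output, history = float(setpoint), []
    for _ in range(steps):
        output += rng.gauss(0, 1.0)      # disturbance (uncertainty)
        error = setpoint - output        # feedback: measure the deviation
        output += gain * error           # corrective action
        history.append(output)
    mean = sum(history) / len(history)
    return sum((x - mean) ** 2 for x in history) / len(history)

unregulated = process(setpoint=100, gain=0.0)   # random walk: variance grows
regulated = process(setpoint=100, gain=0.8)     # feedback holds output near 100
print(regulated < unregulated)  # True: regulation reduces variation
```

The same structure underlies both a thermostat and a quality program: measuring the deviation from a target and acting against it is precisely what collapses the spread of outcomes.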
Conclusion Value and Compliance Streams Lean Compliance is not compliance adapted to lean thinking. It is the natural extension of lean principles into the domain of certainty—making visible what has always been implicit in Total Value creation. These foundational principles enable practical applications including compliance streams, operational compliance models, and cybernetic governance systems that transform external obligations into internal operational capability. This is Lean Compliance.

  • Taking Ownership: The First Step to Operational Compliance

    For decades, compliance has been one of the most reactive functions in the enterprise—more reactive than finance, operations, or even IT. While there are reasons why this is the case, this excessive reactivity has created a mission-critical gap: a dangerous vacuum where managerial accountability should exist but has been replaced with busywork. The Abdication Problem Managers, for the most part, have quietly abdicated their compliance responsibilities. They've handed them off to third-party consultants, delegated them to understaffed compliance departments, or worst of all, outsourced their thinking entirely to external auditors. When audit findings arrive (although not the only measure of effectiveness), these same managers treat them as someone else's problem to fix rather than their failure to prevent. This abdication means obligations go unowned. And unowned obligations don't get fulfilled—they get tracked, reported on, and documented, but not actually fulfilled. The organization drifts outside the lines, remains blind to emerging risks, and loses sight of its mission while everyone points to procedures that nobody truly owns. Why "Be Proactive" Doesn't Work The obvious answer seems to be: stop being reactive and start being proactive. Get ahead of issues. Anticipate problems. Be forward-thinking. If only it were that simple. Telling a reactive organization to become proactive is like telling someone who can't swim to simply start swimming better. The problem isn't their technique—it's that they haven't learned to stay afloat. You cannot be genuinely proactive about obligations you don't actually own. Ownership Comes First The path forward begins with a foundational shift: organizations must take ownership of their obligations and the risks those obligations address. Not delegated ownership. Not documented ownership. Real ownership—where specific people accept responsibility for ensuring specific promises are kept and specific hazards are controlled. 
This means: Managers understanding their obligations as personal commitments, not corporate procedures Leaders recognizing that compliance risk is operational risk, not a separate concern Executives accepting that audit findings represent their management failures, not their auditors' discoveries What AI Cannot Do And if you thought AI can help you with this, you will be left wanting. Here's the thing: AI cannot take ownership of your obligations. It can't even take ownership of its own outputs. AI might be able to analyze some of your compliance gaps, generate your procedures, monitor your controls, and flag your risks—assuming you even have a complete set of those. It can make compliance activities faster, cheaper, and more efficient. But it cannot look your stakeholders in the eye and promise them anything. It cannot accept accountability when things go wrong. It cannot decide what matters and what doesn't. Ownership is an irreducibly human act. It requires judgment, commitment, and the willingness to be held responsible. These aren't features that can be automated or algorithmic capabilities that can be trained. They're moral choices that only people can make. Organizations rushing to deploy AI for compliance are often doing so precisely to avoid ownership—creating yet another layer of delegation, another place to deflect accountability. "The system didn't flag it" becomes the new "the auditor didn't catch it." Until Ownership, Nothing Changes Without this ownership foundation, compliance will remain exactly as it is: reactive, fragmented, and procedural. It won't improve. It won't integrate into operations. It won't create value. Organizations will continue generating documentation that nobody reads, attending training nobody remembers, and responding to findings nobody prevents. They'll add AI tools to the stack, automate the busywork, and still fail to keep their promises because nobody has actually accepted responsibility for keeping them. 
The transformation to operational compliance—where obligations become capabilities and compliance creates value—cannot begin until someone looks at the organization's promises and risks and says: "These are mine. I own them." Everything else follows from that moment. Nothing meaningful happens before it. And no technology, no matter how intelligent, can say those words for you.

  • Compliance 2.0 System Requirements

    For years, I've been tracking the evolution of compliance technology—and I've noticed a persistent gap between what organizations need and what the market delivers. Many, and perhaps most, compliance systems are designed around a basic understanding: they treat compliance as a documentation problem, or at most a data problem, rather than an operational problem. This made sense when compliance was only about legal adherence, where the goal was to provide evidence of compliance to regulatory requirements. However, compliance is no longer just about that, and hasn't been for decades, particularly in highly regulated, high-risk sectors. Compliance does not mean just passing an audit or obtaining a certificate. Compliance is about meeting obligations across many domains, including safety, security, sustainability, quality, ethics, regulatory, and other areas of risk. This requires contending with uncertainty and keeping organizations on-mission, between the lines, and ahead of risk. It's about making certain that value is created and protected. We have called this Compliance 2.0, although each domain has its own name for it: Total Quality Management, Safety II, HOP, Functional & Process Safety, Cybernetics, Lean, and others. It's all about reducing variability to make certain that value is created rather than waste, which is what you get when risk becomes a reality. Compliance 2.0 requires operational capabilities to achieve targets and advance outcomes towards better safety, security, sustainability, quality, ethics, legal adherence, and other expectations.  Some may argue about the particulars, but overall most agree that this is the purpose of compliance (not the department) implemented as programs led by director and officer-level managerial roles. Compliance 2.0 programs are built on systems and processes that implement and deliver on promises associated with both mandatory and voluntary obligations. 
The essential (not the basic) capacity to deliver is what we call Minimum Viable Compliance (MVC). The problem is that while compliance has changed, most technology and practitioners have not kept up. Traditional methods and practices based on inspection and audits are firmly entrenched and are difficult to change. This is why I created Lean Compliance close to 10 years ago—to bridge this gap. To help organizations evaluate their compliance systems, I've created a list of system requirements for what is needed to support an operational view of compliance. This is not complete, and more work needs to be done. However, it's a start that I hope you might find helpful.

Requirements for Compliance 2.0 Systems

Managing Operations, Not Just Documents:
- Manage ALL four types of obligations—prescriptive rules, practice standards, performance targets, and program outcomes
- Trace promises to the operational capabilities required to fulfill them
- Track promise-makers and promise-keepers—who commits versus who delivers (RACI model)
- Maintain the golden thread of assurance from obligation through to operational delivery
- Establish provenance—knowing where obligations come from and how they flow through operations
- Align stated values with how work actually gets done
- Integrate cross-functionally, breaking down silos between compliance, operations, and quality

Real Intelligence, Not Just Documentation:
- Monitor compliance status AND operational capacity to maintain it in real time
- Distinguish between operational risk (failure to keep promises) and compliance risk (failure to deliver value)
- Surface operational insights before issues become incidents
- Establish cybernetic feedback loops between operational reality and compliance commitments
- Enable self-regulating mechanisms that maintain compliance through operational design
- Advance capabilities that drive better outcomes across safety, security, quality, sustainability, and ethics
- Provide a balanced scorecard/dashboard across the hierarchy: outcomes (results) → performance (capacity) → conformance (practices) → adherence (rules)

Forward-Looking Operations:
- Enable management pre-view instead of only management review
- Plan front-view capabilities instead of reporting rear-view activities
- Conduct pre-incident investigations and program pre-mortems—not just post-mortems
- Assess organizational capability to fulfill obligations (close the "compliance effectiveness gap")
- Improve continuously across conformance, performance, effectiveness, and assurance
- Provide end-to-end visibility from obligation to operational outcome

Built-In, Not Bolted-On:
- Integrate compliance requirements into operational design from the start
- Build in compliance (poka-yoke principles) rather than inspect for it
- Make obligation alignment immediately visible through operational transparency
- Generate evidence as an operational by-product, not a separate activity
- Identify and eliminate compliance waste—redundant controls and non-value-adding activities

What's Next? If you want to learn more about Compliance 2.0, I invite you to sign up for our upcoming Lean Compliance Leadership Workshop: How to Lead Compliance 2.0 Transformation (Feb 11). Raimund Laqua (Ray) is a Professional Engineer (P.Eng.) and Project Management Professional (PMP) with over 30 years of experience in highly regulated industries including oil & gas, medical devices, pharmaceuticals, financial services, and government sectors. He is the founder of Lean Compliance Consulting and co-founder of ProfessionalEngineers.AI. Ray serves on ISO's ESG working group, OSPE's AI in Engineering committee, and as AI Chair for Engineers for the Profession (E4P), where he advocates for federal licensing of digital engineering disciplines in Canada.

  • Is This The Best GRC Has To Offer?

I just attended a webinar from a leading GRC vendor promoting continuous risk assessment for AI. The topic seemed timely and the solution promising, so I gave it my full attention. What I heard: AI introduces significant risk across organizations and within every functional silo. Fair enough. ⚡ The pitch: With all this risk, you need a system to manage it comprehensively. OK. What they demonstrated was little more than a risk register combined with task management—where tasks are defined as regulatory requirements, framework objectives, and controls tagged with risk scores. The only novel feature was hierarchical task representation. Everything else was standard fare, complete with the obligatory heat map. ⚡ Not Understanding AI Risk Risk was presented as the typical likelihood × severity calculation. They tried to present risk aggregation, but here's the issue: you can't simply add up risks and average them. Risk is stochastic. Proper aggregation requires techniques like Monte Carlo simulation across probability density functions for each risk. It's even better when you understand how risk-connected elements interact, enabling evaluation of risk propagation through the system. The bottom line: This was traditional (and basic) risk management applied to AI—and done poorly. The promise of continuous risk assessment tied to AI was not delivered. ⚡ What AI Risk Actually Requires If this represents the best that GRC can offer for AI, we're in deep trouble. With infinite possible inputs and outputs, generative AI is better described as an organizational hazard rather than a foundation for stable, predictable performance. We need:
- Real-time controls, monitoring, and assessments
- Managed risk, not just bigger risk management databases

And we need all of this to be operational.
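The aggregation point deserves emphasis: because risk is stochastic, a portfolio's tail loss cannot be obtained by summing likelihood × severity scores. The Monte Carlo sketch below is illustrative only; the `simulate_aggregate` helper and the three risk parameter tuples are invented for this example.

```python
import random

def simulate_aggregate(risks, trials=100_000, seed=7):
    """Monte Carlo aggregation: each risk is (probability, mean_loss, sd_loss).
    Returns the 95th-percentile total loss across all risks combined."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        total = 0.0
        for p, mu, sigma in risks:
            if rng.random() < p:                        # does the event occur?
                total += max(0.0, rng.gauss(mu, sigma)) # sampled loss severity
        totals.append(total)
    totals.sort()
    return totals[int(0.95 * trials)]

# Hypothetical risks: (annual probability, mean loss, loss std dev)
risks = [(0.10, 500_000, 150_000),
         (0.30, 80_000, 20_000),
         (0.05, 2_000_000, 600_000)]

p95 = simulate_aggregate(risks)
naive = sum(p * mu for p, mu, _ in risks)  # summed likelihood x severity
print(p95 > naive)  # True: the tail dwarfs the expected-value total
```

With these inputs the naive expected-value total is $174,000, while the simulated 95th-percentile loss is substantially larger, because rare, severe events dominate the tail that a single aggregated score hides.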
⚡ Learning From Other Risk Domains Perhaps we should adopt risk measures and methods from high-hazard sectors:
- Hazard isolation
- HAZOP studies
- Functional and process safety approaches
- STAMP/STPA/CAST analysis
- Cybernetic regulation
- And others

Regardless of methodology, we need advanced software engineered for adaptive real-time systems—not yesterday's tools repackaged. The alternative? What many companies are doing now: buying bigger databases to track all the new risks they've created by deploying AI. We can—and must—do better. If you're looking to effectively contend with AI risk within your organization—beyond heat maps and risk registers—let's talk. I work with organizations to build operational approaches that actually manage hazards in real time, not just document them.

  • Why GRC Should be GRE

What GRC Should Be Traditionally, GRC activities were centered around integrating the siloed functions of Governance, Risk, and Compliance (GRC). While this is necessary, it is based on an old model where meeting obligations (the act of compliance) is a checkbox activity reinforced by audits. Similarly, risk management was building risk registers and heat maps, and governance was providing oversight of objectives completed in the past. All this to say: This was all reactive, misaligned, and focused on activity, not outcomes. However, when you start with an integrative, holistic, and proactive approach to meeting obligations, a different model emerges where the bywords are: Govern, Regulate, and Ensure (GRE). These are essential capabilities that, when working together, improve the probability of success by governing, regulating, and ensuring the ends and the means in the presence of uncertainty. There is no need to integrate disparate functions, as these are already present in their proactive, integrative, and holistic form to deliver the outcome of mission success. If you're interested in learning more about transforming reactive GRC functions into proactive GRE capabilities, explore The Total Value Advantage Program™

  • Regulating the Unregulatable: Applying Cybernetic Principles to AI Governance

    As artificial intelligence systems reshape entire industries and societal structures, we face an unprecedented regulatory challenge: how do you effectively govern systems that often exceed human comprehension in their complexity and decision-making processes? Traditional compliance frameworks, designed for predictable industrial processes and human-operated systems, are proving inadequate for the dynamic, emergent behaviors of modern AI. The rapid proliferation of AI across critical sectors—from healthcare diagnostics to financial trading, autonomous vehicles to criminal justice algorithms—demands a fundamental rethinking of how we approach regulatory design. Yet most current AI governance efforts remain trapped in conventional compliance paradigms: reactive rule-making, checklist-driven assessments, and oversight mechanisms that struggle to keep pace with technological innovation. This regulatory lag isn't merely a matter of bureaucratic inertia. It reflects a deeper challenge rooted in the nature of AI systems themselves. Unlike traditional engineered systems with predictable inputs and outputs, AI systems exhibit emergent properties, adapt through learning, and often operate through decision pathways that remain opaque even to their creators. The answer lies in applying cybernetic principles—the science of governance and control—to create regulatory frameworks that can match the complexity and adaptability of the systems they oversee. By understanding regulation as a cybernetic function requiring sufficient variety, accurate modeling, and ethical accountability, we can design AI governance systems that are both effective and ethical. The stakes couldn't be higher. Without deliberately designing ethical requirements into our AI regulatory systems, we risk creating governance frameworks that optimize for efficiency, innovation, or economic advantage while systematically eroding the safety, fairness, and human values we seek to protect. 
What regulatory approaches have you seen that effectively address AI's unique challenges? Ray Laqua, P.Eng., PMP, is Chair of the AI Committee for Engineers for the Profession (E4P), Co-founder of  ProfessionalEngineers.AI , and Founder of Lean Compliance.

  • Ethical Compliance

Technology is advancing faster and further than our ability to keep up with its ethical implications. This applies equally to the systems that use these technologies to govern, manage, and operate the businesses we work for, and that includes compliance. The speed of technological change poses significant challenges for compliance and its function of regulating an organization's activities so that it stays within (or meets) all of its regulatory requirements and voluntary obligations. Whether you consider compliance in terms of safety, quality, or professional conduct, these are all closely intertwined with ethics, which is rooted in values, moral attitudes, uncertainty, and ultimately decisions between what is right and wrong. "It is impossible to design a system so perfect that no one needs to be good." – T.S. Eliot Ethical Compliance In this article I explore what makes a compliance system good (or effective) and, more importantly, whether it can be made ethical, assuming that's what you want for your organization. To answer these questions, we will dive into the topic of cybernetics, specifically the works of Roger C. Conant and W. Ross Ashby along with the more recent work of Mick Ashby. To start, we need to define what cybernetics is and why it is important to this discussion. What is Cybernetics? Cybernetics is derived from the Greek word for "governance" or "to steer." Although the word may not be familiar to many, cybernetics is an active field of science involving a "transdisciplinary approach to exploring regulatory systems – their structures, constraints, and possibilities." This is where we derive much of our understanding of system dynamics, feedback, and control theory, which we use to control mechanical and electrical systems. However, cybernetics extends far beyond engineering into biology, computer science, management, psychology, sociology, and other areas.
At a basic level, governance has three components: (1) the system that we wish to steer, (2) the governor (or regulator), the part that does the steering, and (3) the controller, the part that decides where to go. The following diagram illustrates the basic functions of a system under regulation. In this example, we have an HVAC system used to maintain a constant temperature in a house: a thermostat regulates the heating and air-conditioning subsystems, which are controlled by the owner. It is important to understand the difference between the controller and regulator roles. The thermostat cannot tell if it is too hot or too cold; it only knows the number for the temperature. It is the owner (acting as the controller) who must decide whether the temperature is comfortable or not. This distinction is useful for understanding how companies need to be regulated. Regulatory bodies create regulations; however, it is each organization's responsibility to control and perform the function of regulation, not the regulatory body's. In a sense, each company must decide on the degree to which each compliance commitment is met (i.e., is it too high, is it too low, or is it just right) according to the level of uncertainty. What is a Good Regulator? To govern, you need a way of steering, and that is the role of the regulator. A regulator adjusts the system under regulation so that its output states are within the allowable (or desirable) outcomes. The Good Regulator Theorem posited by Conant and Ashby states that "Every Good Regulator of a System Must be a Model of that System." Examples of models that we are more familiar with include: a city map, which is a model of the actual city streets; a restaurant menu, which is a model of the food that the restaurant prepares; a job description, which is a model of an employee's roles and responsibilities; and so on. In more technical terms, the model of the system and the regulator must be isomorphic.
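The thermostat example can be made concrete in a few lines of code. This sketch is illustrative only; the class and method names are invented for this article. The `Thermostat` regulator knows only the number, while the `Owner` controller decides what "comfortable" means and moves the goal.

```python
class Thermostat:
    """Regulator: steers the HVAC toward a setpoint it is given.
    It only knows the number; it cannot judge comfort."""
    def __init__(self, setpoint, deadband=0.5):
        self.setpoint = setpoint
        self.deadband = deadband

    def action(self, temperature):
        if temperature < self.setpoint - self.deadband:
            return "heat"
        if temperature > self.setpoint + self.deadband:
            return "cool"
        return "idle"


class Owner:
    """Controller: decides where to go (what 'comfortable' means)
    and adjusts the regulator's goal accordingly."""
    def __init__(self, thermostat):
        self.thermostat = thermostat

    def feels(self, verdict):
        if verdict == "too cold":
            self.thermostat.setpoint += 1.0
        elif verdict == "too hot":
            self.thermostat.setpoint -= 1.0


t = Thermostat(setpoint=21.0)
owner = Owner(t)
print(t.action(18.0))   # "heat": the regulator steers toward its number
owner.feels("too hot")  # the controller changes the goal itself
print(t.setpoint)       # 20.0
```

Note that the regulator never judges its own setpoint; only the controller does. This mirrors the point above: each organization, not the regulatory body, must decide whether a given compliance commitment is too high, too low, or just right.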
The theorem does not state how accurate the model needs to be, nor its technical characteristics. Sometimes a simple list of directions can be more helpful than a detailed map with too much information. The theorem is sufficiently general to apply to all regulating, self-regulating, and homeostatic systems. What is necessary is sufficient understanding of how the system works to know how to regulate it properly. A critical characteristic to know is how much variety (or variation) exists in the output of the system under regulation. The Law of Requisite Variety The Law of Requisite Variety (posited by W. Ross Ashby) states that for a system to be stable, the number of states of its regulator mechanism must be greater than or equal to the number of states in the system being controlled. In other words, variety destroys variety, which is what regulation does. This law has significant implications for systems in general, but also for management systems. For example, according to the law of requisite variety, a manager needs as many options as there are different disturbances (or variations) in the systems they are managing. In addition, when systems are not able to maintain compliance, it may be due to a lack of sufficient variety in the control systems. This may help explain why existing controls may not be as effective as we would like. There needs to be enough variety in the control actions to adjust the management system and stay within compliance, be it performance, safety, quality, or otherwise. What is an Ethical Regulator? Now that we have a sense of what regulation does and what is needed for it to work, we will consider what it means for the regulation function to be ethical. First and foremost, we need to explain what it means to be ethical.
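The Law of Requisite Variety can be checked exhaustively for a toy system. In this sketch (a hypothetical model, not from the article) the outcome is `(disturbance + action) mod n`, and we search every possible regulator policy: a regulator with as many actions as there are disturbance states can force a single outcome, while one with half the variety cannot do better than two outcomes, exactly the ratio the law predicts (4 disturbance states / 2 actions = 2 residual outcome states).

```python
from itertools import product

def best_regulator(n_disturbances, actions):
    """Search every policy (one action per disturbance state) and return
    the smallest number of distinct outcomes any policy can achieve.
    Outcome model: (disturbance + action) % n_disturbances."""
    disturbances = range(n_disturbances)
    best = n_disturbances
    for policy in product(actions, repeat=n_disturbances):
        outcomes = {(d + a) % n_disturbances
                    for d, a in zip(disturbances, policy)}
        best = min(best, len(outcomes))
    return best

full_variety = best_regulator(4, actions=[0, 1, 2, 3])  # variety matches
low_variety = best_regulator(4, actions=[0, 1])         # half the variety
print(full_variety, low_variety)  # 1 2
```

No cleverness in policy design can compensate for missing variety; the only ways to do better are to add control options or to reduce the variety of disturbances reaching the system.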
By definition, something is ethical if it (1) relates to ethics (ethical theories), (2) involves or expresses moral approval or disapproval (ethical judgments), or (3) conforms to accepted standards of conduct (ethical behavior). According to Mick Ashby, a regulator can be considered ethical if it meets nine requisite characteristics, six of which are necessary only for the regulator to be effective. An ethical regulator must have:

1. Truth about the past and present.
2. Variety of possible actions (greater than or equal to the number of states of the system under regulation).
3. Predictability of the future effects of actions.
4. Purpose expressed as unambiguously prioritized goals.
5. Ethics expressed as unambiguously prioritized rules.
6. Intelligence to choose the best actions.
7. Influence on the system being regulated.
8. Integrity of all subsystems.
9. Transparency of ethical behaviour (including retrospectively).

The challenges in building such a system are many. Three of the characteristics (Ethics, Integrity, and Transparency) are requisites specifically for a regulator to be ethical, and interestingly, these are the areas with the greatest hurdles to overcome:

It is not yet possible to build ethical subroutines in which goals are unambiguously prioritized.

Transparency of ethical behaviour is not possible when the rules are not visible or cannot be discovered. This is very much the case with current machine learning and artificial intelligence systems, where we often do not know what the rules are or how they work.

Systems do not have sufficient integrity to protect against tampering and the other ways they can be manipulated to produce undesired outcomes.

We can conclude that current limitations prohibit building systems that incorporate the necessary characteristics for the regulation function to be ethical, as measured against the Ethical Regulator Theorem.
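The nine requisites can be expressed as a small checklist. The scoring below is my own sketch of the effective-versus-ethical distinction, not part of Ashby's formal treatment:

```python
# Sketch: classify a regulator by which of the nine requisites it meets.
# The grouping follows the list above: six requisites make a regulator
# effective; Ethics, Integrity, and Transparency make it ethical.

EFFECTIVENESS = {"truth", "variety", "predictability",
                 "purpose", "intelligence", "influence"}
ETHICS = {"ethics", "integrity", "transparency"}

def assess(traits):
    effective = EFFECTIVENESS <= traits       # all six effectiveness requisites
    ethical = effective and ETHICS <= traits  # plus the three ethical requisites
    return "ethical" if ethical else ("effective" if effective else "neither")

print(assess(EFFECTIVENESS))           # effective, but not ethical
print(assess(EFFECTIVENESS | ETHICS))  # all nine requisites: ethical
```

The checklist makes one structural point visible: ethics is defined on top of effectiveness, so a regulator cannot be ethical without first being effective.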
Before we look at how these limitations can be addressed, there is another law that companies must understand in order to have systems that are ethical.

The Law of Inevitable Ethical Inadequacy

This law is simply stated: "If you don't specify that you require a secure ethical system, what you get is an insecure unethical system." Unless the system specifies ethical goals, it will regulate away from being ethical and towards the other goals you have targeted. You can replace the word "ethical" with "safety", "quality", or "environmental", which are more concrete examples of ethics-based programs that govern an organization. If they are not part of the value creation system, then according to this law the system will always optimize away from the "quality", "safety", or "environmental" goals. This may help explain the tensions that always exist between production and safety, or production and quality, and so on. When productivity is the only goal, the production system will regulate towards that goal at the expense of all others. Perhaps this provides a form of proof that compliance cannot be a separate objective overlaid on top of production systems and processes. We know that quality must be designed in, and we can conclude that the same applies to all compliance goals.

Definition of Ethical Compliance

As previously mentioned, governance at a basic level includes the system under regulation, the regulator, and the controller. We also stated that compliance performs the role of regulation, steering a system towards meeting its compliance obligations. When these obligations incorporate such things as quality, safety, and professional conduct, we add an ethical dimension to the compliance function.
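The Law of Inevitable Ethical Inadequacy described above can be illustrated with a toy optimizer, entirely my own construction: when only production is specified as a goal, the chosen option drifts away from quality.

```python
# Toy illustration: an optimizer given only a production goal will
# "regulate away" from quality, because quality was never specified.

options = [
    {"name": "careful", "production": 80,  "quality": 95},
    {"name": "rushed",  "production": 100, "quality": 60},
]

# Goal 1: productivity is the only target.
production_only = max(options, key=lambda o: o["production"])

# Goal 2: quality is designed in as a constraint, then production.
quality_first = max(options, key=lambda o: (o["quality"] >= 90, o["production"]))

print(production_only["name"])  # "rushed": quality is optimized away
print(quality_first["name"])    # "careful": quality was specified, so it survives
```

Nothing about the "rushed" option is malicious; it simply scores best on the only goal the system was told to pursue, which is the law's point.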
Based on the laws of cybernetics and the limitations discussed above, we can now define "Ethical Compliance" as:

Ethical Compliance = Ethical System + Ethical Controller + Effective Regulator

1. The system under regulation must be ethical (i.e. it must incorporate quality, safety, and other compliance goals) – Law of Inevitable Ethical Inadequacy
2. The regulator must be a good regulator (i.e. it must be a model of the system under regulation) – Good Regulator Theorem
3. The regulator must be effective (i.e. it must meet at least the six characteristics of the ethical regulator that make it effective) – Ethical Regulator Theorem
4. The controller must be human and ethical (as the regulator cannot be) – Ethical Regulator Theorem
5. The controller must be human and accountable (i.e. transparent, answerable, and acting with integrity) – Ethical Regulator Theorem, and regulatory statutes and law

The last requirement is ultimately what makes compliance ethical, and more than just codified values and controls. Taking responsibility and answering for our decisions is imperative for any ethical system. Machines are not accountable, nor do they take responsibility for what they do. Humans are, and must continue to be.

References:

1. Ethical Regulators - http://ashby.de/Ethical%20Regulators.pdf
2. Good Regulators - http://pespmc1.vub.ac.be/books/Conant_Ashby.pdf
3. Law of Requisite Variety - http://pespmc1.vub.ac.be/REQVAR.html
4. Requisite Organization and Requisite Variety, Christopher Lambert - https://vimeo.com/76660223
