
  • Demo-first Approach to Selecting Compliance Software

    When it comes to selecting commercial-off-the-shelf (COTS) compliance software, there was a time when this involved a structured process based on a requirements-first approach. This has now largely been replaced with a demo-first approach, encouraged by cloud vendors as well as by the buyers themselves. Instead of a bake-off against requirements, software is now chosen based on how well it demos and looks. Does this approach result in better outcomes? Let's find out.

Requirements-first Approach: A requirements-first approach typically includes the following steps:

- Request for Information (RFI) – survey the market to identify candidate vendors and solutions
- Request for Proposal (RFP) – request written responses to requirements from the identified long list of candidate vendors
- Short Listing – create a short list based on the selection criteria rubric
- Request for Quotation (RFQ) – obtain firm and final pricing from the short-listed vendors
- Live Test Demonstration (LTD) – verify that the short-listed vendors actually meet the stated requirements by following a scripted walk-through
- Select Apparent Winning Offeror – select the best alternative based on vendor performance, fit for purpose, and technical requirements
- Pilot System – validate that the solution can achieve the intended outcomes as well as the verified technical requirements

The purpose of following these steps is to manage the risk inherent in selecting a solution that best fits the scope, budget, and requirements. It also creates a level playing field, keeping everyone honest on both sides of the table. The following data is a compilation across 20 projects that followed a requirements-first approach:

* Waterfall = Gated, Structured Approach
* Hybrid = Gated, Agile Approach

Key lessons learned from these projects include: System scope was the major influence in determining overall procurement cycle time.
However, there is only an incremental increase (8 versus 12 months) when comparing departmental versus platform solutions. The overall duration was largely determined by vendor and buyer schedules. A waterfall approach using approval gates was preferred as project scope grew. In addition, 90% of projects did not purchase their first choice, for reasons that included:

- Failed live test demonstration (LTD) – RFP responses were good but based on software that was not yet available or didn't withstand the scrutiny of actual use
- Failed pilot system due to poorly understood or specified requirements
- Requirements changed during the procurement process

It is worth stating that each project completed successfully even though it was not with the first choice of vendor. Having a second choice proved to be a significant factor in mitigating the uncertainties experienced during the procurement process.

Demo-first Approach: These days it seems that many companies jump right to requesting a demonstration of software without first understanding what it is that they need. While this may prove successful for some applications, when it comes to critical compliance solutions at the scale of the enterprise it can lead to decisions that are less than optimal, waste valuable time and resources, and possibly expose companies to unnecessary risk.
Companies who have used the demo-first approach have noted that these projects tend to produce the following:

- Scope creep – everyone wants all the capabilities that they see demonstrated
- Difficulty in making an apples-to-apples comparison of the alternatives
- Cost overruns due to unplanned integration, customization, and data migration
- Schedule overruns leading to late ROI and, in many cases, unrealized benefits
- Solutions that only meet rudimentary requirements and are not capable of meeting the full demands of the organization
- Loss of data and information due to insufficient planning and resourcing for data cleansing and migration activities

In addition, projects still end up taking the same amount of time to procure a solution as with a requirements-first approach, but in the case of a demo-first approach they tend not to follow a risk-based process. This makes them vulnerable to uncertainties that the RFP, LTD, and Pilot steps would have discovered. Companies have also noticed an increased tendency to choose software that demoed the best, had the most capabilities, had the lowest initial cost, or was used at the last company that someone worked at. In other words, without a set of requirements there was no basis on which to make an effective comparison grounded in actual and anticipated need. It would be reasonable to ask why companies would choose a less rigorous process for selecting compliance solutions. Here are some of the reasons given:

- Our current system doesn't work and we need something else, but we don't know what that looks like
- I don't know what I need, so looking at software helps me figure that out
- All I want is something that is user friendly; I expect the vendor to know what my requirements are
- This is off-the-shelf software, so why do I need to write down any requirements? Don't they all do the same thing?
- I am just looking to replace what I currently have, so those are my requirements
- We are looking at cloud-based software and the subscription costs don't warrant a large project
- Our business analysts used to do that, but we don't have those roles anymore
- I don't have the time to go through a structured process
- We are following an agile approach, which means we don't need to figure out what our requirements are right now
- Even if the software doesn't work, we can replace it easily because it's all in the cloud

As more organizations move their systems to the cloud, it is expected that the use of a demo-first approach will increase. Of course, each company will have different levels of success; however, the probability of success can still be improved by effectively managing uncertainties, specifically with respect to scope.

Risk-Based Approach: Acquiring software to support critical compliance processes still requires that risks be properly addressed. The most significant source of risk hasn't changed: it is still scope creep, or scope gallop as is often the case. Managing scope is essential to every project, and this applies to choosing compliance software. Software demonstrations can be an effective way to learn about what is available in the marketplace, and in many ways they have replaced the use of RFIs. However, demos do not replace the need to specify what the software must do or the need to manage risk. Requirements may not be as detailed as they once were and may take a form such as user stories. At the same time, they still must be sufficient to cover what the software contractually needs to deliver and how it needs to perform in order to achieve the desired outcomes. It is always good to remember that you are not the product; the software is. In addition, as previously noted, it is a good strategy to always have a second choice, because your first choice is likely not the one that will achieve the desired outcomes.
Whether you follow a demo-first or a requirements-first approach, you still need to get answers to the same set of questions. The timing of when you get these answers will significantly influence the success of your project. If you wait until after you purchase the software, you will need to deal with the effects of not knowing, or what is called "epistemic uncertainty." The risk of not knowing can and often does lead to failed projects that in many cases double the cost, since the project has to be done over again. Here is a list of items that some companies chose not to know in advance:

- The importance of integration with other systems, consequently neglected during the procurement phase
- The value associated with legacy data, leading to no budget for data migration
- The loss of control over how processes are implemented, resulting in using the vendor's generic workflows
- The impact of using generic approaches that fell short of the company's higher standards
- How an on-demand pricing model would be affected by a fixed operating budget
- How the software is going to be transitioned and rolled out

All of these could have been known in advance and addressed using a requirements-first, risk-based approach. Here is a list of things that you should know when selecting compliance technology:

1. What defines success? What are the intended outcomes for the system? What does done look like? How do you measure progress towards done? What steps are critical to achieving done? What risks that hinder achieving done need to be addressed? What opportunities should be pursued to increase the likelihood of getting to done?

2. What is the purpose of the software purchase? Technology replacement? Architecture alignment? Process improvement? Improved compliance? New capabilities? Increase or decrease in scale or complexity? Cost reduction? Introduction of best practice?
Point solution, or platform to support multiple solutions?

3. What are all the requirements for the expected use of the software? System, application, process, and other functional requirements? Compliance, security, data, privacy, and sovereignty requirements? Platform, network, communication, and other technical requirements? Performance and reliability requirements? Customization and integration requirements? Implementation, sustaining, and end-of-life requirements? Backup and recovery requirements?

4. What strategies will be used to introduce and sustain the use of the software? Lift and Shift – improve processes first, then shift? Shift and Lift – shift to the new software first and then improve processes? All users at once or a phased roll-out? All modules at once or a phased roll-out? Distributed or centralized support? Business owners or IT support?

5. What are the impacts and risks associated with the choice of software, implementation strategies, and sustaining activities on the business? What gaps in requirements need to be addressed by customization, work-arounds, or additional software? What is the total cost and budget needed to sustain and use this software over its anticipated lifetime? How is compliance maintained during and after the implementation? How will changes to the software or configurations be managed and validated? What actions are needed to address uncertainty in capabilities, cost, user acceptance, ability to meet compliance obligations, and so on? Who owns the data, and will the data be monetized by the vendor? How and when will breaches in service be communicated? What is your exit strategy, and when will it be triggered should you need to revert to your second choice?
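
A requirements-first comparison ultimately needs a scoring rubric. As a minimal sketch of what that can look like, the following Python snippet computes weighted scores for two short-listed vendors; the criteria, weights, and vendor scores are hypothetical, not drawn from the projects described above.

```python
# Hypothetical weighted scoring rubric for comparing short-listed vendors.
# Criteria, weights, and scores are illustrative only; a real rubric would
# be derived from the documented requirements and LTD results.

CRITERIA = {                         # weights must sum to 1.0
    "functional_fit": 0.40,
    "technical_fit": 0.25,
    "total_cost_of_ownership": 0.20,
    "vendor_viability": 0.15,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10) into a single weighted score."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

vendors = {
    "Vendor A": {"functional_fit": 8, "technical_fit": 7,
                 "total_cost_of_ownership": 5, "vendor_viability": 9},
    "Vendor B": {"functional_fit": 6, "technical_fit": 8,
                 "total_cost_of_ownership": 8, "vendor_viability": 7},
}

# Rank vendors; keep the runner-up as the second choice the article recommends.
ranked = sorted(vendors, key=lambda v: weighted_score(vendors[v]), reverse=True)
first_choice, second_choice = ranked[0], ranked[1]
```

Because the weights and scores trace back to written requirements, the ranking is defensible, and the runner-up is already identified if the first choice fails the LTD or pilot.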

  • Digital Transformation - Exploiting the Power of Digital Technology

    Digital Transformation

Over the last several decades companies have invested in paper-on-glass solutions as part of their digital progression. However, only a few companies have changed their processes to exploit the power of their digital technology. Dr. Goldratt, developer of the Theory of Constraints, speaks to this issue directly: "Technology can bring benefit if, and only if, it diminishes a limitation. Long before the availability of technology, we developed modes of behavior (policies, measurements and rules) to help us accommodate our limitations. But what benefits will any technology bring if we neglect to change the rules?" To achieve the benefits from technology, Dr. Goldratt suggests answering the following questions:

- What is the power of the technology?
- What limitation does the technology diminish?
- What rules enabled us to manage this limitation?
- What new rules will we need?

The answer to the last question is most critical. To increase your return on investment from digital transformation you must change the way you currently do things. To do otherwise will limit your benefits to efficiency at the expense of improving effectiveness. As an example, converting paper forms to electronic forms and routing them around electronically may improve overall process time but will not achieve the benefits available using the power of the new technology. One of the limitations of paper-based systems was their inability to use data to adapt the process to contend with risk. This often manifested itself in complicated processes designed to accommodate every situation, along with multiple layers of approvals. However, using digital technology, it is possible to adapt work processes and incorporate the appropriate level of approvals, based on collected information, to contend with different levels of risk.
Risk-based Process: By removing the limitation of static workflows, companies can benefit from adaptive work processes, resulting in not only greater efficiency but also increased effectiveness at contending with uncertainty.
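
To make the idea concrete, here is a minimal sketch of a risk-adaptive approval workflow, where the number of approval layers is derived from assessed risk rather than fixed for every request. The scoring thresholds and approver roles are illustrative assumptions only; a real implementation would derive them from the organization's own risk criteria.

```python
# Minimal sketch of a risk-adaptive approval workflow. Instead of a static
# workflow with the same approvals for everything, the approval chain is
# selected from collected information (impact and likelihood).
# Thresholds and roles below are hypothetical.

APPROVAL_CHAINS = {
    "low": ["supervisor"],
    "medium": ["supervisor", "department_manager"],
    "high": ["supervisor", "department_manager", "compliance_officer"],
}

def assess_risk(impact: int, likelihood: int) -> str:
    """Map a simple impact x likelihood score (each rated 1-5) to a risk level."""
    score = impact * likelihood
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

def approvals_required(impact: int, likelihood: int) -> list:
    """Return the approval chain appropriate to the assessed risk."""
    return APPROVAL_CHAINS[assess_risk(impact, likelihood)]
```

A routine, low-risk request flows through a single approver, while a high-impact, high-likelihood request picks up the full chain, replacing one complicated static process with a process that adapts to risk.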

  • Traditional versus Operational Approach to Compliance

    Compliance is the outcome of meeting obligations, which requires compliance to be operational. Compliance operability is achieved when essential functions, behaviours, and interactions exist at levels sufficient to produce a measure of effectiveness – this defines Minimum Viable Compliance (MVC). Traditional approaches never reach MVC until the very end, which is too slow and often too late to protect value creation and stay ahead of risk. The good news is there is a better way to do compliance that delivers benefits sooner, with greater certainty, and with less waste. This approach is based on the Lean Startup model by Eric Ries, which we have adapted to the compliance domain as shown in the following diagram: Traditional versus Operational Approach to Compliance. The traditional approach is based on implementing components, or the parts of the compliance function, starting at the bottom and advancing in capability and maturity until the last phase is reached. This is when effectiveness happens, as measured against realized outcomes. This is also when effectiveness can start to improve over time. The operational approach is based on first achieving operability, which is the minimum level of capability for creating outcomes – a measure of effectiveness. Advancement in capability and maturity happens across all functions, behaviours, and interactions, always tied to realizing higher levels of effectiveness. This provides the maximum amount of learning with the minimum amount of cost, creating less waste while delivering benefits sooner. The operational approach has improved the development of products and services, particularly when contending with uncertainty and when achieving outcomes is important. This is the case for all organizations under performance- and outcome-based regulation.

  • Assurance is an OUTCOME not an ACTIVITY

    Assurance is not an activity that compliance does or something that can be inspected into a business. It is an outcome that is created when stakeholders have confidence that an organization is meeting all its obligations today and will continue to meet them in the future. This confidence is necessary for assurance and ultimately for trust to exist. Assurance is an OUTCOME not an ACTIVITY. That's why confidence levels are an important measure of success for all risk & compliance programs. Improving the level of confidence is therefore an important objective, which often involves conducting audits to verify process outputs and validate program outcomes. However, conformance to procedures and processes, as important as that may be, is not enough to provide the necessary confidence for trust to be granted. Confidence is increased when companies take steps to make certain that promises are kept. This has more to do with improving the probability that the organization is heading in the right direction, operating between the lines, and making progress towards its mission objectives. The best way this is demonstrated is by having an operational compliance program to properly contend with obligation and operational risk. An effective compliance program will ensure that the required capabilities and performance exist to meet all obligations today and in the future. These capabilities will include resiliency, sustainability, quality, safety, diversity, or any of the abilities that contend with the risks that matter to the organization. Measuring the effectiveness of these capabilities is not something that traditional audit or assurance functions have done. However, this is what is now required to provide confidence that the business has a future. To improve the outcome of assurance, the following questions need to be answered: What is the level of confidence that your organization will meet all of its obligations?
What capabilities do you need to ensure that you will meet your obligations in the future? What measures can you take to make certain you can keep all your promises? What resources do you need to provide the necessary capabilities and measures? How will you evaluate your progress towards greater levels of assurance?

  • AI Assistants - Threat or Opportunity?

    AI Assistants - Blessing or Curse?

The rise of Generative AI has taken the world by storm, and AI assistants are popping up all over the place, providing a new way for people to approach their work. These assistants automate repetitive and time-consuming tasks, enabling individuals to focus on more complex and creative work. However, for some, it is not just an improvement in productivity; they question whether the use of AI assistants may lead to them losing their jobs. For those starting to use AI assistants, they are indeed a blessing, providing much-needed relief for overworked employees. The improved productivity is creating needed capacity and some extra space in already full workloads. However, this is expected to be short-lived as these benefits become normalized and expected. The buffer we now experience will be consumed and used for something – the question is what? No wonder there is a fear that the widespread use of AI assistants may lead to significant job reductions. Some jobs will become redundant, while others will be expected to double their workloads. For instance, if someone used to write ten articles a week, they may now be expected to do twenty using AI assistants. So, where is the real gain for the organization apart from fewer people and perhaps marginal cost reductions? Is this the same story of bottom-line rather than top-line thinking?

How To Use AI Assistants To Achieve Better Outcomes

The key to realizing the transformational benefits of AI lies in adapting businesses to fully exploit the capabilities of these tools, without exploiting the people impacted by the technology. Dr. Eliyahu Goldratt (father of the Theory of Constraints) believed that technology can only bring benefits if it diminishes a limitation. Therefore, organizations must ask critical questions to exploit the power of AI technology: What is the power of the new technology? What limitation does the technology diminish? What rules enabled us to manage this limitation?
And most importantly, what new rules will we now need? Keeping the old rules that we had before the new technology limits the benefits we can realize. It is removing the old rules and adopting new ones that creates transformational benefits. By providing credible answers to these questions, organizations can achieve a return on investment that is both efficient and effective, enabling their employees to focus on higher-level tasks and achieve more significant outcomes – higher returns, not just lower costs. This will enable companies to move beyond the short-lived relief of AI and realize its true potential as a transformational tool.

Which Path Will You Take?

The use of AI will be a threat for some but an opportunity for others. If history repeats itself, many organizations will adopt AI assistants, realize the efficiency gains, and pat themselves on the back for a short-term win. However, as these benefits become normalized, they will soon be back where they began. Any gains they might have realized will be lost, and they will be left doing more with less, except now with their new AI assistant. On the other hand, there will be others who asked the right questions, changed existing processes, and created new rules that enable them to reap the full benefits of AI technology. They will realize compounding benefits that accrue over time. What the future holds will depend on which path you take and your willingness to take a longer-term perspective focused on improving outcomes rather than just reducing costs. Which path will you take?

  • Measures without Measures is a Waste

    When it comes to risk & compliance it is important to identify, collect, and monitor data of all kinds. However, what data should be collected, and which is most useful? To answer this it is helpful to consider two principal meanings behind the word measure:

- Measurement – estimate or assess the extent, quality, value, or effect of something
- Method – a plan or course of action taken to achieve a particular purpose

The first meaning uses the word measure to refer to measurements, usually tied to values and most often the counting of things: How many injuries did we have this year? How many complaints did we receive? What was the amount of greenhouse gas emissions this year? These are the easiest to capture and are useful for providing the status or condition of a particular risk or compliance system. The second meaning of measure refers to a plan or course of action to achieve an effect or result. These measures, or you could say methods, take the form of controls to achieve specific risk & compliance objectives. W. Edwards Deming reminds us that "a goal without a method is nonsense." Similarly, for risk & compliance, methods without measurements are also nonsense. While it is essential to know the status of a risk & compliance system, it is also important to know the effectiveness of the measures that are keeping an organization operating between the lines and within a specified level of risk. These are most useful when assessing the performance of a risk & compliance program. Measuring the effectiveness of risk & compliance controls (i.e. measures) will help to identify whether the underlying systems are capable of keeping an organization in compliance today and in the future. Measures of effectiveness and performance are some of the best predictors of organizational resiliency. Unfortunately, many organizations do not measure the effectiveness of their risk & compliance controls.
Work is done, but without the assurance that this work will produce the desired effect or result. These companies have measures without measures, which is waste. To reduce this waste, the first step is to evaluate the effectiveness of the most critical risk & compliance controls. Effectiveness will be connected with progress towards targeted outcomes and objectives. Identifying which controls are effective will form the basis for determining which should be eliminated or improved.
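
As a minimal illustration of measuring a measure, the following sketch expresses the effectiveness of a control as the fraction of its planned improvement actually realized. The control, baseline, and target values are hypothetical and serve only to show the calculation.

```python
# Illustrative sketch: the effectiveness of a control (a "measure" in the
# sense of a method) expressed as progress toward its targeted outcome.
# Names, baselines, and targets below are hypothetical.

def effectiveness(baseline: float, current: float, target: float) -> float:
    """Fraction of the planned improvement actually realized (0.0 to 1.0+)."""
    planned = baseline - target    # e.g. incidents we set out to eliminate
    realized = baseline - current  # incidents actually eliminated so far
    if planned == 0:
        return 1.0 if realized >= 0 else 0.0
    return realized / planned

# A method without a measurement would stop at "inspections were performed."
# Here, monthly safety inspections are intended to cut recordable incidents
# from 12/yr (baseline) to 4/yr (target); the current rate is 8/yr.
progress = effectiveness(baseline=12, current=8, target=4)  # 0.5: halfway there
```

A control stuck near zero on this scale is doing work without producing the desired effect, which is exactly the waste described above and a candidate for elimination or improvement.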

  • The Taxonomy of an Obligation

    When it comes to improving compliance it is important to know not only what your obligations are but also how each obligation has been designed to perform the regulatory function. Knowing this will help organizations better understand what is needed to meet their obligations, including:

- The level of compliance rigour required
- The level of support needed from leadership and management
- Controls that may need to be established
- Who is accountable for which part (self, industry, or government)
- How best to improve compliance
- What level of investment to make
- What is at stake and the level of risk

All of which, among other things, are derived from the obligation design.

Four Obligation Designs: There are four common ways that obligations are architected to regulate aspects of quality, safety, environmental and legal concerns. These can be described across the dimensions of micro-macro and means-ends parameters:

- Prescriptive-based (micro/means) – rules that, if followed, will reduce risk
- Management-based (macro/means) – processes that must be followed to manage obligations and risk
- Performance-based (micro/ends) – specific performance levels that must be achieved
- Outcome-based (macro/ends) – targeted outcomes that must be advanced

Obligation Taxonomy: Each compliance design approach will in turn create different demands on an organization, which can be discovered by considering where the regulation function is being applied to the structure of the obligation. Outcome-based regulations specify the ends, or the outcomes, and not the means. The onus is on organizations and industry to determine the means, the performance criteria, and the rules that should be followed. This is an example of self-regulation, where leadership is essential at all levels to advance outcomes.
Performance-based regulations specify the level of performance needed to achieve the desired outcomes but not the means or the rules that should be followed. This is common with industry programs to achieve zero fatalities, emissions, incidents, breaches, and so on. Continual improvement is necessary to advance the desired outcome. In this case, industry associations act as the regulator and take on some of the leadership responsibilities. Prescriptive-based designs specify the details and do not specify performance or outcomes, just the rules to follow. This is the primary form of government regulation, where the government takes on responsibility for achieving the desired outcomes. Organizations are expected to conform to the rules. Leadership is still important, but perhaps less so or in a different way. Following rules requires a culture of conformance rather than a culture of improvement and proactivity. Management-based designs, like ISO 14000 and ISO 19600, focus more generally on the processes by which you manage obligations. What is being regulated are the management processes, not necessarily performance or outcomes. This makes management standards applicable to all forms of regulatory design, with the caveat that this only works when organizations incorporate performance and outcome standards alongside their management systems. Leadership is essential at the program level to ensure that effectiveness is not lost in the pursuit of consistency and efficiency. Regulatory bodies and standards organizations may elect to use a combination of the four regulatory designs based on the nature of the risks they are attempting to ameliorate through regulation. Compliance analysts should be aware of this when they identify obligations and evaluate compliance risk. Obligation registers should include this information to help inform the actions for effective compliance. Related Posts: https://www.leancompliance.ca/post/an-objective-view-of-obligations
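
The micro-macro and means-ends dimensions lend themselves to a simple data model. The following sketch shows how an obligation register entry might record the regulatory design so it can inform compliance actions; the field names and the example entry are hypothetical, not a published schema.

```python
# Sketch of the four regulatory design approaches as a micro/macro x
# means/ends taxonomy, plus a hypothetical obligation-register entry
# that derives its design from those two dimensions.

from dataclasses import dataclass

DESIGNS = {
    ("micro", "means"): "prescriptive-based",  # rules to follow
    ("macro", "means"): "management-based",    # processes to manage obligations
    ("micro", "ends"): "performance-based",    # performance levels to achieve
    ("macro", "ends"): "outcome-based",        # outcomes to advance
}

@dataclass
class Obligation:
    obligation_id: str
    source: str   # regulation, standard, or voluntary commitment
    scope: str    # "micro" or "macro"
    focus: str    # "means" or "ends"

    @property
    def design(self) -> str:
        """Look up the regulatory design from the two taxonomy dimensions."""
        return DESIGNS[(self.scope, self.focus)]

# Example: an emissions-reduction commitment regulates ends at the macro level.
ob = Obligation("OB-001", "GHG reduction commitment", "macro", "ends")
```

With the design recorded alongside each obligation, an analyst can filter the register by design to decide where a culture of conformance suffices and where leadership and continual improvement are required.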

  • The Compliance Case for Sovereign AI Data Centres in Canada

    Canada's sovereign AI infrastructure is being built right now. Federal investment is flowing into domestic compute capacity. New privacy legislation is imminent. Environmental scrutiny of AI energy consumption is intensifying. AI governance frameworks are formalizing. And the compliance obligations facing data centre operators span seven distinct domains — each evolving independently, many of them overlapping in what they demand from the same operational activities. The organizations that build compliance capability into their operations from the start will have a structural advantage over those that try to retrofit parallel systems after the fact. I've prepared an executive briefing for Chief Compliance & Risk Officers and senior leaders responsible for data centre compliance and operational governance. It maps the full regulatory landscape and outlines a proven approach to managing it without the overhead of parallel compliance programs. The briefing is attached.

  • Taking Ownership: The First Step to Operational Compliance

    For decades, compliance has been one of the most reactive functions in the enterprise—more reactive than finance, operations, or even IT. While there are reasons why this is the case, this excessive reactivity has created a mission-critical gap: a dangerous vacuum where managerial accountability should exist but has been replaced with busywork. The Abdication Problem Managers, for the most part, have quietly abdicated their compliance responsibilities. They've handed them off to third-party consultants, delegated them to understaffed compliance departments, or worst of all, outsourced their thinking entirely to external auditors. When audit findings arrive (although not the only measure of effectiveness), these same managers treat them as someone else's problem to fix rather than their failure to prevent. This abdication means obligations go unowned. And unowned obligations don't get fulfilled—they get tracked, reported on, and documented, but not actually fulfilled. The organization drifts outside the lines, remains blind to emerging risks, and loses sight of its mission while everyone points to procedures that nobody truly owns. Why "Be Proactive" Doesn't Work The obvious answer seems to be: stop being reactive and start being proactive. Get ahead of issues. Anticipate problems. Be forward-thinking. If only it were that simple. Telling a reactive organization to become proactive is like telling someone who can't swim to simply start swimming better. The problem isn't their technique—it's that they haven't learned to stay afloat. You cannot be genuinely proactive about obligations you don't actually own. Ownership Comes First The path forward begins with a foundational shift: organizations must take ownership of their obligations and the risks those obligations address. Not delegated ownership. Not documented ownership. Real ownership—where specific people accept responsibility for ensuring specific promises are kept and specific hazards are controlled. 
This means: Managers understanding their obligations as personal commitments, not corporate procedures Leaders recognizing that compliance risk is operational risk, not a separate concern Executives accepting that audit findings represent their management failures, not their auditors' discoveries What AI Cannot Do And if you thought AI can help you with this, you will be left wanting. Here's the thing: AI cannot take ownership of your obligations. It can't even take ownership of its own outputs. AI might be able to analyze some of your compliance gaps, generate your procedures, monitor your controls, and flag your risks—assuming you even have a complete set of those. It can make compliance activities faster, cheaper, and more efficient. But it cannot look your stakeholders in the eye and promise them anything. It cannot accept accountability when things go wrong. It cannot decide what matters and what doesn't. Ownership is an irreducibly human act. It requires judgment, commitment, and the willingness to be held responsible. These aren't features that can be automated or algorithmic capabilities that can be trained. They're moral choices that only people can make. Organizations rushing to deploy AI for compliance are often doing so precisely to avoid ownership—creating yet another layer of delegation, another place to deflect accountability. "The system didn't flag it" becomes the new "the auditor didn't catch it." Until Ownership, Nothing Changes Without this ownership foundation, compliance will remain exactly as it is: reactive, fragmented, and procedural. It won't improve. It won't integrate into operations. It won't create value. Organizations will continue generating documentation that nobody reads, attending training nobody remembers, and responding to findings nobody prevents. They'll add AI tools to the stack, automate the busywork, and still fail to keep their promises because nobody has actually accepted responsibility for keeping them. 
The transformation to operational compliance—where obligations become capabilities and compliance creates value—cannot begin until someone looks at the organization's promises and risks and says: "These are mine. I own them." Everything else follows from that moment. Nothing meaningful happens before it. And no technology, no matter how intelligent, can say those words for you.

  • First Principles of Design: Necessary Variation

    If you work in quality or lean, you have been trained to treat variation as the enemy. Deming, Taguchi, Six Sigma — the entire discipline is built on reducing, controlling, and eliminating variation. And that discipline is not wrong. But it is incomplete.

    Without variation, you cannot have two of anything. If no variation were permitted — if every instance of a thing had to be absolutely identical in every respect — production would be impossible. Every piece of raw material is slightly different. Every cut, every weld, every assembly happens under slightly different conditions. Variation is not a defect in the manufacturing process. It is the precondition for manufacturing to exist at all. It is what makes multiplicity — multiple instances of the same thing — possible.

    The question was never whether to have variation. The question is which variation is necessary and which is not. And answering that question requires something that comes before any control chart or process capability study: you have to decide what the thing *is* — and what it is not.

    Identity: Deciding to Build This and Not That

    Before a single sketch is drawn, someone decides that the world needs a hammer and not a spoon. This is an ontological commitment — a decision about what will exist and what won't. It establishes the boundary between what you are building and what you are not building.

    That commitment carries a second obligation: defining what is essential for this thing to be this thing. A hammer requires a handle, a weighted head, a striking surface. Remove any of these and you no longer have a hammer. You have a stick, a paperweight, something else entirely. These are the characteristics without which the thing ceases to be what it was committed to be. Everything that follows depends on these choices.
    Multiplicity: Designing What This Is and What This Is Not

    With the essentials established, the engineer faces a design decision: what must be allowed to differ so that you can build more than one? You cannot use the same piece of steel twice. You cannot use the same piece of wood twice. Every unit requires its own instance of material, its own act of assembly, its own moment in time — and no two instances are identical.

    Head weight within a given range, handle length within a given tolerance, surface finish within acceptable limits. These are not concessions to imperfect manufacturing. They are what makes multiplicity possible. Without designed variation, you can build one thing on paper. You cannot produce it in the world. Without both — without defining the identity and the acceptable variation — you cannot produce a single unit, let alone a thousand.

    The Rub

    Here is where engineering demands expertise.

    Specify too precisely and you cannot build the thing. Real materials vary. Real processes drift. Real conditions fluctuate. If every tolerance is pushed to its theoretical limit, you have designed something that can only exist on paper — the variation inherent in parts, materials, and assembly will exceed what the specification allows. You will reject everything. You will build nothing.

    Specify too loosely and you build things that are not the thing. Units come off the line that technically pass inspection but fail in the field. You have non-conformances that you cannot call non-conformances, because the specification never drew the line clearly enough to say what conforms and what does not.

    The engineer's expertise lives in this tension: defining identity tightly enough that the thing remains itself, and defining variation broadly enough that it can actually be made. Every tolerance, every acceptance criterion, every specification range is a negotiation between the ideal and the achievable. Get it wrong in either direction and you lose.
    Over-constrain and production stops. Under-constrain and quality disappears.

    Why This Matters for Compliance

    In a previous post — Compliance and the Problem of Evil — I argued that every compliance failure is an absence: the privation of a good that ought to be present. But I left a question hanging: where does that positive definition come from?

    The design. The design is the positive definition. It declares both what something is and what it is not — the identity and the acceptable variation, the boundaries within which a thing remains itself and beyond which it becomes something else. Without both declarations, the concepts of defect, failure, and non-compliance have no anchor. A defect is not "something that looks wrong." It is variation outside the boundaries the design established. A safety failure is not "something bad happened." It is the absence of a capability the design required to be present.

    This is the bridge between engineering and compliance. The engineer designs the good — the identity *and* the necessary variation — and compliance is the discipline of sustaining both through production, operation, and change. Quality, safety, security, sustainability — each is a dimension of that design, a promise about what the thing will be, what it will not be, and what it will continue to be.

    No design, no identity. No identity, no boundaries. No boundaries, no way to tell necessary variation from unwanted variation — just randomness wearing a label.

    First Principles

    Engineering is about building things. But building always starts with a design — the act of defining what something is and what it is not, what must remain the same and what must be allowed to differ. This is what makes it possible to know what is a defect and what is not. What is safe and what is not. What is secure and what is not. What is compliant and what is not. Without the design — without defined identity and defined variation — none of these judgments have a foundation.
They are opinions, not assessments. The first principle of design is knowing which variation to control and which to permit. Get that right, and every downstream judgment — quality, safety, security, sustainability — has a basis. Get it wrong, and you are either unable to build or unable to know what you have built. When it comes to design, you have to do more than decide between this and that. You have to decide what this is — and what it is not.
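The identity-plus-variation idea above can be made concrete in a few lines of code. This is a minimal sketch, not a real inspection system: the hammer characteristics, nominal values, and tolerance bounds are all hypothetical, chosen only to illustrate that a defect is defined as variation outside the boundaries the design declared.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Spec:
    """One designed characteristic: its identity (name, nominal value)
    and its necessary variation (the acceptance boundaries)."""
    name: str
    nominal: float
    lower: float
    upper: float

    def conforms(self, measured: float) -> bool:
        # A defect is variation outside the boundaries the design established.
        return self.lower <= measured <= self.upper

# Hypothetical hammer design: values are illustrative, not real specifications.
HAMMER_SPECS = [
    Spec("head_mass_g", nominal=450.0, lower=445.0, upper=455.0),
    Spec("handle_length_mm", nominal=330.0, lower=327.0, upper=333.0),
]

def inspect(unit: Dict[str, float]) -> List[str]:
    """Return the characteristics of a unit that fall outside their boundaries."""
    return [s.name for s in HAMMER_SPECS if not s.conforms(unit[s.name])]

unit = {"head_mass_g": 452.3, "handle_length_mm": 326.1}
print(inspect(unit))  # the handle is out of tolerance; the head is not
```

Note that both failure modes from the post appear here: shrink the bounds toward zero width and `inspect` rejects every real unit; widen them without limit and nothing can ever be called a non-conformance.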

  • You can't turn lagging into leading indicators no matter how hard you try

    Lagging versus Leading Indicators

    The Challenge

    Counting near misses, incidents, defects, violations, and other non-conformances is valuable and necessary as part of prescriptive compliance: regulations, industry standards, and internal policies. However, when it comes to complying with performance- and outcome-based commitments, where the goal is to achieve zero fatalities, zero explosions, zero violations, and zero defects, you need a risk-based process that uses proactive actions informed by both lagging and leading indicators.

    While many companies are rich in lagging indicators, they are poor in leading indicators. To address this, many attempt to turn lagging indicators into leading indicators, which is not possible no matter how hard you try. However, with proactive oversight you can turn lagging indicators into leading actions (more on this later).

    Many organizations try to use measures of conformance to predict and possibly prevent future occurrences. However, lagging indicators of this kind can never distinguish between whether your risk controls are effective or whether you were just "lucky". They are also too late to prevent what has already occurred, and for those looking to improve safety, quality, environmental, or regulatory outcomes, this is a big deal.

    Lagging Indicators and Actions

    Lagging indicators measure what has already happened, specifically after a risk event has occurred. They are always retrospective, too late, and of no value with respect to past events. Lagging indicators are still beneficial as they help to identify failure modes or vulnerabilities, albeit after the fact. This data can in turn be used to initiate actions to mitigate the effects of the adverse event; this is a corrective and lagging action. Lagging indicators can also be used to strengthen control processes to prevent recurrence of the unwanted event or mitigate its effects. This is a preventive action, and leading with respect to future risk.
    Leading Indicators and Actions

    Leading indicators, on the other hand, are derived from the control processes that are in place to prevent unwanted events before they happen. They sit on the left side of the bowtie diagram, before the risk event. Leading indicators include measures of effectiveness of the preventive controls, which are predictive in terms of the likelihood of a given risk event. Leading indicators must have predictive power to be considered effective. The effectiveness of controls contributes to the probability of occurrence of the risk event.

    Leading actions are steps taken to improve the effectiveness of both preventive and mitigative controls, raising the level of protection to achieve an acceptable level of risk, which is the purpose of risk management and the standard for overall compliance effectiveness.

    Bottom Line

    Lagging indicators can never be leading, as they measure things after the risk event. They may have some utility for predicting future risk events, but this is limited because they often measure symptoms rather than root causes. The best leading indicators are those that have predictive utility and are connected to preventive controls. This information provides advance warning of a possible risk event and an opportunity to do something about it.
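The distinction above can be sketched numerically. In this toy model, the lagging indicator is just a count of past events, while the leading indicators are effectiveness measures for each preventive control on the left side of the bowtie. The control names, effectiveness values, and the independence assumption are all hypothetical simplifications used only to show how control effectiveness feeds the likelihood of the risk event.

```python
# Lagging indicator: a count of events that already happened. Retrospective,
# useful for learning failure modes, but too late to prevent anything.
incidents_last_quarter = 3  # hypothetical count

# Leading indicators: measured effectiveness of each preventive control,
# expressed here as the probability the control performs on demand.
preventive_controls = {
    "pressure_relief_valve": 0.99,
    "operator_rounds": 0.90,
    "interlock_system": 0.97,
}

def likelihood_not_prevented(controls):
    """Probability that every preventive control fails, assuming the controls
    are independent. This is how effectiveness feeds event likelihood."""
    p = 1.0
    for effectiveness in controls.values():
        p *= (1.0 - effectiveness)
    return p

baseline = likelihood_not_prevented(preventive_controls)

# A leading action improves a control *before* any event occurs:
improved = dict(preventive_controls, operator_rounds=0.95)
after_action = likelihood_not_prevented(improved)

print(f"{baseline:.2e} -> {after_action:.2e}")  # likelihood of the event drops
```

The incident count never appears in the likelihood calculation: no amount of rearranging past counts yields a forward-looking measure, which is the post's point. What lagging data can do is motivate the leading action that changed `operator_rounds`.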

  • Promise Agents: Autonomous Policy Fulfillment in Security Architecture

    The systems that run our world make implicit promises — to route traffic, to process transactions, to keep data where it belongs. Most of those promises are never explicitly declared, never monitored against, and never reported on until something breaks.

    Promise Theory, the framework Mark Burgess developed to model autonomous commitment, sits at the heart of the Lean Compliance methodology. This briefing extends it further, asking what becomes possible when security infrastructure is designed to keep its promises the way we expect people to keep theirs.

    Most current thinking places AI at the monitoring or response layer: detecting anomalies, flagging incidents, accelerating analyst workflows. That is useful, but it still treats the underlying security equipment as passive infrastructure, governed by static rules and assessed from outside.

    Burgess, who built CFEngine on Promise Theory's principles, had a different intuition — one rooted in a security problem he identified before most of us were thinking about it. His observation was that the command-and-control model of managing devices was itself producing vulnerabilities. A device designed to receive and execute external commands is a device that can be exploited by anyone who can issue those commands. His response was to model a different design principle: devices that govern themselves from within by declaring what they will do, rather than waiting to be told. Autonomy, in his framework, is not just an architectural preference. It is a security property.

    He found a concrete example of this already operating in live infrastructure: BGP — the Border Gateway Protocol that governs routing between the large independent networks that make up the internet. BGP routers do not wait for a central controller. They declare their routing promises to neighboring routers and cooperate through voluntary exchange of those declarations. Burgess states this directly: "BGP is a promise-based system."
    Each router is already a promising agent, governing itself from within, building trust through its history of kept promises. That is the design principle. The question worth exploring is what it would mean to apply it to security obligations — not routing tables, but the high-level commitments an organization makes about what its infrastructure will and will not allow.

    I have written a briefing note that develops this as a formal proposal: **Promise Agents** — security equipment with embedded, fine-tuned AI models that receive obligations, assess what they can genuinely commit to, declare those commitments as promises, fulfill them autonomously, and monitor their own performance against them continuously.

    The briefing covers the theoretical foundation in Promise Theory, the BGP precedent Burgess himself identifies, the problem that makes this direction worth considering, the architecture it implies, and the prerequisites that would need to be in place before it becomes buildable. It is offered as a starting point for discussion — not a finished design, but a direction worth examining for security architects, compliance practitioners, AI engineers, and equipment vendors who may see potential in it. The full briefing note is linked below. I would welcome responses from anyone working in these areas.

    Raimund (Ray) Laqua, P.Eng., PMP is the founder of Lean Compliance Consulting, helping organizations build compliance as operational capability rather than procedural overhead. He serves on ISO's ESG working group and OSPE's AI in Engineering committee, and chairs the AI Committee for Engineers for the Profession (E4P), where he advocates for federal licensing of digital engineering disciplines in Canada.
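The promise-agent lifecycle the post describes (receive an obligation, assess whether it can genuinely be met, declare it as a promise, monitor performance against it) can be sketched as a toy object model. Everything here is hypothetical illustration, not the briefing's architecture: the capability set stands in for the embedded model, and the class and field names are invented for this example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set

@dataclass
class Obligation:
    """A high-level commitment handed to the agent (names are hypothetical)."""
    name: str
    requirement: str

@dataclass
class PromiseAgent:
    """Minimal sketch of the lifecycle: receive, assess, declare, monitor."""
    capabilities: Set[str]
    promises: List[Obligation] = field(default_factory=list)
    history: List[Dict[str, bool]] = field(default_factory=list)

    def assess(self, obligation: Obligation) -> bool:
        # The agent only promises what it can genuinely commit to.
        return obligation.requirement in self.capabilities

    def declare(self, obligation: Obligation) -> bool:
        if self.assess(obligation):
            self.promises.append(obligation)
            return True
        return False  # declined: an autonomous agent is not commanded from outside

    def monitor(self, check: Callable[[Obligation], bool]) -> Dict[str, bool]:
        # Self-assessment against its own declared promises; the record of
        # kept promises is the basis for trust between agents.
        results = {p.name: check(p) for p in self.promises}
        self.history.append(results)
        return results

firewall = PromiseAgent(capabilities={"block_inbound_telnet", "log_denied_flows"})
firewall.declare(Obligation("no-telnet", "block_inbound_telnet"))
firewall.declare(Obligation("capture-everything", "capture_all_traffic"))  # declined
print([p.name for p in firewall.promises])
```

The key design choice, following Promise Theory, is that `declare` can refuse: the agent voluntarily commits rather than executing whatever it is told, which is exactly the property the post argues makes autonomy a security feature.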
