
  • Rasmussen's Risk Management Framework

    At a fundamental level, compliance programs protect the value stream from threats that hinder the creation of value. Each program contributes to keeping the value chain safe from various risks, including quality risk, occupational safety risk, security risk, and so on. These programs are socio-technical in nature in that they recognize the interaction between people and technology, often across multiple levels of an organization. Rasmussen's Risk Management Framework (also known as Rasmussen's ladder) provides useful insights for understanding risk across socio-technical boundaries to achieve safety objectives along with other risk objectives. Rasmussen originally developed his approach as part of a proactive risk management strategy; however, its primary application has been as an accident analysis tool (AcciMap) for complex socio-technical systems.

    This framework has its roots in systems thinking, based on the notion that accidents are hidden in normal operations and do not need special causes. This is similar to Safety-II (Hollnagel, 2017) and to Deming's observation that defects arise from common causes (natural variation). Rasmussen's model, and others since, represent a growing trend away from "root causes" (or, you might say, "special causes") for systemic failures.
    Rasmussen suggests the following system boundaries by which to map structure, components, and their interactions. The structure of Rasmussen's Risk Management Framework considers six levels:

    Government – where laws and regulations are developed
    Regulatory – where industry standards are developed based on laws and regulations
    Company – where company policies and procedures based on industry standards govern work processes
    Management – where company policies and procedures are implemented
    Staff – representing the activities and characteristics of workers performing the processes
    Work – representing the equipment and environment by which work happens

    Vertical integration is required for the system to function safely. This means that decisions made at the higher levels should propagate down the hierarchy as information flows upwards. The interactions and dependencies across levels are critical to ensure that intended safeguards protect system states. Threats to safety result from a loss of control caused by inadequate vertical integration across levels, not just from deficiencies at any one level. Nancy Leveson [2] provides an example of how this can be used to model safety control (her Hierarchical Model of Safety Control).

    Framework Predictions

    Rasmussen's Risk Management Framework makes a series of predictions [1] in relation to performance and safety in complex socio-technical systems:

    Safety is an emergent property of a complex socio-technical system. It is impacted by the decisions of all of the actors – politicians, managers, safety officers and work planners – not just the front-line workers alone.
    Threats to safety are usually caused by multiple contributing factors, not just a single catastrophic decision or action.
    Threats to safety usually result from a lack of vertical integration (i.e. mismatches) across levels of a complex socio-technical system, not just from deficiencies at any one level alone.
    The lack of vertical integration is caused, in part, by a lack of feedback across levels of a complex socio-technical system. Actors at each level cannot see how their decisions interact with those made by actors at other levels, so the threats to safety are far from obvious before an incident.
    Work practices in a complex socio-technical system are not static. They will migrate over time under the influence of a cost gradient driven by financial pressures in an aggressive, competitive environment (see Drift to Failure below) and under the influence of an effort gradient driven by the psychological pressure to follow the path of least resistance.
    The migration of work practices can occur at multiple levels of a complex socio-technical system, not just one level alone.
    Migration of work practices causes the system's defenses to degrade and erode gradually over time.
    Accidents are induced by a combination of this systematically induced migration in work practices and a triggering event, not by an unusual action or an entirely new, one-time threat to safety.

    Drift to Failure

    Rasmussen also identified a phenomenon he called "drift to danger": the systemic migration of organizational behavior toward accident under the influence of pressure toward cost-effectiveness in an aggressive, competing environment (Rasmussen's Migration Model – Transport Canada – Jim McMenemy, Safety Intelligence Project). Rasmussen's migration model represents constraints (i.e. economic, workload, safety) which create the following possibilities:

    If the system reduces output too much, it will fail economically and be shut down
    If the system workload increases too far, the burden on workers and equipment will be too great
    If the system moves in the direction of increasing risk, accidents will occur

    In essence, accidents occur when the system's activity crosses the boundary into unacceptable safety.

    Application

    Rasmussen's Risk Management Framework provides a good representation of the real world and has been used to better understand safety risk in dynamic socio-technical systems. AcciMaps [3], which are derived from Rasmussen's framework, provide a generic and flexible approach since they do not use predefined taxonomies of hazards or failures across the various levels. AcciMaps have been used in aviation, defense, oil & gas, risk management, public health, patient safety, and environmental studies. Rasmussen's framework also provides the means to better understand how to achieve other risk objectives such as quality, resilience, reputation, financial, and trust.

    References:
    [1] A.L. Cassano-Piche, K.J. Vicente and G.A. Jamieson, "A test of Rasmussen's risk management framework in the food safety domain: BSE in the UK", Theoretical Issues in Ergonomics Science, 2009
    [2] Nancy G. Leveson, "Rasmussen's Legacy: A Paradigm Change in Engineering for Safety", 2015
    [3] Justen Debrincat, "Assessing Organizational Factors in Aircraft Accidents using a Hybrid Reason and AcciMap Model", RMIT University, 2012
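    Rasmussen's migration model can be illustrated with a toy simulation. This is only a conceptual sketch: the operating point, gradient values, and boundary position below are invented numbers for illustration, not part of Rasmussen's framework.

```python
# Toy illustration of Rasmussen's migration model (all numbers invented).
# An operating point starts in the safe region and drifts each step under
# a cost gradient (management pressure for efficiency) and an effort
# gradient (workers' pressure toward least effort), both pushing it
# toward the boundary of acceptable safety performance.

def simulate_drift(position=0.0, safety_boundary=1.0,
                   cost_gradient=0.04, effort_gradient=0.03, steps=50):
    """Return the step at which the safety boundary is crossed, or None."""
    for step in range(1, steps + 1):
        position += cost_gradient + effort_gradient  # combined pressure
        if position >= safety_boundary:
            return step  # drift has crossed into unacceptable risk
    return None

# No single step is an "unsafe act"; the crossing is prepared by many
# small, locally reasonable adaptations under normal operations.
print(simulate_drift())
```

    The point of the sketch is the prediction above: the accident boundary is approached gradually through normal work, so no individual decision looks like the cause.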

  • Where Does the Source of Truth Live When AI Agents Do the Work?

    Raimund Laqua, P.Eng., PMP

    For decades, the system of record has been the gravitational centre of the enterprise. Your ERP, your CRM, your quality management system — whatever the acronym, the function was the same. One place where the authoritative version of the truth lives. Every audit trail starts there. Every compliance obligation traces back to it. Machines have always done part of the work inside these systems — workflows, automated triggers, batch processing. But that work was governed, designed, and built by humans. Every automated step was deliberately engineered into a known path. The system of record captured the work because the work was designed to flow through it. And when governance was working, the how mattered as much as the what — the work reflected the organization's values, its commitments, and its obligations. But what happens when autonomous agents do the work?

    The Shift

    AI assistants are already embedded in the platforms organizations use — Microsoft's Copilot across Dynamics 365, SAP's Joule, Salesforce Einstein. Today, a person still asks the question and still decides what to do with the answer. But the vendors aren't stopping there. They're explicitly moving from "system of record" to "system of intelligence." The system no longer waits for people to analyse data. It acts. That's not an assistant — that's an agent with authority. And it changes everything about where the source of truth lives. It's important not to confuse this with the workflow automation we've had for decades. A workflow is a deterministic path — designed by humans, with each step predefined, each decision point mapped, and the system of record baked into the sequence. The workflow writes to the system because the system is part of the track it runs on. An autonomous agent doesn't follow a track. It receives an objective, assesses the situation, decides what to consult, coordinates with other agents, takes action, and moves on.
    It may take a different path every time depending on what it finds. The system of record isn't part of that path unless someone engineered it to be. No human is watching each decision. No human can — not at the speed and scale agents operate. Now ask the question: where is the system of record in that workflow? And more importantly — is the work being done in a way that upholds what the organization stands for?

    The Problem

    Work was done. But were the promises kept? Not just "did the agent complete the task" — but did it fulfill the organization's actual commitments? The right output can only be produced the right way. Did it uphold the values behind those commitments — the safety, the quality, the duty of care that the organization promised its regulators, its customers, and its stakeholders? If nobody engineered the agent to track that, the question may not even be answerable. When agents coordinate with other agents to deliver on a shared promise, it gets worse. The source of truth isn't a system anymore. It's distributed across agent interactions that may be ephemeral. The source of truth didn't move to a new system. It dissolved into a process.

    Whose Promises Is the Agent Keeping?

    Here's what I think cuts to the heart of it: when we say an agent "works," we usually mean it completes the task and stays within its guardrails. Those are the developer's promises. The promises your organization has made are different. Promises to comply with governance policies. Promises to produce auditable evidence. Promises to fulfill obligations in a way that reflects your values — not just efficiently, but ethically, safely, and transparently. Right now, there's no mechanism in most agent architectures to translate those organizational promises into agent-level operational commitments. The agent doesn't know what you promised the regulator. It doesn't know what your values require. It knows what its developer built it to do. The agent is keeping the developer's promises.
    Nobody has engineered it to keep yours.

    Can Agents Follow Governance Policy?

    This raises a harder question: can autonomous agents follow governance policies at all? Not the way humans do. An agent can follow instructions and be constrained by guardrails. But governance requires understanding intent, interpreting context, and exercising judgment about edge cases the policy didn't anticipate. Agents don't do that. They optimize toward objectives within whatever constraints they were given. If the constraint was well-engineered, the behaviour looks like compliance. If it wasn't, the behaviour looks confident and wrong. There's a deeper problem. Any codified rule will eventually be inadequate because the agent optimizes around the measurable parts and erodes the unmeasurable intent. The letter of the policy gets followed. The spirit gets lost. Not through malice — through the physics of optimization. So agents can fulfill obligations that have been engineered into their architecture. They cannot interpret governance policies the way a competent professional would. And they cannot uphold organizational values unless those values have been translated into operational commitments they're designed to keep.

    The Humans in the Loop

    The human in the loop is not only the person reviewing the agent's output after the fact. You can't verify what you can't see. An important human in the loop is the engineer who designs the agent to do the right thing before it's deployed. The accountability is front-loaded into the architecture, or it doesn't exist at all. Governance policy can't be an external constraint layered on after the fact. It has to be engineered into what the agent is — because the right output can only be produced the right way, and the right way has to be built in.

    The Promise Architecture

    I've been thinking about this through Mark Burgess's Promise Theory. The core insight: you can only make promises about your own behaviour.
    An obligation imposed from outside isn't a promise until you've assessed it, determined what you can genuinely commit to, and declared that commitment. Most agent architectures work on an imposition model — do this task, follow these rules. For work that matters, that's not enough. The agent needs to receive the obligation, assess whether it can fulfill it, declare what it can commit to, and fulfill that commitment while producing evidence that the promises were kept — because the right output can only come from a process that upholds the organization's commitments and values. That's what I've been calling a Promise Agent. Policy isn't a reference document the agent consults — it's an operational capability the agent possesses. A fire suppression system doesn't "consult" the fire code. The requirements are engineered into its design. The system embodies the obligation. Delivery matters as much as the declaration. In a human workflow, the person doing the work is also the person creating the record. In an agentic workflow, that coupling breaks. This is why promise delivery can't just mean the agent produced an output. The right output can only be produced the right way — and the agent must demonstrate that the path it took was consistent with what the organization committed to. In Burgess's framework, delivery is continuous — the agent evaluates whether it can still keep delivering, and signals when that capability is compromised.

    The Golden Thread

    So where does the source of truth actually live? In the human model, it lives in the system of record. One place, one version, one truth. In the agentic model, that's no longer sufficient.
    The source of truth lives in the promise architecture — the traceable relationship between four things: the obligation (what the organization committed to), the promise (what the agent declared it could deliver), the delivery (how the agent fulfilled it and whether it did so in a way consistent with the organization's values), and the evidence (the demonstrable record that all three are connected and consistent). What connects these four elements is the golden thread of assurance — the unbroken, traceable line that runs from commitment through promise through delivery through evidence. In a human workflow, the golden thread runs through the system of record. In an agentic workflow, it has to run through the promise architecture — and the system of record becomes where that thread is anchored and made auditable. If the thread breaks at any point, assurance is lost. And any of these elements without a connection to the organization's values is compliance without purpose — the letter without the spirit. The system of record doesn't disappear. But its role shifts from being the source of truth to being the registry of truth — the place where the golden thread is anchored, where the organization can demonstrate that its agents are keeping its promises in a way that upholds its values. Most current systems of record weren't designed for that. They were designed for humans filling in forms.

    A Design Problem

    I'm not arguing that the system of record is dead or that fully autonomous agent workflows are the norm yet. But the building blocks are being assembled — in the platforms, in the vendor roadmaps, and in the architectural decisions organizations are making right now. The concern is that the decisions being made today are creating an architecture where agents do work without a golden thread connecting what they did to what the organization promised — and to the values behind those promises. Where nobody can demonstrate that the right output was produced the right way.
    That's not a technology failure. It's a design failure. And design failures are preventable. If you're deploying AI agents in a regulated environment, the question I'd encourage you to ask is not "what can the AI do?" but "will the AI do it in a way that keeps our promises and upholds our values?" If you don't have a clear answer, you have a design problem that's worth solving now — before it becomes a compliance problem you have to explain later.

    Raimund (Ray) Laqua, P.Eng., PMP, is a computer engineer and the founder of Lean Compliance Consulting. With over 30 years of experience across regulated industries — oil & gas, pharmaceuticals, medical devices, aerospace, nuclear, and financial services — he developed the Lean Compliance methodology grounded in Promise Theory, cybernetic regulation, and total value chain analysis. Ray is Chair of the AI Committee for Engineers for the Profession (E4P), sits on ISO's ESG working group, serves on OSPE's AI in Engineering committee, and advocates for federal Digital Engineering licensing in Canada. He writes regularly at leancompliance.ca.
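    The four elements of the promise architecture described above can be sketched as data structures with an explicit traceability check. This is a minimal illustration under stated assumptions: every class, field, and function name here is invented, and this is not an actual Promise Theory or vendor API.

```python
# Minimal sketch of the "golden thread" of assurance: obligation ->
# promise -> delivery -> evidence, with a check that the thread is
# unbroken. All names are illustrative, not a real implementation.
from dataclasses import dataclass, field

@dataclass
class Obligation:            # what the organization committed to
    id: str
    commitment: str

@dataclass
class Promise:               # what the agent declared it could deliver
    obligation_id: str
    declared_capability: str

@dataclass
class Delivery:              # how the agent fulfilled the promise
    obligation_id: str
    outcome: str
    consistent_with_values: bool  # was it done the right way?

@dataclass
class Evidence:              # the demonstrable record tying it together
    obligation_id: str
    records: list = field(default_factory=list)

def golden_thread_intact(obligation, promise, delivery, evidence):
    """The thread holds only if every element traces to the same
    obligation, delivery upheld the organization's values, and
    evidence was actually produced."""
    same_obligation = (promise.obligation_id == delivery.obligation_id
                       == evidence.obligation_id == obligation.id)
    return (same_obligation
            and delivery.consistent_with_values
            and bool(evidence.records))
```

    The design choice the sketch makes visible: a completed task (a Delivery with an outcome) is not enough on its own; the check fails if the values flag is false or the evidence records are empty, which mirrors the claim that the right output can only be produced the right way.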

  • Digital Threads: The Future of Compliance

    In response to the Grenfell Tower Fire, the UK government recently introduced new regulations and a new regulator to address shortcomings in building safety. This new safety regime is intended to prevent incidents similar to the Grenfell Tower disaster that resulted in 72 deaths in 2017. Among the measures this regulation introduces is what is being called "A Golden Thread." This is in fact a "Digital Thread", the first of its kind to be used by regulators to improve compliance. The future of compliance looks like it is here, so let's find out what digital threads are all about and why they are so important for compliance.

    What is a Digital Thread?

    To understand digital threads we first need to understand digital twins. The concept of digital twins is attributed to Michael Grieves, based on a presentation he made in 2002 at the University of Michigan. In this presentation he proposed the digital twin as a conceptual model underlying a product life cycle with three components: real space, virtual space, and the data between and about them. However, the idea of modelling the real world with computer simulation is not new and goes back to as early as the 1960s, when NASA used basic concepts of twinning in the development of its space program. What makes digital twins different from computer-based modelling are the connections between the real and virtual worlds. In essence, a model becomes a digital twin when it is connected with its real-life counterpart. This connection closes the loop and is referred to as the digital thread. How are digital twins and threads defined today?
    Digital Twin

    The definition commonly used in defence, aerospace and related industries in the US is: “an integrated multiphysics, multiscale, probabilistic simulation of an as-built system, enabled by Digital Thread, that uses the best available models, sensor information, and input data to mirror and predict activities/performance over the life of its corresponding physical twin.” A digital twin is a virtual representation of real-world entities and processes, synchronized at a specified frequency and fidelity. This synchronization is enabled by a digital thread infrastructure or framework.

    Digital Thread

    The term digital thread is used to refer to the lowest-level design specification for a digital representation of a physical item. The digital thread is a critical capability in model-based systems engineering (MBSE) and the foundation for a digital twin. However, the term is also used to describe the traceability of the digital twin back to the requirements, parts, and control systems that make up the physical asset. It is this latter aspect which is of significance for compliance, specifically where traceability and accountability are regulated.

    Regulatory Use of Digital Threads: UK Building Safety

    In 2021 the UK Parliament introduced the Building Safety Bill to address shortfalls in building safety, largely in response to the Grenfell Tower Fire in 2017. This bill introduces a new regulator and regulation with the purpose of ensuring safety throughout every stage of a building's life. It also addresses specific failures of accountability and compliance throughout design, construction, and operations. The concept of a digital thread will now be part of this regulatory regime to provide traceability of information so that nothing falls through the cracks. This digital thread is not necessarily part of a digital twin but will instead become a measure of compliance, and a critical one.
    Using the name "Golden Thread" to describe this particular application makes sense. A golden thread is an idea or feature that is present in all parts of something, holds it together and gives it value (Oxford Learner's Dictionary); in this case the value is improved safety. The Building Safety Bill further defines the Golden Thread:

    The golden thread is both the information that allows you to understand a building and the steps needed to keep both the building and people safe, now and in the future. The golden thread will hold the information that those responsible for the building require to:

    (a) Show that the building was compliant with applicable building regulations during its construction, and provide evidence of meeting the requirements of the new building control route throughout the design, construction, and refurbishment of a building

    (b) Identify, understand, manage, and mitigate building safety risks in order to prevent or reduce the severity of the consequences of fire spread or structural collapse throughout the life cycle of a building

    The information stored in the golden thread will be reviewed and managed so that the information retained, at all times, achieves these purposes. The golden thread covers both the information and documents, and the information management processes (or steps) used to support building safety. The golden thread information should be stored as structured digital information. It will be stored, managed, maintained, and retained in line with the golden thread principles (see below). The government will specify digital standards which will provide guidance on how the principles can be met. The golden thread information management approach will apply through design, construction, occupation, refurbishment, and ongoing management of buildings. It supports the wider changes in the regime to promote a culture of building safety.
    Building safety should be taken to include the fire and structural safety of a building and the safety of all the people in or in the vicinity of a building (including emergency responders). Many people will need to access the golden thread to update and share golden thread information throughout a building’s lifecycle, including but not limited to building managers, architects, and contractors. Information from the golden thread will also need to be shared by the Accountable Person with other relevant people including residents and emergency responders. The Golden Thread is based on the following principles, which you could also consider as system properties:

    Accurate and trusted: the dutyholder/Accountable Person/Building Safety Manager and other relevant persons (e.g. contractors) must be able to use the golden thread to maintain and manage building safety and ensure compliance with building regulations. The Regulator should also be able to use this information as part of their work to assess compliance with building regulations, the safety of the building, and the operator’s safety case report, including supporting evidence, and to hold people to account. The golden thread will be a source of evidence to show how building safety risks are understood and how they are being managed on an ongoing basis. The golden thread must be accurate and trusted so that relevant people use it. The information produced will therefore have to be accurate, structured, and verified, requiring a clear change control process that sets out how and when information is updated and who should update and check the information.

    Residents feeling secure in their homes: residents will be provided information from the golden thread so that they have accurate and trusted information about their home. This will also support residents in holding Accountable Persons and Building Safety Managers to account for building safety.
    A properly maintained golden thread should support Accountable Persons in providing residents the assurance that their building is being managed safely.

    Culture change: the golden thread will support culture change within the industry as it will require increased competence and capability, different working practices, updated processes, and a focus on information management and control. The golden thread should be considered an enabler for better and more collaborative working.

    Single source of truth: the golden thread will bring all information together in a single place, meaning there is always a ‘single source of truth’. It will record changes (i.e. updates, additions or deletions to information, data, documents and plans), including the reason for change, evaluation of change, date of change, and the decision-making process. This will reduce the duplication of information (email updates and multiple documents) and help drive improved accountability, responsibility, and a new working culture. Persons responsible for a building are encouraged to use common data environments to ensure there is controlled access to a single source of truth.

    Secure: the golden thread must be secure, with sufficient protocols in place to protect personal information and control access to maintain the security of the building or residents. It should also comply with current GDPR legislation where required.

    Accountable: the golden thread will record changes (i.e. updates, additions or deletions to information, data, documents and plans), when these changes were made, and by whom. This will help drive improved accountability. The new regime sets out clear duties for dutyholders and Accountable Persons for maintaining the golden thread information to meet the required standards. Therefore, there is accountability at every level – from the Client/Accountable Person to those designing, building or maintaining a building.
    Understandable/consistent: the golden thread needs to support the user in their task of managing building safety and compliance with building regulations. The information in the golden thread must be clear, understandable, and focused on the needs of the user. It should be presented in a way that can be understood and used by users. To support this, dutyholders/Accountable Persons should, where possible, make sure the golden thread uses standard methods, processes, and consistent terminology so that those working with multiple buildings can more easily understand and use the information consistently and effectively.

    Simple to access (accessible): the golden thread needs to support the user in their task of managing building safety, and therefore the information in the golden thread must be accessible so that people can easily find the right information at the right time. This means that the information needs to be stored in a structured way (like a library) so people can easily find, update, and extract the right information. To support this the government will set out guidance on how people can apply digital standards to ensure their golden thread meets these principles.

    Longevity/durability and shareability of information: the golden thread information needs to be formatted in a way that can be easily handed over and maintained over the entire lifetime of a building. In practical terms, this is likely to mean that it needs to align with the rules around open data and the principles of interoperability, so that information can be handed over in the future and still be accessed. Information should be able to be shared and accessed by contractors who use different software, and if the building is sold the golden thread information must be accessible to the new owner. This does not mean everything about a building and its history needs to be kept; the golden thread must be reviewed to ensure that the information within it is still relevant and useful.
    Relevant/proportionate: preserving the golden thread does not mean everything about a building and its history needs to be kept and updated from inception to disposal. The objective of the golden thread is building safety, and therefore if information is no longer relevant to building safety it does not need to be kept. The golden thread, the changes to it, and the processes related to it must be reviewed periodically to ensure that the information comprising it remains relevant and useful.

    These definitions and principles will help set the direction for how digital threads will be built in the compliance domain, not only within the UK but also in other jurisdictions.

    What Digital Threads Mean For Compliance

    Evidence of compliance has always been needed, and this means more than attestations as the way to verify that what should have been done was actually done. That approach was always too slow, too late, and not always accurate. That is why the concept of a Golden Thread as a means to provide evidence and assurance of compliance throughout the design, building, and maintenance of buildings is a game changer. However, it will still take time for digital thread infrastructures to be established, particularly those that meet the properties outlined for the UK's Golden Thread. At one level, digital threads are still retrospective and on the lagging side of risk events. However, they could become more than feedback processes, particularly for downstream activities. When combined with digital twins they could become feed-forward and provide predictive utility, particularly when used to improve and validate design models. At a minimum, digital threads will provide more up-to-date and reliable information for all stakeholders during every stage of a building's life cycle. Now that we have defined purpose and properties for digital threads in the compliance domain, it is likely that "Golden Threads" will become part of other regulatory regimes.
    Medical device manufacturers are already using digital threads to provide traceability across design history files (DHF), device master records (DMR), and device history records (DHR). There are also examples of digital threads in oil & gas and other regulated industries with respect to safety-critical data. In addition, using digital threads as part of a Management of Change (MOC) process may help ensure design integrity as a result of planned changes. Instead of trying to integrate systems together, digital threads may provide a more effective means for compliance-critical information to be made available, not only as evidence of compliance but as a proactive measure to prevent risk. Proactive organizations should begin to plan pilot projects to explore how digital threads could be used in response to regulatory reforms as well as part of their own internal compliance efforts. If you are interested in developing and implementing digital thread strategies, please contact our project management office to learn how Lean Compliance can help.
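    The change-control behaviour the golden thread principles describe (record what changed, its old and new value, when, by whom, and why) can be sketched as a simple data structure. This is an illustrative sketch only; the class and field names are invented and do not correspond to any regulatory schema or product.

```python
# Sketch of a change-controlled "single source of truth" record,
# following the golden thread principles above: every update captures
# what changed, the prior value, who made the change, why, and when.
# All names are illustrative.
from datetime import datetime, timezone

class GoldenThreadRecord:
    """A single source of truth for one building's safety information."""

    def __init__(self):
        self.information = {}   # current state: the single source of truth
        self.change_log = []    # accountability: full history of changes

    def update(self, key, value, author, reason):
        entry = {
            "key": key,
            "old": self.information.get(key),  # prior value, if any
            "new": value,
            "author": author,                  # by whom
            "reason": reason,                  # why the change was made
            "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        }
        self.information[key] = value
        self.change_log.append(entry)
        return entry

record = GoldenThreadRecord()
record.update("fire_doors", "FD30 certified", "J. Smith",
              "post-inspection update")
```

    Keeping the change log separate from the current state is what makes the record both a single source of truth (one current answer per question) and accountable (every answer traceable to a person, a reason, and a time).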

  • Demo-first Approach to Selecting Compliance Software

    When it comes to selecting commercial-off-the-shelf (COTS) compliance software, there was a time when this involved a structured process based on a requirements-first approach. This has now been largely replaced with a demo-first approach, encouraged by cloud vendors as well as by the buyers themselves. Instead of a bake-off against requirements, software is now chosen based on how well it demos and looks. Does this approach result in better outcomes? Let's find out.

    Requirements-first Approach:

    A requirements-first approach typically includes the following steps:

    Request for Information (RFI) – survey of the market to identify candidate vendors and solutions
    Request for Proposal (RFP) – request for written responses to requirements from the identified long list of candidate vendors
    Short Listing – a short list is created based on the selection criteria rubric
    Request for Quotation (RFQ) – obtain firm and final pricing from the short-listed vendors
    Live Test Demonstration (LTD) – make sure the short-listed vendors actually meet the stated requirements by following a scripted walk-through
    Select Apparent Winning Offeror – selection of the best alternative based on vendor performance, fit for purpose, and technical requirements
    Pilot System – validation that the solution can achieve the intended outcomes as well as the verified technical requirements

    The purpose of following these steps is to manage the risk inherent in selecting a solution that best fits the scope, budget, and requirements. In addition, it creates a level playing field, keeping everyone honest on both sides of the table. The following data is a compilation across 20 projects that followed a requirements-first approach:

    * Waterfall = Gated, Structured Approach
    * Hybrid = Gated, Agile Approach

    Key lessons learned from these projects include: System scope was the major influence in determining overall procurement cycle time.
However, there is only an incremental increase (8 versus 12 months) when considering departmental versus platform solutions. The overall duration was largely determined by vendor and buyer schedules Waterfall approach using approval gates was preferred the larger the project scope In addition, 90% of projects did not purchase their first choice for reasons that included: Failed live test data (LTD) – RFP responses was good but based on software that was not yet available or didn't withstand scrutiny of actual use. Failed pilot system due to poorly understood or specified requirements Requirements changed during the procurement process It is worthwhile stating that each project completed successfully even though it was not with the first choice of vendor. Having a second choice proved to be a significant factor when mitigating the uncertainties experienced during the procurement process. Demo-first Approach: These days it seems that many companies jump right to requesting a demonstration of software without first understanding what it is that they need. While this may prove successful for some applications, when it comes to critical compliance solutions at the scale of the enterprise this can lead to decisions that are less than optimal and waste valuable time, resources, and possibly exposing companies to unnecessary risk. 
Companies who have used the demo-first approach have noted that these projects tend to produce the following:

- Scope creep: everyone wants all the capabilities that they see demonstrated
- Difficulty in making an apples-to-apples comparison of the alternatives
- Cost overruns due to unplanned integration, customization, and data migration
- Schedule overruns leading to late ROI and in many cases unrealized benefits
- Solutions that only meet rudimentary requirements and are not capable of meeting the full demands of the organization
- Loss of data and information due to insufficient planning and resourcing for data cleansing and migration activities

In addition, projects still end up taking the same amount of time to procure a solution as with a requirements-first approach. However, in the case of a demo-first approach they tend not to follow a risk-based process. This makes them vulnerable to uncertainties that the RFP, LTD, and pilot steps would have uncovered. Companies have also noticed an increased tendency to choose software that may have demoed the best, had the most capabilities, had the lowest initial cost, or been the one used at the last company that someone worked at. In other words, without a set of requirements there was no basis on which to make an effective comparison based on actual and anticipated need. It would be reasonable to ask why companies would choose a less rigorous process for selecting compliance solutions. Here are some of the reasons given:

- Our current system doesn't work and we need something else, but we don't know what that looks like
- I don't know what I need, so looking at software helps me figure that out
- All I want is something that is user friendly. I expect the vendor to know what my requirements are.
- This is off-the-shelf software, so why do I need to write down any requirements? Don't they all do the same thing?
- I am just looking to replace what I currently have, so those are my requirements
- We are looking at cloud-based software and the subscription costs don't warrant a large project
- Our business analysts used to do that, but we don't have those roles anymore
- I don't have the time to go through a structured process
- We are following an agile approach, which means we don't need to figure out what our requirements are right now
- Even if the software doesn't work we can replace it easily because it's all in the cloud

As more organizations move their systems over to the cloud it is expected that the use of a demo-first approach will increase. Of course each company will have different levels of success; however, the probability of success can still be improved by effectively managing uncertainties, specifically with respect to scope.

Risk-Based Approach: Acquiring software to support critical compliance processes still requires that risks be properly addressed. The most significant source of risk hasn't changed and is still scope creep, or scope gallop as is often the case. Managing scope is essential to every project, and this applies to choosing compliance software. Software demonstrations can be an effective way to learn about what is available in the marketplace; in many ways this has replaced the use of RFIs. However, demos do not replace the need to specify what the software needs to do or the need to manage risk. Requirements may not be as detailed as they once were and may take a form such as user stories. At the same time, they still must be sufficient to cover what the software contractually needs to deliver and how it needs to perform in order to achieve the desired outcomes. It is always good to remember that you are not the product; the software is. In addition, as previously noted, it is a good strategy to always have a second choice because your first choice is likely not the one that will achieve the desired outcomes.
Whether you follow a demo-first or requirements-first approach or not, you still need to get answers to the same set of questions. The timing of when you get these answers will significantly influence the success of your project. If you wait until after you purchase the software, you will need to deal with the effects of not knowing, or what is called "epistemic uncertainty." The risk of not knowing can and often does lead to failed projects that in many cases double the cost, since the project has to be done over again. Here is a list of items that some companies chose not to know in advance:

- The importance of integration with other systems, consequently neglected during the procurement phase
- The value associated with legacy data, leading to no budget for data migration
- The loss of control over how processes are implemented, resulting in the use of generic vendor workflows
- The impact of using generic approaches that fell short of the company's higher standards
- The lack of understanding of how an on-demand pricing model would be affected by a fixed operating budget
- The lack of understanding of how the software is going to be transitioned and rolled out

All of these could have been known in advance and addressed using a requirements-first, risk-based approach. Here is a list of things that you should know when selecting compliance technology:

1. What defines success? What are the intended outcomes for the system? What does done look like? How do you measure progress towards done? What steps are critical to achieving done? What risks need to be addressed that hinder achieving done? What opportunities should be pursued to increase the likelihood of getting to done?

2. What is the purpose for the software purchase? Technology replacement? Architecture alignment? Process improvement? Improved compliance? New capabilities? Increase or decrease in scale or complexity? Cost reduction? Introduction of best practice? Point solution or platform to support multiple solutions?

3. What are all the requirements for the expected use of the software? System, application, process, and other functional requirements? Compliance, security, data, privacy, and sovereignty requirements? Platform, network, communication, and other technical requirements? Performance and reliability requirements? Customization and integration requirements? Implementation, sustaining, and end-of-life requirements? Backup and recovery requirements?

4. What strategies will be used to introduce and sustain the use of the software? Lift and Shift: improve processes first, then shift? Shift and Lift: shift to the new software first, then improve processes? All users at once or a phased roll-out? All modules at once or a phased roll-out? Distributed or centralized support? Business owners or IT support?

5. What are the impacts and risks associated with the choice of software, implementation strategies, and sustaining activities on the business? What gaps in requirements need to be addressed by customization, work-arounds, or additional software? What is the total cost and budget needed to sustain and use this software over its anticipated lifetime? How is compliance maintained during and after the implementation? How will changes to the software or configurations be managed and validated? What actions are needed to address uncertainty in capabilities, cost, user acceptance, ability to meet compliance obligations, and so on? Who owns the data, and will the data be monetized by the vendor? How and when will breaches in service be communicated? What is your exit strategy, and when will it be triggered should you need to revert to your second choice?
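One way to keep vendor comparisons honest, as the selection criteria rubric in the requirements-first steps intends, is a weighted scoring model. The following is a minimal sketch, assuming hypothetical requirement categories, weights, vendor names, and ratings; none of these figures come from the projects described above.

```python
# Weighted scoring rubric sketch for comparing vendors against
# stated requirements (all names and numbers are illustrative).

REQUIREMENTS = {
    # requirement category: weight (relative importance, summing to 1.0)
    "process workflows":   0.25,
    "integration":         0.20,
    "data migration":      0.15,
    "compliance/security": 0.25,
    "total cost of use":   0.15,
}

def score_vendor(ratings: dict) -> float:
    """Weighted score from per-requirement ratings on a 0-5 scale."""
    return sum(REQUIREMENTS[req] * ratings.get(req, 0.0)
               for req in REQUIREMENTS)

vendors = {
    "Vendor A": {"process workflows": 4, "integration": 2,
                 "data migration": 3, "compliance/security": 5,
                 "total cost of use": 3},
    "Vendor B": {"process workflows": 5, "integration": 4,
                 "data migration": 2, "compliance/security": 3,
                 "total cost of use": 4},
}

# Rank the alternatives; keeping a ranked second choice matters, since
# most projects did not end up purchasing their first choice.
ranked = sorted(vendors, key=lambda v: score_vendor(vendors[v]), reverse=True)
print(ranked)  # -> ['Vendor B', 'Vendor A']
```

The rubric itself is the point: once weights and ratings are written down, an apples-to-apples comparison (and a defensible second choice) falls out of the arithmetic rather than out of the best demo.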

  • Digital Transformation - Exploiting the Power of Digital Technology

    Over the last several decades companies have invested in paper-on-glass solutions as part of their digital progression. However, what only a few companies have done is change their processes to exploit the power of their digital technology. Dr. Goldratt, developer of the Theory of Constraints, speaks to this issue directly: "Technology can bring benefit if, and only if, it diminishes a limitation. Long before the availability of technology, we developed modes of behavior (policies, measurements and rules) to help us accommodate our limitations. But what benefits will any technology bring if we neglect to change the rules?" To achieve the benefits from technology, Dr. Goldratt suggests answering the following questions:

- What is the power of the technology?
- What limitation does the technology diminish?
- What rules enabled us to manage this limitation?
- What new rules will we need?

The answer to the last question is most critical. To increase your return on investment from digital transformation you must change the way you currently do things. To do otherwise will limit your benefits to efficiency at the expense of improving effectiveness. As an example, converting paper forms to electronic forms and routing them around electronically may improve overall process time but will not achieve the benefits available using the power of the new technology. One of the limitations of paper-based systems was their inability to use data to adapt the process to contend with risk. This often manifested itself in complicated processes designed to accommodate every situation, along with the need to incorporate multiple layers of approvals. However, using digital technology it is possible to adapt work processes and incorporate the appropriate level of approvals based on collected information to contend with different levels of risk.
Risk-based Process By removing the limitation of static workflows, companies can benefit from adaptive work processes, resulting not only in greater efficiency but also in increased effectiveness at contending with uncertainty.
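As a concrete illustration of an adaptive, risk-based process, a workflow can derive the required approval chain from collected form data instead of hard-wiring a worst-case routing for every request. This is only a sketch; the risk factors, weights, and thresholds are assumptions for illustration, not a prescribed model.

```python
# Adaptive approval routing sketch: the number of approvals is derived
# from collected information rather than a static workflow that assumes
# the worst case for every request. Factor names and thresholds are
# illustrative assumptions.

def risk_score(change: dict) -> int:
    """Simple additive score from collected form data."""
    score = 0
    if change.get("safety_critical"):
        score += 3
    if change.get("regulatory_impact"):
        score += 2
    score += {"low": 0, "medium": 1, "high": 2}[change.get("cost_impact", "low")]
    return score

def approvals_required(change: dict) -> list:
    """Route to the appropriate level of approval for the assessed risk."""
    score = risk_score(change)
    if score >= 5:
        return ["supervisor", "quality", "executive"]
    if score >= 2:
        return ["supervisor", "quality"]
    return ["supervisor"]

# A safety-critical, high-cost change gets the full approval chain;
# a routine change gets a single approver.
print(approvals_required({"safety_critical": True, "cost_impact": "high"}))
print(approvals_required({}))
```

The design choice is the point Goldratt's questions lead to: the old rule ("every request gets every approval") existed to accommodate a paper limitation, and the new rule lets the collected data set the level of control.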

  • Traditional versus Operational Approach to Compliance

    Compliance is the outcome of meeting obligations, which requires compliance to be operational. Compliance operability is achieved when essential functions, behaviours, and interactions exist at levels sufficient to produce a measure of effectiveness – this defines Minimum Viable Compliance (MVC). Traditional approaches never reach MVC until the very end, which is too slow and often too late to protect value creation and stay ahead of risk. The good news is there is a better way to do compliance that delivers benefits sooner, with greater certainty, and with less waste. This approach is based on the Lean Startup model by Eric Ries, which we have adapted to the compliance domain as shown in the following diagram: Traditional versus Operational Approach to Compliance The traditional approach is based on implementing components, or parts, of the compliance function starting at the bottom and advancing in capability and maturity until the last phase is reached. This is when effectiveness happens as measured against realized outcomes. This is also when effectiveness can start to improve over time. The operational approach is based on first achieving operability, which is the minimum level of capability for creating outcomes - a measure of effectiveness. Advancement in capability and maturity happens across all functions, behaviours, and interactions, always tied to realizing higher levels of effectiveness. This provides the maximum amount of learning at the minimum cost, creating less waste while delivering benefits sooner. The operational approach has improved the development of products and services, particularly when contending with uncertainty and when achieving outcomes is important. This is the case for all organizations under performance and outcome-based regulation.

  • Assurance is an OUTCOME not an ACTIVITY

    Assurance is not an activity that compliance does or something that can be inspected into a business. It is an outcome that is created when stakeholders have confidence that an organization is meeting all its obligations today and will continue to meet them in the future. This confidence is necessary for assurance and ultimately for trust to exist. Assurance is an OUTCOME not an ACTIVITY That's why confidence levels are an important measure of success for all risk & compliance programs. Improving the level of confidence is therefore an important objective, which often involves conducting audits to verify process outputs and validate program outcomes. However, conformance to procedures and processes, as important as that may be, is not enough to provide the necessary confidence for trust to be granted. Confidence is increased when companies take steps to make certain that promises are kept. This has more to do with improving the probability that the organization is heading in the right direction, operating between the lines, and making progress towards its mission objectives. The best way this is demonstrated is by having an operational compliance program to properly contend with obligation and operational risk. An effective compliance program will ensure that the required capabilities and performance exist to meet all obligations today and in the future. These capabilities will include resiliency, sustainability, quality, safety, diversity, or any of the abilities that contend with the risks that matter to the organization. Measuring the effectiveness of these capabilities is not something that traditional audit or assurance functions have done. However, this is what is now required to provide confidence that the business has a future. To improve the outcome of assurance, the following questions need to be answered:

- What is the level of confidence that your organization will meet all of its obligations?
- What capabilities do you need to ensure that you will meet your obligations in the future?
- What measures can you take to make certain you can keep all your promises?
- What resources do you need to provide the necessary capabilities and measures?
- How will you evaluate your progress towards greater levels of assurance?

  • AI Assistants - Threat or Opportunity?

    AI Assistants - Blessing or Curse? The rise of Generative AI has taken the world by storm, and AI assistants are popping up all over the place, providing a new way for people to approach their work. These assistants automate repetitive and time-consuming tasks, enabling individuals to focus on more complex and creative work. However, for some, it is just an improvement in productivity, and they question whether the use of AI assistants may lead to them losing their jobs. For those starting to use AI assistants, they are indeed a blessing, providing much-needed relief for overworked employees. The improved productivity is creating needed capacity and some extra space in already full workloads. However, this is expected to be short-lived as these benefits become normalized and expected. The buffer we now experience will be consumed and used for something – the question is what? No wonder there is a fear that the widespread use of AI assistants may lead to significant job reductions. Some jobs will become redundant, while others will be expected to double their workloads. For instance, if someone used to write ten articles a week, they may now be expected to do twenty using AI assistants. So, where is the real gain for the organization apart from fewer people and perhaps marginal cost reductions? Is this the same story of bottom-line rather than top-line thinking? How To Use AI Assistants To Achieve Better Outcomes The key to realizing the transformational benefits of AI lies in adapting businesses to fully exploit the capabilities of these tools, without exploiting the people impacted by the technology. Dr. Eliyahu Goldratt (father of the Theory of Constraints) believed that technology could only bring benefits if it diminished a limitation. Therefore, organizations must ask critical questions to exploit the power of AI technology:

- What is the power of the new technology?
- What limitation does the technology diminish?
- What rules enabled us to manage this limitation?
- And most importantly, what new rules will we now need?

Keeping the old rules that we had before the new technology limits the benefits we can realize. It is by removing the old rules and adopting new ones that transformational benefits are created. By providing credible answers to these questions, organizations can achieve a return on investment that is both efficient and effective, enabling their employees to focus on higher-level tasks and achieve more significant outcomes – higher returns, not just lower costs. This will enable companies to move beyond the short-lived relief of AI and realize its true potential as a transformational tool. Which Path Will You Take? The use of AI will be a threat for some but an opportunity for others. If history repeats itself, many organizations will adopt AI assistants, realize the efficiency gains, and pat themselves on the back for a short-term win. However, as these benefits become normalized, they will soon be back to where they began. Any gains they might have realized will be lost, and they will be left doing more with less, except now with their new AI assistant. On the other hand, there will be others who ask the right questions, change existing processes, and create new rules that enable them to reap the full benefits of AI technology. They will realize compounding benefits that accrue over time. What the future holds will depend on which path you take and your willingness to take a longer-term perspective focused on improving outcomes rather than just reducing costs. Which path will you take?

  • Measures without Measures is a Waste

    When it comes to risk & compliance it is important to identify, collect, and monitor data of all kinds. However, what data should be collected, and which is most useful? To answer this it is helpful to consider two principal meanings behind the word measure:

- Measurement: estimate or assess the extent, quality, value, or effect of something
- Method: a plan or course of action taken to achieve a particular purpose

The first meaning uses the word measure to refer to measurements, usually tied to values and most often the counting of things: How many injuries did we have this year? How many complaints did we receive? What was the amount of greenhouse gas emissions this year? These are the easiest to capture and are useful for providing the status or condition of a particular risk or compliance system. The second meaning of measure refers to a plan or course of action to achieve an effect or result. These measures, or you could say methods, take the form of controls to achieve specific risk & compliance objectives. W. Edwards Deming reminds us that, “A goal without a method is nonsense.” Similarly, for risk & compliance, methods without measurements are also nonsense. While it is essential to know the status of a risk & compliance system, it is also important to know the effectiveness of the measures that are keeping an organization operating between the lines and within a specified level of risk. These are most useful when assessing the performance of a risk & compliance program. Measuring the effectiveness of risk & compliance controls (i.e. measures) will help to identify whether the underlying systems are capable of keeping an organization in compliance today and in the future. Measures of effectiveness and performance are some of the best predictors of organizational resiliency. Unfortunately, many organizations do not measure the effectiveness of their risk & compliance controls. Work is done, but without the assurance that this work will produce the desired effect or result. These companies have measures (methods) without measures (measurements), which is waste. To reduce this waste, the first step is to evaluate the effectiveness of the most critical risk & compliance controls. Effectiveness will be connected with progress towards targeted outcomes and objectives. Identifying which controls are effective will form the basis for determining which should be eliminated or improved.
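The pairing of measures (controls) with measurements can be made concrete with a small sketch: each control carries a metric and a target, and effectiveness is reported as progress against that target. The control names, metrics, and figures below are hypothetical, and the ratio-based effectiveness formula is one simple choice among many.

```python
# Sketch of pairing measures (controls) with measurements so each
# control's effectiveness is known, not just its activity.
# All names and numbers are illustrative assumptions.

controls = [
    # (control, metric, target, actual) -- lower actual is better here
    ("incident investigation", "repeat incidents / year", 2, 5),
    ("permit-to-work",         "uncontrolled releases / year", 0, 0),
    ("supplier audits",        "nonconforming lots / year", 4, 3),
]

def effectiveness(target: float, actual: float) -> float:
    """1.0 means at or better than target; below 1.0 means falling short."""
    if actual <= target:
        return 1.0
    return target / actual

for name, metric, target, actual in controls:
    e = effectiveness(target, actual)
    status = "effective" if e >= 1.0 else "improve or eliminate"
    print(f"{name} ({metric}): {e:.2f} ({status})")
```

Even this crude report answers the question the post raises: which controls are producing their intended effect, and which are candidates for improvement or elimination.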

  • The Taxonomy of an Obligation

    When it comes to improving compliance it is important to know not only what your obligations are but also how each obligation has been designed to perform the regulation function. Knowing this will help organizations better understand what is needed to meet their obligations by understanding:

- The level of compliance rigour required
- The level of support needed from leadership and management
- Controls that may need to be established
- Who is accountable for which part (self, industry, or government)
- How best to improve compliance
- What level of investment to make
- What is at stake and the level of risk

Among other things, all of which are derived from the obligation design.

Four Obligation Designs There are four common ways that obligations are architected to regulate aspects of quality, safety, environmental and legal concerns. These can be described across the dimensions of micro-macro and means-ends parameters: Four Primary Regulatory Design Approaches

- Prescriptive-based (micro/means): rules that if followed will reduce risk.
- Management-based (macro/means): processes that must be followed to manage obligations and risk.
- Performance-based (micro/ends): specific levels of performance that must be achieved.
- Outcome-based (macro/ends): targeted outcomes that must be advanced.

Obligation Taxonomy Each compliance design approach will in turn create different demands on an organization, which can be discovered by considering where the regulation function is being applied to the structure of the obligation: Obligation Taxonomy Outcome-based regulations specify the ends, or the outcomes, and not the means. The onus is on organizations and industry to determine the means, the performance criteria, and the rules that should be followed. This is an example of self-regulation, where leadership is essential at all levels to advance outcomes. Performance-based regulations specify the level of performance needed to achieve the desired outcomes but not the means or the rules that should be followed. This is common with industry programs to achieve zero fatalities, zero emissions, incidents, breaches, and so on. Continual improvement is necessary to advance the desired outcome. In this case, industry associations act as the regulator and take on some of the leadership responsibilities. Prescriptive-based designs specify the details and do not specify performance or outcomes, just the rules to follow. This is the primary form of government regulation, which takes on responsibility to achieve the desired outcomes. Organizations are expected to conform to the rules. Leadership is still important, but perhaps less so, or in a different way. Following rules requires a culture of conformance rather than a culture of improvement and proactivity. Management-based designs like ISO 14000 and ISO 19600 focus more generally on the processes by which you manage obligations. What is being regulated are the management processes, not necessarily performance or outcomes. This makes management standards applicable to all forms of regulatory design, with the caveat that this only works when organizations incorporate performance and outcome standards alongside their management systems. Leadership is essential at the program level to ensure that effectiveness is not lost in the pursuit of consistency and efficiency. Regulatory bodies and standards organizations may elect to use a combination of the four regulatory designs based on the nature of the risks they are attempting to ameliorate through regulation. Compliance analysts should be aware of this when they identify obligations and evaluate compliance risk. Obligation registers should include this information to help inform the actions for effective compliance. Related Posts: https://www.leancompliance.ca/post/an-objective-view-of-obligations
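An obligation register can capture this taxonomy directly, so that each obligation's design (micro/macro crossed with means/ends) informs the compliance response. The sketch below encodes the four designs named in the post; the example obligations in the register are hypothetical.

```python
# Sketch of recording the obligation taxonomy in an obligation register.
# The four designs mirror the micro-macro / means-ends dimensions above;
# the register entries are hypothetical examples.

from enum import Enum

class Design(Enum):
    #               (scope,   focus)
    PRESCRIPTIVE = ("micro", "means")   # rules to follow
    MANAGEMENT   = ("macro", "means")   # processes to manage obligations
    PERFORMANCE  = ("micro", "ends")    # performance levels to achieve
    OUTCOME      = ("macro", "ends")    # outcomes to advance

def classify(scope: str, focus: str) -> Design:
    """Look up the design from its position on the two dimensions."""
    return next(d for d in Design if d.value == (scope, focus))

register = [
    {"obligation": "lock-out/tag-out procedure", "design": classify("micro", "means")},
    {"obligation": "zero-harm target",           "design": classify("micro", "ends")},
    {"obligation": "compliance management program (ISO 19600 style)",
     "design": classify("macro", "means")},
]

for entry in register:
    print(entry["obligation"], "->", entry["design"].name)
```

With the design recorded alongside each obligation, the register can drive the downstream decisions the post lists: the rigour required, who is accountable, and what kind of leadership or culture the obligation demands.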

  • The Compliance Case for Sovereign AI Data Centres in Canada

    Canada's sovereign AI infrastructure is being built right now. Federal investment is flowing into domestic compute capacity. New privacy legislation is imminent. Environmental scrutiny of AI energy consumption is intensifying. AI governance frameworks are formalizing. And the compliance obligations facing data centre operators span seven distinct domains — each evolving independently, many of them overlapping in what they demand from the same operational activities. The organizations that build compliance capability into their operations from the start will have a structural advantage over those that try to retrofit parallel systems after the fact. I've prepared an executive briefing for Chief Compliance & Risk Officers and senior leaders responsible for data centre compliance and operational governance. It maps the full regulatory landscape and outlines a proven approach to managing it without the overhead of parallel compliance programs. The briefing is attached.

  • Taking Ownership: The First Step to Operational Compliance

    For decades, compliance has been one of the most reactive functions in the enterprise—more reactive than finance, operations, or even IT. While there are reasons why this is the case, this excessive reactivity has created a mission-critical gap: a dangerous vacuum where managerial accountability should exist but has been replaced with busywork. The Abdication Problem Managers, for the most part, have quietly abdicated their compliance responsibilities. They've handed them off to third-party consultants, delegated them to understaffed compliance departments, or worst of all, outsourced their thinking entirely to external auditors. When audit findings arrive (although not the only measure of effectiveness), these same managers treat them as someone else's problem to fix rather than their failure to prevent. This abdication means obligations go unowned. And unowned obligations don't get fulfilled—they get tracked, reported on, and documented, but not actually fulfilled. The organization drifts outside the lines, remains blind to emerging risks, and loses sight of its mission while everyone points to procedures that nobody truly owns. Why "Be Proactive" Doesn't Work The obvious answer seems to be: stop being reactive and start being proactive. Get ahead of issues. Anticipate problems. Be forward-thinking. If only it were that simple. Telling a reactive organization to become proactive is like telling someone who can't swim to simply start swimming better. The problem isn't their technique—it's that they haven't learned to stay afloat. You cannot be genuinely proactive about obligations you don't actually own. Ownership Comes First The path forward begins with a foundational shift: organizations must take ownership of their obligations and the risks those obligations address. Not delegated ownership. Not documented ownership. Real ownership—where specific people accept responsibility for ensuring specific promises are kept and specific hazards are controlled. 
This means: Managers understanding their obligations as personal commitments, not corporate procedures Leaders recognizing that compliance risk is operational risk, not a separate concern Executives accepting that audit findings represent their management failures, not their auditors' discoveries What AI Cannot Do And if you thought AI can help you with this, you will be left wanting. Here's the thing: AI cannot take ownership of your obligations. It can't even take ownership of its own outputs. AI might be able to analyze some of your compliance gaps, generate your procedures, monitor your controls, and flag your risks—assuming you even have a complete set of those. It can make compliance activities faster, cheaper, and more efficient. But it cannot look your stakeholders in the eye and promise them anything. It cannot accept accountability when things go wrong. It cannot decide what matters and what doesn't. Ownership is an irreducibly human act. It requires judgment, commitment, and the willingness to be held responsible. These aren't features that can be automated or algorithmic capabilities that can be trained. They're moral choices that only people can make. Organizations rushing to deploy AI for compliance are often doing so precisely to avoid ownership—creating yet another layer of delegation, another place to deflect accountability. "The system didn't flag it" becomes the new "the auditor didn't catch it." Until Ownership, Nothing Changes Without this ownership foundation, compliance will remain exactly as it is: reactive, fragmented, and procedural. It won't improve. It won't integrate into operations. It won't create value. Organizations will continue generating documentation that nobody reads, attending training nobody remembers, and responding to findings nobody prevents. They'll add AI tools to the stack, automate the busywork, and still fail to keep their promises because nobody has actually accepted responsibility for keeping them. 
The transformation to operational compliance—where obligations become capabilities and compliance creates value—cannot begin until someone looks at the organization's promises and risks and says: "These are mine. I own them." Everything else follows from that moment. Nothing meaningful happens before it. And no technology, no matter how intelligent, can say those words for you.

bottom of page