- Proactive vs. Predictive vs. Reactive
Predictive analytics is a topic of much discussion these days and is considered by some to be a proactive measure against safety, quality, environmental, and regulatory failure. Predictive analytics can help to prevent a total failure if controls can respond fast enough and if the failure mode is predictable in the first place. However, when uncertainty (the root cause of risk) arises from natural variation (aleatory uncertainty), we cannot predict outcomes. And when uncertainty is due to a lack of knowledge (epistemic uncertainty), prediction is limited by the strength of our models, experimentation, and study of cause and effect. Predictive analytics is not a substitute for effective risk management. To properly contend with risk we must be proactive rather than only predictive. We need to estimate uncertainty (both aleatory and epistemic), its impacts, and the effectiveness of the controls we have put in place either to guard against failure (margins) or to reduce its likelihood and severity (risk buy-down).
- What is Management of Change
Change can be a significant source of risk. That is why compliance programs include a risk-based process for managing planned changes. In highly regulated, high-risk industries this process is commonly referred to as Management of Change, or MOC. This blog takes a look at MOC across a variety of regulations and standards that are used to help buy down risk.

What is Management of Change?
MOC is a critical process used to ensure that no unintended consequences occur as a result of planned changes. It is required by the EPA RMP rule (40 CFR Part 68), OSHA 1910.119, the NEB, API RP 1173, CSA Z767-17, and ICH, and is now part of the ISO 45001 safety standard. An effective MOC process helps to plan, implement, and manage change to prevent or mitigate unintended consequences that affect the safety of workers, the public, or the environment. Although MOC processes may look different depending on the industry or compliance system involved, the purpose remains the same: to avoid unnecessary risk.

MOC differs from change management, which refers to the people side of change (Kotter, PROSCI, etc.) and focuses on changing the mindsets, attitudes, and behaviours needed to effect a change. Change management is often confused with management of change, which refers to the technical side of change and focuses on risk management. Depending on the type of change, both practices may be necessary.

An MOC process provides a structured approach to capture a change, identify and mitigate risks, assess impacts (organization, procedures, behaviours, documentation, training, etc.), define work plans to effect change safely, engage stakeholders, obtain necessary approvals, and update affected documentation. By following such a process, risk can be adequately ameliorated, which is perhaps the most important measure of MOC effectiveness. While managing risk for individual changes is of value, companies with advanced MOC capabilities are able to measure the total level of risk proposed or currently being introduced across a facility, process, or product line. This information is used to ensure that overall risk is handled within existing risk controls.

When to Use MOC
The applicability of an MOC process is determined by identifying proposed changes that could have significant unintended consequences. These are named differently by each standard or regulation. Here is a list of examples:
- covered processes
- covered pipeline segments
- high consequence areas
- safety critical roles or positions
- safety critical procedures
- safety critical equipment or assets, and so on

When changes are made to any of the above, an MOC is required. However, there is an increasing trend towards using a single MOC process to manage all changes, even those not required by a given standard or regulation. This has become viable through the introduction of computer automation and adaptive workflows that can adjust the level of rigour commensurate with the level of risk.

When Managing Change Hinders Innovation
Innovation is necessary for growth and often requires that risks be taken. However, a common sentiment is that compliance is getting in the way of product or process innovation. The pharmaceutical sector is one of the most regulated in industrialized countries. The FDA has strict requirements for verification and validation of products and services. The risks to patients are many, so it makes sense to scrutinize every aspect from design to delivery of new products.
Changes made during the product life cycle can lead to re-validation and additional clinical trials, all of which delay the introduction of the new drug or medical device. In 2005, the ICH Q9 Quality Risk Management guideline was introduced to bring a risk-based approach to this industry, paralleling the risk-based approach introduced by the Center for Chemical Process Safety. The same risk-based thinking applies to the medical device sector through the ISO 14971 risk management standard. These measures partially address the question of risk management and innovation and so were welcomed by industry and the FDA. This risk-based approach leverages the ICH Q8 standard, which introduced, among other things, the concept of design space. A design space establishes parameters that have been demonstrated to provide quality assurance. Once a design space is approved, changes within the design space boundaries are not considered a change from a regulatory point of view. This creates a space for innovation to occur.

Replacement in Kind
Now, let's consider the process sector, where a similar concept to design spaces is used, known as "Replacement in Kind" or RIK. The idea is that when changes are made to the "design basis," a Management of Change (MOC) process must be followed to manage risk. Otherwise, the change is considered a "replacement" and not a change from a regulatory point of view. In many ways, RIK has the same effect that design space has in the pharma/medical device sectors. They both define boundaries within which certain changes can occur while still producing a certain design outcome. Unfortunately, one notable difference between the two approaches is how the design basis is currently managed in the process sector. Design information tends not to be as controlled or managed as well as it is in the pharma/medical device industry. In fact, it is common in older facilities to find that the design basis for a process or piece of equipment is no longer known, and engineers and maintenance crews resort to using the manufacturer's specifications for the equipment, parts, or material substitutions. This has the effect of reducing the options and innovations that might otherwise be available. In this sense, improving the management of the design basis could allow for more innovation in the process sector. More changes could be considered RIK without increasing risk. This would result in fewer MOCs and fewer resources spent redoing hazard analyses and risk assessments and implementing unnecessary risk measures.

What the Standards and Regulations Say
For those who would like to explore the topic of MOC further, MOC requirements from selected standards and regulations are provided below. It is worth noting that the details of "how" to follow the guidelines are left to each organization to determine based on their business and level of risk.

Title 40 CFR Part 68 – EPA RMP Program
§68.75 Management of change.
(a) The owner or operator shall establish and implement written procedures to manage changes (except for "replacements in kind") to process chemicals, technology, equipment, and procedures; and, changes to stationary sources that affect a covered process.
(b) The procedures shall assure that the following considerations are addressed prior to any change: the technical basis for the proposed change; impact of change on safety and health; modifications to operating procedures; necessary time period for the change; and, authorization requirements for the proposed change.
(c) Employees involved in operating a process and maintenance and contract employees whose job tasks will be affected by a change in the process shall be informed of, and trained in, the change prior to start-up of the process or affected part of the process.
(d) If a change covered by this paragraph results in a change in the process safety information required by §68.65 of this part, such information shall be updated accordingly.
(e) If a change covered by this paragraph results in a change in the operating procedures or practices required by §68.69, such procedures or practices shall be updated accordingly.

OSHA 1910.119(l) – Process Safety Management
1910.119(l) Management of change.
1910.119(l)(1) The employer shall establish and implement written procedures to manage changes (except for "replacements in kind") to process chemicals, technology, equipment, and procedures; and, changes to facilities that affect a covered process.
1910.119(l)(2) The procedures shall assure that the following considerations are addressed prior to any change:
1910.119(l)(2)(i) The technical basis for the proposed change;
1910.119(l)(2)(ii) Impact of change on safety and health;
1910.119(l)(2)(iii) Modifications to operating procedures;
1910.119(l)(2)(iv) Necessary time period for the change; and,
1910.119(l)(2)(v) Authorization requirements for the proposed change.
1910.119(l)(3) Employees involved in operating a process and maintenance and contract employees whose job tasks will be affected by a change in the process shall be informed of, and trained in, the change prior to start-up of the process or affected part of the process.
1910.119(l)(4) If a change covered by this paragraph results in a change in the process safety information required by paragraph (d) of this section, such information shall be updated accordingly.
1910.119(l)(5) If a change covered by this paragraph results in a change in the operating procedures or practices required by paragraph (f) of this section, such procedures or practices shall be updated accordingly.

API Recommended Practice 1173 – Pipeline Safety Management
8.4 Management of Change (MOC)
8.4.1 General: The pipeline operator shall maintain a procedure for management of change (MOC). For the MOC, the pipeline operator shall identify the potential risks associated with the change and any required approvals prior to the introduction of such changes.
8.4.2 Types of Changes: The types of changes that MOC addresses shall include: technical, physical, procedural, and organizational. Changes to the system shall include permanent or temporary. The process shall incorporate planning for each of these situations and consider the unique circumstances of each.
8.4.3 Elements of MOC Process: An MOC process shall include the following: reason for change; authority for approving changes; analysis of implications; acquisition of required work permits; documentation (of the change process and the outcome of the changes); communication of changes to affected parties; time limitations; and qualification and training of staff.

CSA Z767-17
7.2 Management of change
7.2.1 The PSM system shall include a MOC system. The primary focus of MOC shall be to manage risks related to design changes and modifications to equipment, procedures, and organization.
The MOC system shall:
a) define what constitutes a change (such as temporary, emergency) and what constitutes replacement in kind, which is not subject to MOC;
b) include changes in and deviations from operating procedures or safe operating limits;
c) include changes in organizational structure and staffing levels;
d) define the review processes and thresholds for approval of changes, based on scope or magnitude of the change;
e) require an assessment of hazards and risks associated with the change consistent with Clause 6.3;
f) ensure that the change is communicated to affected stakeholders prior to the change, and that any required training is provided before the change is implemented;
g) provide procedures for emergency changes, including a means to contact appropriate personnel if a change is needed on short notice; and
h) define the documentation requirements (such as a description of the proposed change, the authorization for the change, the training requirements, the updated drawings, and the verification that the change was completed as designed).

ICH Pharmaceutical Quality System Q10
The change management system ensures continual improvement is undertaken in a timely and effective manner. It should provide a high degree of assurance there are no unintended consequences of the change. The change management system should include the following, as appropriate for the stage of the lifecycle:
(a) Quality risk management should be utilised to evaluate proposed changes. The level of effort and formality of the evaluation should be commensurate with the level of risk;
(b) Proposed changes should be evaluated relative to the marketing authorisation, including design space, where established, and/or current product and process understanding. There should be an assessment to determine whether a change to the regulatory filing is required under regional requirements. As stated in ICH Q8, working within the design space is not considered a change (from a regulatory filing perspective). However, from a pharmaceutical quality system standpoint, all changes should be evaluated by a company's change management system;
(c) Proposed changes should be evaluated by expert teams contributing the appropriate expertise and knowledge from relevant areas (e.g., Pharmaceutical Development, Manufacturing, Quality, Regulatory Affairs and Medical), to ensure the change is technically justified. Prospective evaluation criteria for a proposed change should be set;
(d) After implementation, an evaluation of the change should be undertaken to confirm the change objectives were achieved and that there was no deleterious impact on product quality.
- Five Theories That Will Transform Your Compliance
In the world of ethical, regulatory, and stakeholder obligations, understanding the underlying theories that drive compliance is key to achieving both compliance and mission success. Compliance isn't just about following rules; it's about employing strategic principles that not only ensure adherence but also deliver the benefits of always staying between the lines and ahead of risk. In this article, we will delve into the power of Management Theory (ISO 37301), Promise Theory, Systems Theory, Risk Theory, and Lean Management Theory, exploring how these theories, when put into practice, can elevate your compliance game.

Management Theory (ISO 37301): The Blueprint for Compliance Excellence
The ISO 37301 (Compliance Management Systems) standard is rooted in management theory and serves as a comprehensive guide to managing obligations effectively. It goes beyond mere rule-following and focuses on proactive strategies for meeting obligations efficiently. Key Takeaway: ISO 37301 provides a structured approach to compliance, emphasizing the importance of proactive planning and performance.

Promise Theory: A Culture of Trust through Compliance
Promise Theory, introduced by computer scientist Mark Burgess, emphasizes that compliance is not merely a checklist; it's a collection of promises (policies) made to stakeholders. When these promises align with obligations, compliance becomes part of an organization's culture. Key Takeaway: Promise Theory transforms compliance into a living culture of trust, where commitments to stakeholders are honoured and upheld.

Systems Theory: Compliance as an Interconnected Symphony
Systems Theory underscores that compliance is not achieved in isolation. Instead, it's a symphony of interconnected components and processes within an organization that must work together seamlessly. Compliance is more than the sum of its parts. Key Takeaway: Systems Theory highlights that Minimum Viable Compliance (MVC) is achieved when essential functions, behaviours, and interactions are performing together at levels sufficient to produce compliance outcomes.

Risk Theory: Navigating Compliance in Uncertain Waters
Risk Theory acknowledges that compliance is not just about meeting expectations under ideal conditions. It recognizes that businesses must be resilient and adaptable in the face of uncertainty and risk. Key Takeaway: Risk Theory encourages organizations to build effective risk measures to improve the probability that compliance outcomes will be achieved in the presence of uncertainty.

Lean Theory: Efficiency and Continuous Improvement
Lean Management is a philosophy that focuses on efficiency, waste reduction, and continuous improvement. When applied to compliance, it streamlines processes and eliminates inefficiencies. Key Takeaway: Lean Management principles can be harnessed to optimize compliance processes, making them more efficient and adaptable. This frees up resources to be more proactive with compliance, delivering compounding benefits over time.

Harnessing the Power
Understanding the theories behind compliance is crucial for success in ethical and regulatory matters. This article explored five powerful theories: Management Theory (ISO 37301), Promise Theory, Systems Theory, Risk Theory, and Lean Theory, and their potential to transform compliance. ISO 37301 offers a structured approach, emphasizing proactive planning. Promise Theory fosters a culture of trust by aligning commitments with obligations.
Systems Theory stresses the interconnected nature of compliance components. Risk Theory focuses on resilience and adaptability. Lean Management improves efficiency. In summary, compliance is about more than just rules; it's about using these theories to thrive in a competitive business world. Applying them can help you navigate uncertainty, build trust, streamline processes, and achieve compliance excellence, improving the probability of long-term mission success.
- Moving Compliance to the Performance Zone
Compliance is often viewed as a cost of doing business. However, instead of viewing compliance only as an expense and something to reduce, what if it were seen as part of the overall business value proposition? In his book "Zone to Win," Geoffrey A. Moore introduces a framework for understanding how change in the form of disruption can be introduced and managed to help organizations successfully compete. This is a very useful model not only for understanding how companies can best prioritize their efforts but also for understanding where and how compliance fits in.

The Four Zones
Moore describes four zones that define different areas of the business, each having distinct goals, focus, and attention during disruption:
- Performance Zone – The focus of this zone is executing the business model and creating revenue.
- Productivity Zone – This zone focuses on efficiency, effectiveness, and meeting compliance. This is the home of shared services, programs, and systems, and where cost is managed.
- Incubation Zone – This zone looks 3-5 years out to position the company to catch the next wave of growth.
- Transformation Zone – This is where a disruptive business model scales and is introduced to the performance zone.

The productivity zone, according to Moore, is responsible for delivering the following value propositions: regulatory compliance (meeting obligations), improved efficiency (doing things right), and improved effectiveness (doing the right things). This is also the primary place where LEAN is applied and where compliance improvements are made.

Moving Compliance to the Performance Zone
While compliance is enabled by the productivity zone, it is manifested in the performance zone. This creates a number of tensions, including that between production and compliance objectives such as safety, quality, environmental, regulatory, and so on. When you view compliance only as a cost, you want to spend as little on it as possible. This can often lead to reducing the effort altogether instead of investing in compliance maturity. When it comes to safety, quality, and regulatory compliance, this can create significant risk. Moore is correct in saying that, as is the case with quality, you cannot inspect compliance in; you have to design it in. I would argue that you need to go further and say that you don't have a business without compliance. Therefore, it is not simply a choice between whether to inspect or design in quality; you need to do more.

LEAN talks about value in terms of activity that directly contributes to building the product the customer is purchasing. For example, inspections are seen as necessary but not value added. If the product is built correctly in the first place, you would not need to do inspections. The customer does not want to pay for the cost of rework. Now, this line of thinking can (inappropriately) also be applied to safety. If only workers acted in a safe manner, we would not need safety systems. Safety systems only exist because of unsafe behaviours, and the customer should not have to pay for that. The question that is really being asked is, "What is the value of compliance, and is that something that customers are willing to pay for?" This same question was asked during the early days of quality. We now know the answer: quality adds value, reduces cost, and is something that customers are willing to pay for. This is now where compliance is at.
Compliance is more than a cost or a means to obtain a regulatory license. Compliance contributes directly to, and is part of, a company's business value proposition, social license to operate (legitimacy, credibility, trust), and regulatory license to operate (quality, safety, environmental, integrity). Customers will not only pay for it; they will demand it. It's time to make compliance a full citizen of the performance zone and not just a visitor.
- Cybersecurity Risk: An Overview of Annual Loss Expectancy (ALE)
Cybersecurity is a constantly evolving field, with new threats emerging every day. As such, it is essential for organizations to take a proactive approach to managing cybersecurity risks. The Annual Loss Expectancy (ALE) formula is a crucial tool in this process. In this article, we will explore the history of ALE, provide examples of its application, and explain how it is used to evaluate inherent and treated cybersecurity risks and the effects of treatment.

History of ALE
The history of ALE dates back to the 1970s, when it was first introduced in the field of insurance. ALE was used to calculate the potential financial losses associated with property damage or loss due to natural disasters, theft, or other unexpected events. Over time, ALE was adapted for use in cybersecurity risk management. Today, ALE is widely used in the cybersecurity industry as a standard method for evaluating the financial impact of cyber threats. The formula for calculating ALE is relatively simple, but the data required as input can be complex.

How is ALE Calculated?
ALE is a risk management formula used to calculate the expected monetary loss from a security incident over a year. It is calculated by multiplying the Annual Rate of Occurrence (ARO) by the Single Loss Expectancy (SLE). ARO is the estimated number of times a security incident is expected to occur in a year, and SLE is the estimated monetary value of a single incident.

ALE = ARO x SLE

For example, if a business estimates that it will experience a security breach once a year, and the cost of the breach is estimated to be $50,000, then the ALE would be: ALE = 1 x $50,000 = $50,000. This means that the business can expect to lose $50,000 per year from this particular security incident.

How is ALE Used to Manage Risk?
ALE is a critical tool in managing cybersecurity risks. The ALE formula can be used to calculate both inherent and treated cybersecurity risks. Inherent risk refers to the level of risk that exists without any mitigating controls in place, while treated risk refers to the level of risk that remains after implementing mitigating controls. This information can then be used to prioritize risk effort. To illustrate the use of ALE in cybersecurity risk management, consider the following table:

| Risk | ARO | SLE | Inherent Risk ALE | Treated Risk ALE | Effect of Treatment |
| --- | --- | --- | --- | --- | --- |
| Phishing | 1 in 100 | $10,000 | $100 | $10 | 90% reduction |
| Ransomware | 1 in 500 | $50,000 | $100 | $10 | 90% reduction |
| Inside Threat | 1 in 1,000 | $100,000 | $100 | $20 | 80% reduction |
| Advanced Persistent Threat | 1 in 10,000 | $1,000,000 | $100 | $50 | 50% reduction |

In this scenario, a company has a database containing sensitive information that is accessible to all employees. Inherent risk is calculated by determining the potential financial loss if an attacker gains access to the database. If the estimated SLE is $100,000 and the ARO is 1 in 1,000, then the inherent risk ALE would be $100. Treated risk, on the other hand, takes into account the effectiveness of mitigating controls. Suppose the company implements access controls to restrict access to the database to only authorized personnel. The treated risk ALE would be recalculated using the same ARO but a lower SLE. If the estimated SLE is now $20,000, then the treated risk ALE would be $20. The Effect of Treatment column shows the percentage reduction in ALE after implementing mitigative controls.
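To make the arithmetic concrete, here is a minimal Python sketch of the ALE calculation using the inside-threat figures from the example above. The ARO and SLE values are the illustrative assumptions from the table, not real data.

```python
# A minimal sketch of the ALE calculation described above, using the
# inside-threat figures from the example (ARO of 1 in 1,000; SLE of $100,000
# untreated and $20,000 with access controls). Illustrative values only.

def annual_loss_expectancy(aro: float, sle: float) -> float:
    """ALE = Annual Rate of Occurrence x Single Loss Expectancy."""
    return aro * sle

# Inherent risk: no mitigating controls in place.
inherent_ale = annual_loss_expectancy(aro=1 / 1_000, sle=100_000)  # $100

# Treated risk: access controls lower the single loss expectancy.
treated_ale = annual_loss_expectancy(aro=1 / 1_000, sle=20_000)    # $20

# Effect of treatment: percentage reduction in ALE.
reduction = 1 - treated_ale / inherent_ale                         # 0.80

print(f"Inherent ALE: ${inherent_ale:,.0f}")
print(f"Treated ALE:  ${treated_ale:,.0f}")
print(f"Risk buy-down: {reduction:.0%}")
```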
Using ALE to Prioritize Risk Management Efforts
By using ALE, organizations can identify potential financial losses, prioritize their cybersecurity efforts, and allocate resources more effectively. ALE can be used to compare different risks and determine which are the most significant and which require immediate attention. The risks with the highest ALE values are the ones that pose the greatest financial threat to the organization and require the most attention. In the previous example, the organization can see that the Advanced Persistent Threat poses the greatest residual financial threat: although each risk has an inherent ALE of $100, the APT retains the highest treated ALE at $50. The organization should prioritize its efforts on mitigating this risk, for example by implementing advanced security measures and training employees on how to identify and report suspicious activity. Mitigating controls, such as data loss prevention programs, access and identity management, and cyber safety training, can significantly reduce the SLE and the ALE. The cost and effectiveness of the countermeasures should be factored into the evaluation of treated risk. It is crucial to ensure that the cost of implementing the countermeasures does not exceed the potential financial loss. Organizations must also consider the potential impact on business operations and the overall risk management strategy.

Conclusion
ALE is a crucial tool in managing cybersecurity risks. It enables organizations to identify potential financial losses, prioritize their cybersecurity efforts, and allocate resources more effectively. ALE is calculated by multiplying the ARO by the SLE and can be used to evaluate both inherent and treated cybersecurity risks. Mitigating controls, such as anti-virus software or employee training, can significantly reduce the SLE and the ALE. However, organizations must also consider the potential impact on business operations and the overall risk management strategy. By using ALE, organizations can take a proactive approach to managing cybersecurity risks, reducing the likelihood of security incidents, and minimizing the potential financial losses associated with such incidents. While no security measure can guarantee complete protection against cyber threats, ALE provides a useful framework for evaluating risks and making informed decisions about how best to direct risk efforts.
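Building on the calculation sketch above, this hedged example ranks the risks from the table by their treated (residual) ALE so that the largest remaining exposures surface first. The figures are the illustrative values from the table, not measured data.

```python
# A hedged sketch of using ALE to rank risks for prioritization. Values mirror
# the example table above; they are illustrative, not measured data.

risks = [
    {"name": "Phishing",                   "inherent_ale": 100, "treated_ale": 10},
    {"name": "Ransomware",                 "inherent_ale": 100, "treated_ale": 10},
    {"name": "Inside Threat",              "inherent_ale": 100, "treated_ale": 20},
    {"name": "Advanced Persistent Threat", "inherent_ale": 100, "treated_ale": 50},
]

# Rank by residual (treated) ALE: the largest remaining exposure comes first.
for risk in sorted(risks, key=lambda r: r["treated_ale"], reverse=True):
    reduction = 1 - risk["treated_ale"] / risk["inherent_ale"]
    print(f'{risk["name"]:<28} treated ALE ${risk["treated_ale"]:>3} '
          f'(treatment effect {reduction:.0%})')
```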
- Mapping KPI, KRI, and KCI to the Bowtie Risk Model
A Guide to Evaluating Risk Performance and Effectiveness

Introduction
To proactively contend with risks associated with meeting obligations, companies rely on Key Performance Indicators (KPIs), Key Risk Indicators (KRIs), and Key Control Indicators (KCIs). Integrating these essential metrics into the Bowtie Risk Model offers a powerful framework for evaluating their performance and effectiveness. This article will delve into the process of mapping KPIs, KRIs, and KCIs to the Bowtie Risk Model to optimize risk management strategies and enhance overall performance.

Understanding the Bowtie Risk Model
The Bowtie Risk Model is a visual and qualitative risk analysis tool that provides a clear and comprehensive representation of risk scenarios. It consists of several key components:
- Hazard: The potential source of harm or adverse event that may lead to unwanted consequences.
- Threats: Specific events or circumstances that can trigger the hazard and escalate the risk.
- Top Event: The central risk event that occurs when the hazard is triggered by a threat.
- Consequences: The potential outcomes and impacts resulting from the top event.
- Preventative Barriers: Measures in place to prevent the hazard from being triggered.
- Mitigative Barriers: Measures aimed at reducing the severity of consequences if the top event occurs.

Mapping KPI, KRI, and KCI to the Bowtie Risk Model
- Identify Relevant Metrics: Start by identifying the most relevant KPIs, KRIs, and KCIs for the specific risk scenario. These indicators should align with the organization's objectives, risk appetite, and regulatory requirements.
- Align KPIs with Consequences: Map the KPIs to the potential consequences of the top event. For example, if one of the consequences is a financial loss, the relevant financial KPIs could include impacts to revenue growth, cost control, or profitability.
- Map KRIs to Threats: Associate the KRIs with the identified threats in the Bowtie Risk Model. KRIs should act as early warning signals to detect potential threats before they escalate into top events. For instance, if one of the threats is a cybersecurity breach, relevant KRIs could include the number of unauthorized access attempts or malware detection rate.
- Connect KCIs to Barriers: Link the KCIs to the preventative and mitigative barriers in the Bowtie Risk Model. KCIs serve as indicators of the effectiveness of the control measures put in place to prevent and mitigate risks. If one of the preventative barriers involves employee training, relevant KCIs could include the percentage of employees who have completed the training or the number of observed near misses.

Evaluating Performance and Effectiveness
Once the mapping of KPIs, KRIs, and KCIs to the Bowtie Risk Model is complete, organizations can evaluate their performance and effectiveness in risk management through the following steps:
- Data Collection: Gather relevant data for each indicator from various sources such as performance reports, risk assessments, incident logs, and compliance audits.
- Data Analysis: Analyze the collected data to assess the performance of KPIs, the trends in KRIs, and the effectiveness of KCIs in meeting the objectives and mitigating risks.
- Benchmarking: Compare the performance of KPIs, KRIs, and KCIs against established benchmarks or industry standards to gain insights into how well the organization is managing risks relative to its peers.
- Continuous Improvement: Identify areas where KPIs, KRIs, and KCIs fall short of expectations and use this information to develop targeted improvement strategies. Regularly update and refine the Bowtie Risk Model and associated indicators to stay aligned with changing business conditions and risk profiles.

Conclusion
Integrating Key Performance Indicators (KPIs), Key Risk Indicators (KRIs), and Key Control Indicators (KCIs) into the Bowtie Risk Model presents organizations with a robust framework to evaluate risk management performance and effectiveness. By mapping these indicators to the relevant components of the Bowtie model, companies can gain valuable insights into their risk landscape, identify potential weaknesses, and enhance their risk management strategies. This proactive approach ensures that organizations are well-prepared to navigate uncertainties, minimize threats, and achieve sustainable success in an ever-evolving business environment.
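To make the mapping described above concrete, here is a minimal Python sketch of one way to represent a bowtie scenario with KPIs attached to consequences, KRIs to threats, and KCIs to barriers. The classes, metric names, and threshold values are hypothetical illustrations under the article's cybersecurity example, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    kind: str          # "KPI", "KRI", or "KCI"
    current: float
    threshold: float   # alert when current exceeds this value

@dataclass
class Threat:
    description: str
    kris: list[Indicator] = field(default_factory=list)

@dataclass
class Consequence:
    description: str
    kpis: list[Indicator] = field(default_factory=list)

@dataclass
class Barrier:
    description: str
    barrier_type: str  # "preventative" or "mitigative"
    kcis: list[Indicator] = field(default_factory=list)

@dataclass
class BowTie:
    hazard: str
    top_event: str
    threats: list[Threat]
    consequences: list[Consequence]
    barriers: list[Barrier]

    def breached_indicators(self) -> list[Indicator]:
        """Return every mapped indicator that has crossed its alert threshold."""
        all_indicators = (
            [i for t in self.threats for i in t.kris]
            + [i for c in self.consequences for i in c.kpis]
            + [i for b in self.barriers for i in b.kcis]
        )
        return [i for i in all_indicators if i.current > i.threshold]

# Hypothetical scenario based on the examples in the article.
scenario = BowTie(
    hazard="Sensitive data held in a shared database",
    top_event="Unauthorized access to the database",
    threats=[Threat("Cybersecurity breach",
                    kris=[Indicator("Unauthorized access attempts per month", "KRI", 42, 25)])],
    consequences=[Consequence("Financial loss",
                              kpis=[Indicator("Incident cost as % of budget", "KPI", 3.0, 5.0)])],
    barriers=[Barrier("Employee security training", "preventative",
                      kcis=[Indicator("Overdue training assignments", "KCI", 12, 5)])],
)

for indicator in scenario.breached_indicators():
    print(f"{indicator.kind} breached: {indicator.name}")
```

Keeping each indicator attached to the bowtie element it monitors makes it straightforward to report which part of the risk picture (threat, consequence, or barrier) is drifting.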
- Using LEAN 5M+E to Discover Probable Causes
A simple yet powerful way to discover the causes of process problems is the LEAN 5M+E (or Six Sigma 6M) analysis technique. This approach considers six categories that can contribute to problems:
- Man (human related issues)
- Machine (equipment, systems, and technology)
- Materials (documents, drawings, standards, specifications)
- Methods (techniques, approaches, procedures)
- Measurement (data, units, metrics, KPIs)
- Environment (mother nature)

A combination of a Fishbone Diagram and 5 Whys can be used to brainstorm through these categories and produce a list of possible causes for the effect (problem). To get started, a process walk-through using the 5M+E as a guide can produce a preliminary list that serves as a road map for further investigation and process improvement.

An Example
In highly regulated, high-risk industries, organizations must use a Management of Change (MOC) process to assess and mitigate risk due to planned changes. This process typically follows a multi-step procedure, which often bottlenecks during the design activities when engineers look for solutions to effect change while maintaining the design integrity of the process, assets, and the plant. A process review using the 5M+E model was carried out to examine why design changes for MOCs bottleneck in engineering. Using this approach, probable causes were identified and then addressed by the process improvement team to reduce the bottleneck; a simple data sketch of this kind of review appears at the end of this article.

What to Look Out For
What is important, and often most difficult, when solving problems is coming up with a good problem statement. As some have said, a problem well stated is half solved. In addition, the better the problem is described, the better the solutions. Defining the problem with those who are experiencing the problem also yields better results. Those who are most familiar with the process will know best what the issues are and have ideas on how they can be improved. Here are a few questions you might consider when using 5M+E in your process analysis:
- What processes could benefit from using this technique?
- Which set of problems generates the greatest impact (you may need to do a Pareto Analysis)?
- What steps can you take today to start using this approach in your organization?
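Here is a hedged sketch of what capturing such a 5M+E review as data might look like, so probable causes can be tallied for a simple Pareto-style view. The problem statement and causes below are hypothetical placeholders, not findings from an actual review.

```python
# A hedged sketch of recording a 5M+E brainstorm as data and tallying causes
# per category. All entries are hypothetical illustrations.

from collections import Counter

problem = "Design changes for MOCs bottleneck in engineering"

causes_by_category = {
    "Man": [
        "A single senior engineer approves all design changes",
    ],
    "Machine": [
        "MOC tracking tool has no workflow or notifications",
    ],
    "Materials": [
        "Design basis documents are missing or out of date",
        "Equipment specifications scattered across systems",
    ],
    "Methods": [
        "Every change follows full MOC rigour, even low-risk ones",
        "No screening step to identify replacement-in-kind changes",
    ],
    "Measurement": [
        "No KPI for design review cycle time",
    ],
    "Environment": [
        "Seasonal turnaround work competes for engineering time",
    ],
}

# Tally causes per category to see where improvement effort should focus first.
tally = Counter({category: len(causes) for category, causes in causes_by_category.items()})
print(f"Problem: {problem}")
for category, count in tally.most_common():
    print(f"  {category:<12} {count} probable cause(s)")
```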
- Ethical Decision Making Involving AI
In the domain of Corporate Ethics and Compliance, one hammer is used to establish ethical behaviour – training. In many cases, this training consists of the transfer of knowledge with respect to rules, regulations, and codes of conduct. While this is important, it will not be enough to handle concerns associated with Artificial Intelligence (AI) systems and their use. What is missing from many organizations are the skills to make ethical decisions, which have more to do with values and meaning than with measurements and metrics – qualitative rather than quantitative decision making. These skills are no longer valued by some, in favour of algorithmic and numerical decision making, otherwise known as machine-based decisions. How will organizations keep humans in the loop when the loop no longer involves human decision making?

This is where compliance comes in. Compliance at its core is an ethical endeavour to stay between the lines and ahead of risk. When decisions are made to proceed with a course of action (for example, to use or not use AI) in the presence of uncertainty and the possibility of loss or harm, we are making an ethical choice. The commitments that are then made reflect the values we have prioritized. Unfortunately, all too often, top management goes straight from setting a course of action to specifying boots-on-the-ground tasks. It's no wonder corporate and ethical compliance are struggling. Ethical dilemmas are not considered, risk is not evaluated, and operational capabilities are not adjusted to stay between the lines and ahead of risk. Frankly, the message now seems to be, "attend the training and do the tasks." In the not too distant past, middle management used to do the translation work between top management and boots-on-the-ground activity. But no longer, as many have removed these roles to flatten their organizations. The skill of making ethical decisions now resides with those who directly own the obligations and with compliance teams.

That's why I believe our upcoming micro program, "Ethical Decision Making Involving AI," is so important. You will learn how to make ethical choices supporting responsible and safe AI along with your other compliance obligations. Can you help us out? I am looking for compliance practitioners who are interested in trying out this micro program (2 hours / week for four weeks) and providing us with feedback. The number will be limited. Message me to let me know if you would like to participate (ray.laqua@leancompliance.ca).
- Compliance as a Value Guardrail
Organizations today face increasing pressure to deliver value while navigating a myriad of regulations, stakeholder expectations, and ethical considerations. The concept of "value guardrails" is a powerful paradigm shift, transforming the perception of compliance programs from mere cost centres to essential guardrails ensuring and protecting sustainable value creation. Traditionally, compliance programs were viewed as necessary evils – administrative hurdles that companies had to clear to avoid penalties and legal issues. However, forward-looking organizations have begun to recognize that well-designed compliance initiatives can serve as strategic assets, functioning as critical guardrails that protect and enhance total value creation.

Compliance as a Value Guardrail
When implemented effectively, compliance programs across various domains – including safety, security, sustainability, quality, ethics, regulatory adherence, and ESG (Environmental, Social, and Governance) – function as a comprehensive system of value guardrails. These guardrails not only ameliorate risk but also help to maintain integrity and alignment with organizational obligations and commitments. For example:
- Risk Mitigation and Cost Avoidance – At its core, compliance helps organizations avoid costly pitfalls. By preventing safety incidents, data breaches, quality defects, and regulatory violations, companies can sidestep significant financial losses, reputational damage, and operational disruptions.
- Enhanced Operational Efficiency – Well-designed compliance processes often lead to streamlined operations. For instance, quality management systems can reduce waste and rework, while cybersecurity protocols can minimize downtime and data loss.
- Stakeholder Trust and Brand Value – Demonstrating a strong commitment to compliance across various domains builds trust with customers, investors, employees, and regulators. This trust translates into brand value, customer loyalty, and easier access to capital.
- Innovation Catalyst – Contrary to popular belief, compliance can drive innovation. Environmental regulations, for example, have spurred the development of cleaner technologies and more sustainable business models.
- Market Access and Competitive Advantage – Robust compliance programs can open doors to new markets and partnerships. In an era of complex global supply chains, companies with strong ethical and quality standards often gain preferential status as suppliers or partners.

Implementing Value Guardrails
To fully leverage compliance as an effective value guardrail, organizations should consider the following approaches:
- Integrate compliance into business strategy: Elevate compliance from a siloed function to a core capability of business strategy and decision-making processes.
- Foster a culture of proactive compliance: Encourage employees at all levels to view compliance as an enabler of success rather than a hindrance.
- Leverage technology: Implement advanced analytics, AI, and automation to enhance the efficiency and effectiveness of compliance programs.
- Measure and communicate value: Develop metrics that demonstrate the tangible and intangible benefits of compliance initiatives (measures of effectiveness).
- Continually improve: Constantly adapt compliance programs, systems, and controls to align with evolving business needs and external requirements.
Organizations that view compliance programs as strategic value guardrails – protecting against downside risks while enabling sustainable growth – are better positioned to thrive in the long term. By re-framing compliance as a value guardrail rather than a cost centre, companies can unlock new opportunities, build resilience, and create lasting value for all stakeholders. Here are a few questions to help plan your adoption of value guardrails:
- What organizational values and outcomes need to be protected and ensured for mission success?
- How effectively do your compliance programs protect and enhance value creation?
- Where are the gaps in your value guardrails, and how should they be addressed?
- What steps can you take for compliance to always keep you between the lines and ahead of risk?
- The Effects of a Divided Brain on Risk and Compliance
This week I came across a LinkedIn post suggesting that CISOs (Chief Information Security Officers) often find themselves at a crossroads between innovation and gate-keeping. On one hand, they are expected to champion innovation, integrating cutting-edge technologies that can propel organizations forward. On the other hand, they are the gatekeepers of caution, responsible for mitigating risks and ensuring that the security architecture is not compromised. This is an important observation that applies to many other risk and compliance domains. However, I am not sure that what is being observed is a "crossroads." Instead, I believe we are observing the new reality for organizations: the need for whole-brain thinking and operations.

Two Brain Hemispheres
Iain McGilchrist writes about the impact of a divided brain in his book, "The Master and His Emissary: The Divided Brain and the Making of the Western World." McGilchrist argues that the human brain is divided into two hemispheres with distinct functions and tendencies. This division, he believes, is crucial to understanding human nature and the challenges of modern society.
- Right Hemisphere: Often referred to as the "Master," this hemisphere is attuned to the big picture. It's associated with intuition, creativity, empathy, and our connection to the world around us. It's the part of the brain that helps us understand context, relationships, and the nuances of human experience.
- Left Hemisphere: Often called the "Emissary," this hemisphere is focused on details, logic, and analysis. It's responsible for language, mathematics, and the development of tools and technology. It's essential for breaking down complex problems into manageable parts.

McGilchrist contends that Western society (and I will add business in particular) has become overly reliant on the left hemisphere, leading to an imbalance. This overemphasis on logic, analysis, and control has resulted in a fragmented, dehumanized world – a world of algorithms and machine-based decisions. While the left hemisphere is crucial for progress, its dominance has overshadowed the wisdom and intuition of the right hemisphere. In essence, McGilchrist's work calls for a more balanced approach, recognizing the value of both hemispheres and finding ways to integrate their strengths. By understanding the differences between the two halves of our brain, we can gain deeper insights into ourselves and the world around us. The crossroads that CISOs and others are experiencing may in fact not be a call to decide between innovation and gate-keeping, but rather the need to bring these two aspects together for the benefit of the whole.

Two Modes of Operation
Geoffrey Moore's book "Zone to Win," while not written to address the divided brain, provides a useful model and operational approach applicable to this situation. In his book, Moore argues that to succeed, businesses need different zones, each having different purposes, behaviours, and goals. Each has its own operating system and culture, or better said, mode of operation. A significant challenge for CISOs (along with other C-suite roles) is that they often have more than one zone of operation within their mandate. These are often structured functionally, with a large span of control, and managed using the same behaviours and practices – and therein lies the rub. With respect to behaviours, some will be more reactive, to contend with deviations, exceptions, and non-conformance.
However, others will be proactive, anticipating, planning, and acting to respond to new threats and opportunities. The reactive side tends to be more reductive, focused on the parts, whereas the proactive side tends to be integrative, focused on the whole. Geoffrey Moore's concept of business zones aligns closely with McGilchrist's hemispheric model. The reactive, detail-oriented approach required in some business zones mirrors the left hemisphere's focus on analysis and control. Conversely, the proactive, strategic mindset needed for other zones resonates with the right hemisphere's capacity for synthesis and innovation. The challenge for organizations, particularly in roles like the CISO, is to effectively balance these two modes of operation, often within a single function. This necessitates a deeper understanding of how the brain works and how it applies to organizational design.

Two Types of Risk
McGilchrist's two-hemisphere model also helps us understand how we contend with threats and opportunities.

Risk as Threat: A Left-Brain Perspective
Threats are typically associated with negative outcomes, potential losses, or dangers. They often involve clear and defined risks that can be analyzed and quantified. The left hemisphere, according to McGilchrist, is analytical, logical, and focused on details. It excels at identifying patterns, calculating probabilities, and developing strategies to mitigate threats. For instance, a financial analyst using data to predict market downturns is primarily employing left-brain functions.

Risk as Opportunity: A Right-Brain Perspective
Opportunities are associated with potential gains, growth, or positive outcomes. They often involve ambiguity and require a broader, holistic view to recognize. The right hemisphere is more intuitive, creative, and focused on the big picture. It excels at recognizing patterns, understanding context, and envisioning possibilities. An entrepreneur spotting a new market trend is primarily using right-brain functions.

While the two hemispheres are often described as separate, they are interconnected and work together. In essence, understanding the different strengths of the left and right brain can provide valuable insights into how we perceive and respond to risk. What is important to understand is that protecting against loss is different from pursuing gains. Each will have different cultures, behaviours, and methods. By harnessing the capabilities associated with threats along with those associated with opportunities, individuals and organizations can develop more comprehensive and effective risk management strategies.

Two Management Capabilities
The left and right brain model also sheds light on two management capabilities that are often confused but are critical to meeting the breadth of obligations spanning rules, practices, targets, and outcomes. These capabilities are known as Management Systems and Management Programs.

Management Systems
When it comes to operational risk – the uncertainty of meeting goals and objectives – we need systems and controls that make things more certain. These systems need to be consistent and reliable, maintaining state by removing variability through feedback and control loops that correct for exceptions and deviations from the norm (expected behaviour). We don't want innovation in the operation of these systems. Instead, we want conformance to standards and predictable performance.
These systems are best described as closed-loop systems and are often called "Management Systems."

Management Programs
However, we also need to contend with emerging and new threats and opportunities. This requires introducing change to adapt to variations in the conditions under which an organization operates or the actions it is engaged in. Here we need openness and innovation to adapt existing systems and processes to respond, for example, to expanded attack surfaces and threats. This requires exploration and discovery along with alignment and accountability – a prerequisite for proactive behaviour. These kinds of systems change state and are better characterized as open-loop systems, often referred to as "Management Programs."

McGilchrist's model of the divided brain offers a compelling lens through which to view these management functions. The analytical, detail-oriented left hemisphere aligns with the structured, controlled nature of management systems. These systems thrive on consistency, predictability, and a focus on maintaining conformance to rules and practice standards. Conversely, the intuitive, creative right hemisphere resonates with the dynamic, adaptive nature of management programs. These programs necessitate exploration, innovation, and a capacity to navigate uncertainty. By recognizing the distinct roles that both hemispheres play in management, organizations can optimize their approaches. Again, this is not a crossroads but the need to maintain stability while steering towards targeted outcomes.

Towards Balanced Brain Operations
C-suite roles face a complex balancing act between fostering innovation and mitigating risk. On one hand, they are expected to champion cutting-edge technologies that drive organizational advancement. On the other, their role demands a vigilant focus on uncertainty and risk management. This tension can be understood through the lens of Iain McGilchrist's theory of the divided brain. The analytical, detail-oriented left hemisphere aligns with risk management responsibilities, while the creative, big-picture perspective of the right hemisphere is crucial for innovation. To effectively navigate this challenge, C-suite roles benefit from two management capabilities. Management systems, driven by the left hemisphere, focus on control and risk mitigation. In contrast, management programs, aligned with the right hemisphere, emphasize innovation and adaptation. By understanding and leveraging both hemispheres, organizations can optimize their strategies to improve the probability of mission success.
- AI Governance, Guardrails and Lampposts
At today's monthly "Elevate Compliance Webinar" participants learned strategies and methods for effectively governing artificial intelligence (AI) in organizations, particularly within the context of compliance and risk management. Below is a summary of the key points that were covered:

1. Introduction and Context: The rise of AI, particularly since the introduction of ChatGPT in 2022, has brought both tremendous opportunities and risks to organizations. It is disrupting industries at a rapid pace, similar to how the internet once did. Governance in the AI era requires more than traditional oversight; it requires proactive measures like "guardrails" (preventing harm) and "lampposts" (highlighting risks).

2. Why AI Is Different: AI presents unique risks because of its ability to operate with minimal human oversight, learn from data, and make autonomous decisions. AI's rapid evolution means that many organizations are unprepared to govern it effectively, leading to a need for better tools and strategies.

3. Challenges with AI Regulation: While regulations like the EU AI Act are emerging, they are still new and untested. Moreover, they are unlikely to harmonize globally, which will make governance more complex. Organizations cannot rely solely on external regulation but must develop internal governance frameworks.

4. Methods of AI Governance: Governance must balance two types of terrains: order (predictability) and chaos (uncertainty). AI belongs more in the realm of chaos, where traditional policies and principles (suited for order) may not suffice. AI governance should incorporate guardrails (e.g., safety and security protocols) and lampposts (e.g., transparency and fairness measures) to navigate uncertainty.

5. A Program to Govern AI: A comprehensive AI governance program should include four elements:
- AI Code of Ethics: Guiding ethical principles and clear guidelines for AI development.
- Responsible AI Program: Ensuring AI systems are used ethically, transparently, and fairly, with proper risk management and stakeholder engagement.
- AI Design Standards: Technical guidelines for AI development, emphasizing ethical considerations.
- AI Safety Policies: Measures to prevent harm and ensure robust testing and monitoring of AI systems.

6. Conclusion: AI governance is about keeping organizations "on mission, between the lines, and ahead of risk." This requires more than reactive compliance; it demands proactive governance methods tailored to the uncertainties of AI technology.

In summary, organizations need a structured, proactive approach to AI governance, integrating policies, ethical codes, safety standards, and continuous oversight to mitigate risks and ensure compliance in a rapidly evolving landscape.
- Toasters on Trial: The Slippery Slope of Crediting AI for Discoveries
In recent days, a thought-provoking statement was made suggesting that artificial intelligence (AI) should receive recognition for discoveries it helps to facilitate. This comment has sparked an interesting debate, highlighting a significant contradiction in how we view technology's role in society. On one side of the argument, many argue that technology, including AI, should not be held responsible for its consequences or how humans choose to utilize it. This perspective is often illustrated by the "gun metaphor" – the idea that guns themselves do not kill people, but rather people kill people using guns. This analogy suggests that tools and technology are morally neutral, and the responsibility for their use lies solely with human users. On the other hand, we now see some individuals proposing that AI should be credited for the discoveries it contributes to, particularly when these discoveries have positive outcomes. This stance attributes a level of agency and merit to AI systems that goes beyond viewing them as mere tools.

However, this raises an important question: can we logically maintain both of these positions simultaneously? If we accept that AI should receive credit for positive outcomes, it follows that we must also hold it accountable for negative consequences. This perspective would effectively personify technology, turning our machines into entities capable of both heroic and criminal acts. Taking this logic to its extreme, we might find ourselves in a future where we attempt to assign blame to everyday appliances for their perceived failures. For instance, we could see people trying to sue their toasters for burning their bread before the end of this decade. This scenario, while seemingly absurd, illustrates the potential pitfalls of attributing too much agency to our technological creations. It underscores the need for a nuanced and consistent approach to how we view the role of AI and other technologies in our society, particularly as they become increasingly sophisticated and integrated into our daily lives.

Recommendation: Establish an AI Ethics Committee
To get ahead of these issues, we recommend that organizations create a cross-functional AI Ethics Committee to oversee the ethical implications of AI use within the organization. This committee should:
- Evaluate AI projects and applications for potential ethical risks
- Develop and maintain ethical guidelines for AI development and deployment
- Provide guidance on complex AI-related ethical dilemmas
- Monitor emerging AI regulations and industry best practices
- Collaborate with legal and compliance teams to ensure AI use aligns with regulatory requirements
- Conduct regular audits of AI systems to identify and mitigate bias or other ethical concerns
- Advise on transparency and explainability measures for AI-driven decisions
- Foster a culture of responsible AI use throughout the organization

Lean Compliance now provides an online program designed to teach decision-makers how to make ethical decisions related to AI. This advanced course integrates the PLUS model for ethical decision-making. You can learn more about this program here.











