
SEARCH

155 results found for "AI"

  • Paper Policies are Not Enough

    Why do we think that paper AI policies will be enough to handle AI risk? With AI’s ability to learn and adapt, we need measures that are also able to learn and adapt. That’s why a static, paper-based policy will never be enough to govern (i.e. regulate) the use of AI. Governance – the means of regulation – must be as capable as AI.

  • Compliance Must Be Intelligent

    AI Safety Labels: There is an idea floating around the internet and within some regulatory bodies that we should apply safety labels to AI systems, akin to pharmaceutical prescriptions. While well intended, this is misguided for a variety of reasons, namely AI’s adaptive nature. … In contrast, AI systems represent a new paradigm of adaptive intelligence. … An AI Regulatory Framework: Intelligent Compliance … Laws of AI Regulation for Compliance … Cybernetics pioneer …

  • Proceed, but Proceed with Caution

    When it comes to AI, many don’t want to hear about the risk. Some will go as far as claiming that AI creates no threat to humans or the world we live in. They say it’s only the people who use AI who create the risk. … AI technologies and their interactions with systems … AI can provide great benefits; however, it also brings with it significant risk.

  • Who Decides?

    How should the use of AI be governed when used in safety devices or as part of a safety component? Here are some of the ways AI has improved DSS: Automated Data Analysis: AI algorithms can automatically … Machine Learning: AI-enabled DSS can use machine learning algorithms to improve the accuracy of its … Advantages of Autonomous Decision-Making using AI: Speed and efficiency: AI can analyze data at a much … Disadvantages of Autonomous Decision-Making using AI: Lack of human oversight: AI systems can make decisions …

  • From Human to Machine: The Evolving Nature of Work in the Digital Age

    … continual mechanization of human work, now accelerated by the integration of Artificial Intelligence (AI) and Agentic AI. AI systems are being deployed to handle everything from customer service inquiries to complex data analysis … We now have AI systems to make decisions and perform work with far-reaching consequences without the … Approach AI and digital agents as tools to augment human wisdom, not replace it.

  • Have We Reached The End of Software Engineering?

    Unlike the commoditized world of cloud computing and agile development, AI systems need real engineering … engineering as the discipline that adapts engineering rigour to whatever digital paradigm emerges: AI … We stand at the threshold of the AI era. As a Professional Digital/AI Engineering Advocate, Raimund champions proper licensure across the entire … He actively contributes to the profession through his leadership roles, serving as AI Committee Chair …

  • The Need For Digital Twin Safety

    Alongside the benefits of Digital Twins, the integration of Artificial Intelligence (AI) introduces additional … AI algorithms enhance the analysis of data within digital twins, but they also introduce the risk of … Moreover, AI-driven decisions may be opaque, making it challenging to understand their rationale and … Additionally, complex AI models may lack interpretability, hindering decision-makers' ability to trust … AI algorithms drive autonomous actions within digital twins, enhancing efficiency and responsiveness.

  • Keep Humans In The Loop

    When it comes to AI, we must: Keep Humans In The Loop … When there is a chance of harm, the decision to … AI should not make ethical decisions for you. What steps can you take starting today to ensure your organization is responsible with its use of AI?

  • The Trinity of Trust: Monitoring, Observability, and Explainability in Modern Systems

    Understanding how systems behave—whether traditional software or advanced AI—has become essential not … For AI systems, monitoring extends to model performance metrics, prediction latency, and data drift detection … In AI contexts, observability encompasses the full model lifecycle—from data ingestion through training … In AI systems—where complex models often operate as black boxes—explainability techniques like SHAP … Imperative … As regulatory pressures intensify across industries—from GDPR's right to explanation to emerging AI …

  • Should Using ChatGPT Result in Loss of License to Practice?

    This incident has highlighted the limitations and risks associated with relying solely on AI-generated … While ChatGPT and other similar AI tools provide utility across various industries, including the legal … This incident not only raised questions about the accuracy of AI-generated content but also emphasized … The use of AI tools should never substitute proper legal research and verification. … legislation to regulate the use of AI where public safety may be at risk.

  • Operational Compliance

    This is never more important than now when it comes to the use of Artificial Intelligence (AI). If organizations want to steer away from harms associated with the use of AI in their value chain, they must explicitly state their objectives for the responsible use of AI.

  • How to perform Gemba Walks for the Information Factory

    … processing (removal of waste), data lakes, machine learning, and other forms of artificial intelligence (AI) … We don’t think with our heads; we think with AI. And for that we need algorithms and AI where the rules are transparent and explainable for people to … Don’t only think with your AI, think with your head.
