
Is This The Best GRC Has To Offer?




I just attended a webinar from a leading GRC vendor promoting continuous risk assessment for AI. The topic seemed timely and the solution promising, so I gave it my full attention.


What I heard: AI introduces significant risk across organizations and within every functional silo.


Fair enough.


⚡ The pitch: With all this risk, you need a system to manage it comprehensively.


OK.


What they demonstrated was little more than a risk register combined with task management—where tasks are defined as regulatory requirements, framework objectives, and controls tagged with risk scores. The only novel feature was hierarchical task representation.


Everything else was standard fare, complete with the obligatory heat map.


⚡ Not Understanding AI Risk


Risk was presented as the typical likelihood × severity calculation. They attempted risk aggregation as well, but here's the issue: you can't simply add up risk scores and average them.


Risk is stochastic. Proper aggregation requires techniques like Monte Carlo simulation across probability density functions for each risk.
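
As a minimal sketch of what that looks like in practice, assume a toy register where each risk has a Poisson event frequency and a lognormal loss severity. Every name and number below is a hypothetical placeholder, not an estimate:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy risk register: annual event frequency (Poisson lambda) and
# loss-severity distribution (lognormal mu/sigma) per risk.
# All values are illustrative placeholders.
risks = [
    {"name": "model hallucination harm", "lam": 4.0, "mu": 9.0,  "sigma": 1.2},
    {"name": "training data leakage",    "lam": 0.5, "mu": 12.0, "sigma": 0.8},
    {"name": "regulatory breach",        "lam": 0.2, "mu": 13.5, "sigma": 0.6},
]

TRIALS = 10_000
total_loss = np.zeros(TRIALS)

for r in risks:
    # How many loss events occur in each simulated year
    counts = rng.poisson(r["lam"], TRIALS)
    # Sum that year's severities, per trial
    total_loss += np.array(
        [rng.lognormal(r["mu"], r["sigma"], n).sum() for n in counts]
    )

# The aggregate is a distribution, not one averaged score
print(f"mean annual loss: {total_loss.mean():,.0f}")
print(f"95th percentile:  {np.percentile(total_loss, 95):,.0f}")
print(f"99th percentile:  {np.percentile(total_loss, 99):,.0f}")
```

The 99th percentile sits far above the mean; that tail is precisely what averaging likelihood × severity scores erases.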


It's even better when you understand how risk-bearing elements interact, enabling you to evaluate how risk propagates through the system.
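
Here's a toy illustration of that point, assuming a hypothetical dependency graph in which an upstream failure raises the conditional failure probability of everything downstream. The components and probabilities are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

# name -> (base failure prob, upstream dependencies,
#          failure prob given any upstream failure)
components = {
    "llm_gateway":   (0.020, [],               0.00),
    "rag_pipeline":  (0.010, ["llm_gateway"],  0.60),
    "customer_bot":  (0.010, ["rag_pipeline"], 0.80),
    "audit_logging": (0.005, ["llm_gateway"],  0.30),
}

TRIALS = 20_000
fail_counts = dict.fromkeys(components, 0)

for _ in range(TRIALS):
    failed = set()
    # Components are listed upstream-first, so one pass suffices
    for name, (base_p, deps, cond_p) in components.items():
        p = cond_p if any(d in failed for d in deps) else base_p
        if rng.random() < p:
            failed.add(name)
    for name in failed:
        fail_counts[name] += 1

for name, count in fail_counts.items():
    print(f"{name:14s} simulated failure prob: {count / TRIALS:.3f}")
```

Each dependent component's simulated failure rate lands well above its standalone estimate; that interaction effect is invisible in a row-by-row register.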


The bottom line: This was traditional (and basic) risk management applied to AI—and done poorly. 


The promise of continuous risk assessment tied to AI was not delivered.


⚡ What AI Risk Actually Requires


If this represents the best that GRC can offer for AI, we're in deep trouble.


With effectively infinite possible inputs and outputs, generative AI is better described as an organizational hazard than as a foundation for stable, predictable performance.


We need:


  • Real-time controls, monitoring, and assessments

  • Managed risk, not just bigger risk management databases


And we need all of this to be operational.
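
To make "operational" concrete, here is a minimal sketch of a control that runs in-line with every model call, blocks on a policy violation, and streams an assessment event as it happens. The pattern check and event sink are hypothetical stand-ins for real policy and monitoring infrastructure:

```python
import json
import re
import time

# Hypothetical policy: block outputs containing SSN-like strings
BLOCKED_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]

def emit_risk_event(event: dict) -> None:
    # Stand-in sink: in practice, stream to your monitoring system
    print(json.dumps(event))

def guarded_call(model_fn, prompt: str) -> str:
    output = model_fn(prompt)
    violations = [p.pattern for p in BLOCKED_PATTERNS if p.search(output)]
    # The assessment happens at call time, not at audit time
    emit_risk_event({
        "ts": time.time(),
        "control": "output_pattern_guard",
        "blocked": bool(violations),
        "violations": violations,
    })
    return "[response withheld by policy control]" if violations else output

# Usage with a stand-in model function:
print(guarded_call(lambda p: f"echo: {p}", "hello"))
```

Crude as it is, this control acts and reports in real time; a register entry does neither.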


⚡ Learning From Other Risk Domains


Perhaps we should adopt risk measures and methods from high-hazard sectors:


  • Hazard isolation

  • HAZOP (hazard and operability) studies

  • Functional and process safety approaches

  • STAMP/STPA/CAST systems-theoretic analysis

  • Cybernetic regulation

  • And others


Regardless of methodology, we need advanced software engineered for adaptive real-time systems—not yesterday's tools repackaged.


The alternative? What many companies are doing now: buying bigger databases to track all the new risks they've created by deploying AI.


We can—and must—do better.


If you're looking to effectively contend with AI risk within your organization—beyond heat maps and risk registers—let's talk. I work with organizations to build operational approaches that actually manage hazards in real time, not just document them.
