
A Safety Model for AI Systems

As a framework, I think Nancy Leveson’s hierarchical safety model, which builds on Jens Rasmussen’s socio-technical risk hierarchy, offers the right level of analysis to advance the discussion about responsible and safe AI systems.


Leveson is a professor at MIT and the creator of STAMP (System-Theoretic Accident Model and Processes) and its associated analysis method, STPA (System-Theoretic Process Analysis) — a systems approach to risk management. In a nutshell, instead of thinking about risk only in terms of threats and impacts, she suggests we treat systems as containing hazardous processes that create the conditions for risk to manifest and propagate. This holistic approach is used in aerospace and other high-risk domains.
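To make that shift concrete, here is a minimal sketch, in Python, of how an STPA-style analysis might be recorded: losses, the hazards that can lead to them, and the unsafe control actions that create those hazards. The scenario and all names (DeploymentPipeline, HumanReviewer, and so on) are my own hypothetical illustration, not Leveson's notation or any official STPA tooling.

```python
from dataclasses import dataclass


# Minimal STPA-style vocabulary: losses, hazards, and unsafe control actions.
@dataclass(frozen=True)
class Loss:
    id: str
    description: str


@dataclass(frozen=True)
class Hazard:
    id: str
    description: str
    leads_to: tuple[str, ...]  # ids of the losses this hazard can produce


@dataclass(frozen=True)
class UnsafeControlAction:
    controller: str   # which part of the system issues the action
    action: str
    context: str      # the condition under which the action becomes hazardous
    hazard_id: str


# Hypothetical example: an AI model deployed behind a human-review gate.
losses = [Loss("L1", "Customer harmed by an incorrect automated decision")]
hazards = [Hazard("H1", "Model output acted on without required review", ("L1",))]
ucas = [
    UnsafeControlAction(
        controller="DeploymentPipeline",
        action="promote model to production",
        context="evaluation suite not run on current data distribution",
        hazard_id="H1",
    ),
    UnsafeControlAction(
        controller="HumanReviewer",
        action="approve output",
        context="review queue overloaded, approvals rubber-stamped",
        hazard_id="H1",
    ),
]

# Trace each unsafe control action back to the losses it can cause.
hazard_index = {h.id: h for h in hazards}
for uca in ucas:
    hazard = hazard_index[uca.hazard_id]
    print(f"{uca.controller} / '{uca.action}' when {uca.context}")
    print(f"  -> {hazard.id}: {hazard.description} -> losses {hazard.leads_to}")
```

The point of the structure is that risk is traced through the control relationships of the whole system — pipelines and human reviewers included — rather than pinned on the model alone.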


The following diagram is a slightly modified version of her model, outlining engineering activities across system design/analysis and system operations. It also shows where government, regulators, and corporate policy intersect, which is critical to staying between the lines and ahead of risk.


At this level of analysis we are talking about AI systems (i.e., engineered systems), not systems that merely use AI technology (embedded AI). However, the framework could be extended to support the latter.


A key takeaway is that AI engineering must ensure responsible and safe design and practice across the entire socio-technical system, not just the AI technology itself. This is where professional AI engineers are most helpful and needed.


I’d be interested to hear your thoughts on this …


