The New Face of AI Assurance: Why Audits and Certifications Are Not Enough
- Raimund Laqua
- May 19
- 2 min read
AI Assurance isn't just about checking boxes before deployment. As the European Defence Agency shows us, it's now a continuous journey involving rigorous engineering and real-time monitoring. With today's AI systems we simply can't predict everything in advance, so we need to stay vigilant while they're running in the real world. This shift is especially crucial in high-risk, mission-critical applications where failure isn't an option.
In its paper “Trustworthiness for AI in Defence”, the European Defence Agency (EDA) discusses the difference between Development Assurance and Runtime Assurance.
⚡️ Development Assurance:
“Traditionally in system engineering (including software and hardware), the term assurance defines the planned and systematic actions necessary to provide confidence and evidence that a system or a product satisfies given requirements. A process is needed which establishes levels of confidence that development errors that can cause or contribute to identified failure conditions (feared events defined by a safety/security/human factor assessment) have been minimized with an appropriate level of rigor. This henceforth is referred to as the development assurance process.”
⚡️ Runtime Assurance:
“When the system is deployed in service, runtime assurance refers to a set of techniques and mechanisms designed to ensure that a system behaves correctly during its execution. This involves monitoring the system's behaviour in real-time and taking predefined actions to correct or mitigate any deviations from its expected performance, safety, or security requirements. Runtime assurance can be particularly important in critical and/or autonomous … systems where failures could lead to significant harm or loss.”
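To make the runtime side concrete, here is a minimal sketch of a monitor-and-fallback loop in Python. It is an illustration under my own assumptions, not anything prescribed by the EDA paper: the controller, envelope thresholds, and fallback behaviour are all hypothetical stand-ins. An untrusted AI controller proposes an action, a monitor checks it against predefined requirements, and a trusted fallback takes over on any deviation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a runtime assurance wrapper: an untrusted AI
# controller is monitored against an explicit safety envelope, and a
# trusted fallback takes over whenever the envelope is violated.
# All names and thresholds are illustrative assumptions, not from the EDA paper.

@dataclass
class Decision:
    action: float       # e.g., a commanded setpoint
    confidence: float   # the model's self-reported confidence, 0..1

def safe_fallback(observation: float) -> Decision:
    """Trusted, conservative behaviour used when the monitor intervenes."""
    return Decision(action=0.0, confidence=1.0)

def within_envelope(decision: Decision, observation: float) -> bool:
    """Predefined checks on the 'expected performance, safety, or security
    requirements' that the EDA definition says must be monitored at runtime."""
    return (
        abs(decision.action) <= 1.0      # actuation limit
        and decision.confidence >= 0.7   # minimum model confidence
        and abs(observation) <= 10.0     # operating-domain bound
    )

def assured_step(
    ai_controller: Callable[[float], Decision],
    observation: float,
) -> Decision:
    """One control step: run the AI, monitor its output in real time, and
    mitigate deviations by switching to the fallback."""
    proposed = ai_controller(observation)
    if within_envelope(proposed, observation):
        return proposed
    return safe_fallback(observation)    # predefined corrective action

if __name__ == "__main__":
    # A stand-in "AI" whose output drifts out of bounds on large inputs.
    flaky_model = lambda obs: Decision(action=obs * 0.5, confidence=0.9)
    for obs in (0.4, 1.2, 5.0):
        d = assured_step(flaky_model, obs)
        print(f"obs={obs}: action={d.action}")
```

This pattern is often described as a simplex architecture: the safety argument rests on the simple, verifiable monitor and fallback rather than on the complex AI component itself.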
The evolution of the balance between development assurance and runtime assurance is shown in the following figure:

[Figure: the balance shifting from predominantly development assurance toward greater runtime assurance as AI and autonomy capabilities are introduced]
The introduction of AI technologies and autonomy capabilities has tipped this balance towards greater runtime assurance, because comprehensive a priori development assurance becomes increasingly challenging for systems whose behaviour cannot be fully characterized before deployment.
These same definitions carry over to AI assurance in commercial applications, particularly high-risk, mission-critical ones. AI Assurance involves (a short sketch after the list shows how the layers fit together):
- planned and systematic actions necessary to provide adequate confidence and evidence that the AI system satisfies its intended function (System Assurance)
- a process to establish levels of confidence that design and development errors (risks) have been minimized with an appropriate level of rigour (Development Assurance)
- a set of techniques and mechanisms designed to ensure the system behaves correctly during its execution (Operational Assurance)
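As a rough illustration of how the development and operational layers relate, the sketch below reuses a single explicit requirement in both places: offline, to build pre-deployment confidence, and in service, as a runtime check. The requirement, model, and data are hypothetical stand-ins of my own, not drawn from the EDA paper.

```python
# Hypothetical sketch: one explicit requirement shared by the two layers,
# checked offline for development assurance and re-checked in service for
# operational assurance. Requirement, model, and data are illustrative.

def meets_requirement(prediction: float, reference: float) -> bool:
    """The intended function: predictions stay within tolerance of a reference."""
    return abs(prediction - reference) <= 0.1

def offline_evaluation(model, eval_set) -> float:
    """Development assurance: build pre-deployment confidence by measuring
    how often the requirement holds on a held-out evaluation set."""
    passed = sum(meets_requirement(model(x), y) for x, y in eval_set)
    return passed / len(eval_set)

def runtime_check(model, x, reference_estimate: float) -> bool:
    """Operational assurance: re-check the same requirement during execution,
    here against a redundant reference estimate, and flag deviations."""
    return meets_requirement(model(x), reference_estimate)

if __name__ == "__main__":
    model = lambda x: 2.0 * x                      # stand-in trained model
    eval_set = [(0.0, 0.0), (1.0, 2.05), (2.0, 4.3)]
    print(f"offline pass rate: {offline_evaluation(model, eval_set):.0%}")
    print(f"runtime check ok:  {runtime_check(model, 1.0, 2.02)}")
```

The design point is that both layers check the same requirement; what changes is when it is checked and against what evidence.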
The paper is available here: https://eda.europa.eu/docs/default-source/brochures/taid-white-paper-final-09052025.pdf