Intelligent Design for Intelligent Systems: Restoring Engineering Discipline in AI Development
- Raimund Laqua
- Aug 17, 2025 (updated Aug 28, 2025)

The Current Challenge
AI systems are increasingly deployed without the systematic design approaches that have proven effective in other engineering disciplines. Development teams often prioritize rapid deployment over comprehensive analysis of system behaviour and potential consequences, viewing detailed design work as an impediment to progress.
This approach has led to AI systems that exhibit unintended biases, perform poorly in edge cases, or create consequences that become apparent only after deployment. These issues typically stem not from poor intentions, but from the absence of established design practices that help engineers anticipate and address such problems systematically.
This represents a significant challenge for the engineering profession as AI systems take on increasingly critical roles in society.
The Divergence of Engineering Practices
The software industry's adoption of agile methodologies and rapid iteration cycles has brought valuable benefits in flexibility and responsiveness to changing requirements. However, these approaches have also shifted emphasis away from comprehensive upfront design toward emergent solutions that develop through iterative refinement.
This shift made sense for many consumer applications where failures result in minor inconvenience and rapid correction is possible. However, applying the same approach to AI systems that influence significant decisions—about loans, healthcare, employment, or criminal justice—may not be appropriate given the different risk profiles and consequences involved.
The gap between current AI development practices and established engineering design principles has widened precisely when AI applications have become more consequential. This divergence raises fundamental questions about professional standards and public responsibility.
Lessons from Engineering Disciplines

Other engineering fields offer instructive examples of how systematic design practices manage complexity and risk. These examples aren't perfect templates for AI, but they illustrate principles that could be adapted.
Process Engineering Excellence
Chemical engineers approach new processes through systematic analysis. They begin with fundamental principles—mass balances, energy balances, reaction kinetics, thermodynamics. Hazard analysis follows: What can go wrong? How likely is it? What are the consequences? Safety systems are designed to handle credible failure scenarios, control strategies are developed, and process flow diagrams are created. Only then does detailed engineering and construction begin.
This methodical approach doesn't guarantee perfection, but it systematically addresses known risks and creates documentation that helps future engineers understand design decisions. When problems arise, the design history provides context for effective troubleshooting and modification.
Medical Device Standards
The medical device industry operates under regulatory frameworks that require comprehensive design controls. Companies must demonstrate systematic design planning, establish clear requirements, perform risk analysis, and validate that devices meet their intended use. Design History Files document not just final specifications, but the reasoning behind design choices, alternatives considered, and risk assessments performed.
This documentation serves multiple purposes: regulatory compliance, quality assurance, and knowledge transfer. When devices perform unexpectedly or require modification, engineers can trace decisions back to their original rationale and assess the implications of changes.
Aerospace and Nuclear Precedents
High-consequence industries like aerospace and nuclear engineering demonstrate how design rigour scales with potential impact. Multiple design reviews, extensive analysis and simulation, redundant safety systems, and comprehensive documentation are standard practice. The principle of defence in depth ensures that no single failure leads to catastrophic outcomes.
These industries accept higher development costs and longer timelines because the consequences of failure justify the investment in thorough design. They've learned through experience that shortcuts in design often lead to much higher costs later.
The Unique Nature of AI Systems

AI systems present design challenges that both parallel and extend beyond those in traditional engineering. Understanding these characteristics is essential for developing appropriate design approaches.
AI systems exhibit emergent behaviours that can surprise even their creators. Unlike a chemical process whose behaviour follows predictable physical laws, AI systems learn patterns from data that may not be obvious to human designers. A trained model's decision-making process often remains opaque, making it difficult to predict behaviour in edge cases or novel situations.
This opacity doesn't excuse engineers from design responsibility—it demands more sophisticated approaches to understanding and validating system behaviour. Traditional testing methods may be insufficient for systems that can behave differently with each new dataset or operational context.
AI systems also evolve continuously. Traditional engineered systems are static once deployed, but AI systems often adapt their behaviour based on new data or feedback. This creates ongoing design challenges: How do teams maintain safety and reliability in systems that change their behaviour over time? How do they validate performance when the system itself is learning and adapting?
The societal implications of AI systems amplify these technical challenges. When AI systems influence medical diagnoses, financial decisions, or criminal justice outcomes, their effects ripple through communities and institutions. Design decisions that seem purely technical can have profound social consequences.
Design: The Missing Foundation

The software industry has developed a narrow view of what design means, often reducing it to user interface considerations or architectural patterns. True engineering design is a more fundamental process—the intellectual synthesis of requirements, constraints, and knowledge into coherent solutions.
Effective design involves creativity within constraints. It requires understanding problems deeply enough to anticipate how solutions might fail or create unintended consequences. It demands making explicit trade-offs rather than allowing them to emerge accidentally through implementation.
Design also involves systematic thinking about the entire system lifecycle. What happens when requirements change? How will the system behave in unexpected conditions? What knowledge must be preserved for future maintainers? These questions require deliberate consideration, not emergent discovery.
The absence of systematic design shows up predictably in AI projects: requirements that conflate technical capabilities with user needs, no systematic analysis of failure modes or edge cases, ad hoc validation approaches, unclear definitions of acceptable performance, and changes made without understanding their broader implications.
Toward Intelligent Design for AI
Developing design practices for AI systems requires adapting proven engineering principles while acknowledging the unique characteristics of intelligent systems. This adaptation represents an evolution of engineering practice, not a rejection of software development methodologies.
Adaptive Requirements Management: Traditional design assumes relatively stable requirements, but AI systems often operate in environments where requirements evolve with understanding. Design processes must accommodate this evolution while maintaining clear criteria for success and failure.
Systematic Behaviour Analysis: Since AI behaviour emerges from training data and algorithmic interactions, design must include systematic approaches to understanding and predicting system behaviour. This includes analyzing training data characteristics, assessing potential biases, and evaluating performance across diverse scenarios.
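In practice, behaviour analysis of this kind often reduces to measuring performance separately for each relevant subgroup and flagging disparities against an explicit acceptance criterion. The sketch below is a minimal, hypothetical Python example; the records, the accuracy metric, and the 10-point gap threshold are illustrative assumptions, not a prescribed method.

```python
# Hypothetical sketch of subgroup performance evaluation, one concrete
# form of systematic behaviour analysis. Data and threshold are
# illustrative assumptions only.

def accuracy_by_group(records):
    """Compute accuracy per subgroup from (group, prediction, label) records."""
    totals, correct = {}, {}
    for group, pred, label in records:
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def disparity_flags(acc_by_group, max_gap=0.10):
    """Flag subgroups whose accuracy trails the best-performing group
    by more than max_gap (an assumed acceptance criterion)."""
    best = max(acc_by_group.values())
    return {g: (best - a) > max_gap for g, a in acc_by_group.items()}

# Toy evaluation records: (subgroup, model prediction, ground truth).
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
acc = accuracy_by_group(records)    # A: 0.75, B: 0.50
flags = disparity_flags(acc)        # B is flagged: gap of 0.25 > 0.10
```

The point is not the specific metric but that the acceptance criterion is stated explicitly in advance, rather than discovered after deployment.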
Dynamic Validation Frameworks: Static validation is insufficient for systems that continue learning. Design must incorporate ongoing validation approaches that can detect when system behaviour drifts from acceptable parameters or when operating conditions exceed design assumptions.
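One way to make such drift detection concrete is a simple distribution-comparison statistic. The sketch below implements the Population Stability Index (PSI) in plain Python, comparing live model scores against the reference distribution captured at validation time; the bin count, the common 0.2 alert threshold, and the synthetic data are assumptions for illustration only.

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a reference sample and a live
    sample; values above ~0.2 are commonly treated as meaningful drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def histogram(xs):
        counts = [0] * bins
        for x in xs:
            idx = sum(x > e for e in edges)  # index of the bin x falls in
            counts[idx] += 1
        # Floor each proportion to avoid log(0) on empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Synthetic scores: uniform at validation time, shifted upward in production.
reference = [i / 100 for i in range(100)]
shifted = [min(1.0, 0.4 + i / 200) for i in range(100)]

baseline_psi = psi(reference, reference)  # zero: no drift against itself
drift_psi = psi(reference, shifted)       # well above 0.2: drift detected
```

Run periodically against production inputs or outputs, a check like this can signal when operating conditions have moved outside the design assumptions and revalidation is warranted.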
Living Documentation: Design documentation for AI systems must evolve with the systems themselves. This requires new approaches to capturing design rationale, tracking changes, and maintaining understanding of system behaviour over time.
Risk-Proportionate Processes: The level of design rigour should correspond to system impact and risk. Consumer recommendation systems warrant different treatment than medical diagnostic tools, but both require systematic approaches appropriate to their consequences.
Transparency by Design: While AI systems may be inherently complex, their design processes need not be opaque. Building in explainability, auditability, and interpretability from the beginning makes it easier to understand, validate, and maintain system behaviour.
These approaches don't slow development when implemented thoughtfully. Instead, they can accelerate progress by identifying issues early, building stakeholder confidence, and reducing costly failures that result from inadequate planning.
The Professional Obligation
Effective design practices represent more than technical improvements—they reflect professional responsibility. Engineers in Canada have an obligation to consider public welfare when developing systems that affect people's lives, careers, and opportunities.
The software industry's acceptance of post-deployment fixes may be adequate for many applications, but becomes problematic when applied to AI systems with significant societal impact. When AI systems influence medical treatments, criminal justice decisions, or economic opportunities, the traditional "patch it later" approach may not align with engineering ethics and professional standards.
This shift in perspective requires acknowledging that AI development has moved beyond the realm of experimental software into the domain of engineered systems with real-world consequences. With this transition come the professional obligations that engineers in other disciplines have long accepted.
Engineers need to consider what society expects when AI systems are deployed in critical applications. Each unaddressed bias, unanticipated failure mode, or unhandled edge case represents a choice about acceptable risk that deserves deliberate consideration rather than default acceptance.
Intelligent Design as Professional Practice
The engineering profession stands at a critical juncture. AI systems are becoming more capable and widespread, taking on roles that directly affect human welfare and social outcomes. The practices that guide their development will shape not only technological progress but also public trust in engineering expertise.
We need intelligent design practices that match the sophistication of the artificial intelligence we're creating. This means design approaches that can handle uncertainty and adaptation while maintaining safety and reliability. It means documentation that evolves with systems rather than becoming obsolete artifacts. It means validation approaches that continue throughout system lifecycles rather than ending at deployment.
The goal isn't to slow AI development with bureaucratic processes, but to accelerate responsible innovation through better engineering practices. Other engineering disciplines have learned that systematic design ultimately speeds development by preventing costly mistakes and building stakeholder confidence.
Developing these practices will require collaboration across multiple communities: AI researchers who understand algorithmic behaviour, software engineers who build production systems, domain experts who understand application contexts, and engineers from other disciplines who bring experience with systematic design.
The transformation won't happen overnight, but it can begin immediately with recognition that AI systems deserve the same thoughtful design consideration we apply to other engineered systems that affect public welfare. This means asking harder questions about requirements, spending more time analyzing potential failure modes, documenting design decisions more thoroughly, and validating performance more systematically.
A Professional Opportunity
The engineering profession has historically risen to meet new challenges by adapting its principles to emerging technologies. From mechanical systems to electrical power to chemical processes, engineers have learned to apply systematic design thinking to complex systems with significant societal impact.
AI represents the latest such challenge—and perhaps the most important. These systems will increasingly shape economic opportunities, healthcare outcomes, transportation safety, and social interactions. The design practices we establish now will influence how AI develops and how society experiences its benefits and risks.
This is fundamentally about professional identity and public responsibility. Engineers have always carried the obligation to consider the broader implications of their work. As AI systems become more powerful and pervasive, that obligation becomes more pressing, not less.
The question facing the profession isn't whether AI development should be subject to engineering design principles, but how those principles should evolve to address the unique characteristics and growing importance of intelligent systems. The answer will determine not only the technical trajectory of AI, but also whether the engineering profession continues to merit society's trust in an age of artificial intelligence.
We need intelligent design as much as we need artificial intelligence—perhaps more.
The two must develop together, each informing and strengthening the other, as we navigate toward a future where engineered intelligence serves human flourishing reliably, safely, and ethically.
About the author:
Raimund Laqua is a Professional Engineer, founder of Lean Compliance, co-founder of ProfessionalEngineers.AI, and AI Committee Chair for E4P.


