Why Your IT Playbook Won't Work for AI Systems
- Raimund Laqua

- Aug 30
- 5 min read

Organizational leadership faces a critical decision: apply familiar commodity IT approaches to AI development or invest in systematic design processes for fundamentally different technology. The wrong choice creates cascading risks that compound as AI systems learn and adapt in unpredictable ways.
A Fundamental Difference
Commodity IT succeeds with assembly and agile approaches because it works with predictable components that have stable behaviours and known interfaces. A database or API behaves consistently according to its specifications, making integration challenges manageable and testing straightforward. Development teams can iterate rapidly because outcomes are predictable and systems remain stable after deployment.
AI Systems violate every assumption that makes commodity IT approaches successful. These systems change behaviour over time through learning and adaptation, making their responses non-deterministic and their long-term behaviour unpredictable. Unlike traditional software that executes according to programmed logic, AI systems evolve their responses based on new data, environmental changes, and feedback mechanisms—creating fundamentally different engineering challenges.
Why Familiar Approaches Fail
Assembly Approaches appear to work initially but break down under real-world conditions. What looks like "assembling" pre-built AI components actually requires substantial custom engineering to handle behavioural consistency across model updates, performance monitoring as systems drift, bias detection and correction as they adapt, and compliance maintenance as behaviour evolves. The integration complexity is magnified because AI components can change their characteristics over time, breaking assumptions about stable interfaces.
Agile Limitations become apparent when dealing with systems that require extended observation periods to reveal their true behaviour. Traditional sprint cycles assume you can fully test and validate functionality within short time-frames, but AI systems may only reveal critical issues weeks or months after deployment as they learn from real-world data. The feedback loops that make agile effective in commodity IT don't work when system behaviour continues evolving in production.
Testing Assumptions fail because AI systems don't produce repeatable outputs from identical inputs. Traditional testing validates that a system behaves according to specifications, but AI systems are designed to adapt and change. Point-in-time validation becomes meaningless when the system you tested yesterday may behave differently today based on what it has learned. This requires fundamentally different approaches to verification and validation.
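To make this concrete, consider what a test even looks like for a non-deterministic system. The minimal Python sketch below is illustrative only: `predict` is a hypothetical stand-in for a model call, and the tolerance and sample-size values are placeholders a team would derive from actual requirements. The point is that verification shifts from asserting a single exact output to characterizing a distribution of outputs.

```python
import random
import statistics

def predict(prompt: str) -> float:
    """Hypothetical stand-in for a non-deterministic model call."""
    return 0.8 + random.gauss(0, 0.02)  # simulated score with run-to-run variance

def passes_tolerance(prompt: str, expected: float,
                     tolerance: float = 0.05, runs: int = 30) -> bool:
    """Validate the distribution of outputs, not a single exact value."""
    scores = [predict(prompt) for _ in range(runs)]
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    # Pass only when outputs cluster near the expected value with bounded variance.
    return abs(mean - expected) <= tolerance and spread <= tolerance

print(passes_tolerance("classify this ticket", expected=0.8))  # usually True
```

Even this is only a point-in-time check; it has to be re-run continuously, because the distribution itself moves as the system learns.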
The Engineering Necessity
AI's adaptive and unpredictable nature makes disciplined design processes essential. Organizations must develop new capabilities specifically designed to control and regulate AI technology.
Goal Boundaries must be explicitly designed to define what the system should optimize for and what constraints must never be violated. Without systematic design for acceptable learning parameters, AI systems can adapt in ways that conflict with business objectives, ethical requirements, or regulatory compliance.
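As a minimal sketch of the idea (all names here are hypothetical, not a prescribed design): the system optimizes an explicit objective, but only over actions that pass every hard constraint, so no amount of learning can trade a boundary away.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GoalBoundary:
    """One hard constraint the system must never violate (illustrative)."""
    name: str
    check: Callable[[dict], bool]  # returns True when an action is acceptable

@dataclass
class BoundedPolicy:
    objective: Callable[[dict], float]            # what the system optimizes for
    boundaries: list[GoalBoundary] = field(default_factory=list)

    def select(self, candidates: list[dict]) -> dict | None:
        """Optimize the objective only over actions inside every boundary."""
        allowed = [a for a in candidates
                   if all(b.check(a) for b in self.boundaries)]
        return max(allowed, key=self.objective, default=None)

# Hypothetical example: maximize revenue, but never exceed a risk ceiling.
policy = BoundedPolicy(
    objective=lambda a: a["expected_revenue"],
    boundaries=[GoalBoundary("risk_ceiling", lambda a: a["risk_score"] <= 0.2)],
)
print(policy.select([
    {"expected_revenue": 100, "risk_score": 0.4},  # higher reward, out of bounds
    {"expected_revenue": 80, "risk_score": 0.1},   # in-bounds action wins
]))
```

The design choice that matters is the separation: the objective can be tuned and re-tuned, but the boundaries sit outside the optimization loop.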
Behavioural Governance requires systematic approaches for monitoring, evaluating, and controlling AI system behaviour as it evolves. This includes creating capabilities for detecting when systems drift outside acceptable boundaries and designing interventions to correct problematic adaptations before they cause operational or compliance issues.
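One common building block for this kind of monitoring is a drift statistic compared against an agreed threshold. The sketch below uses the Population Stability Index (PSI), a widely used drift measure; the thresholds shown are conventional rules of thumb that a real deployment would calibrate to its own risk tolerance.

```python
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline window and a live window.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch closely, > 0.25 intervene."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def shares(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Smooth empty buckets so the log term below stays defined.
        return [(c if c else 0.5) / len(values) for c in counts]

    b, c = shares(baseline), shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Hypothetical usage: compare last quarter's model scores against this week's.
if psi(baseline=[0.1, 0.2, 0.3, 0.4, 0.5], current=[0.4, 0.5, 0.6, 0.7, 0.8]) > 0.25:
    print("Drift detected: route to review before the system adapts further")
```

The statistic itself is simple; the governance work is deciding who gets alerted, which interventions are authorized, and how quickly they must happen.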
Continuous Verification becomes essential because AI systems require ongoing monitoring rather than periodic validation. Organizations must build systems with comprehensive monitoring capabilities that track not just performance metrics but behavioural evolution, bias emergence, and compliance drift throughout the system life-cycle.
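Here is a minimal sketch of what "ongoing rather than periodic" looks like in practice, assuming hypothetical metric names and thresholds; in a real system, `collect_metrics` would pull from the serving stack and alerts would page a human rather than print.

```python
import time

# Hypothetical floors and ceilings; real values come from the risk assessment.
CHECKS = [
    (lambda m: m["accuracy"] >= 0.90, "accuracy below agreed floor"),
    (lambda m: m["bias_gap"] <= 0.05, "subgroup performance gap too wide"),
    (lambda m: m["drift_psi"] <= 0.25, "input distribution has drifted"),
]

def collect_metrics() -> dict:
    """Hypothetical stand-in for pulling live metrics from production."""
    return {"accuracy": 0.93, "bias_gap": 0.03, "drift_psi": 0.31}

def verification_pass() -> list[str]:
    metrics = collect_metrics()
    return [message for check, message in CHECKS if not check(metrics)]

while True:
    for alert in verification_pass():
        print(f"ALERT: {alert}")  # in practice: page on-call, open an incident
    time.sleep(3600)  # re-verify every hour, for the life of the system
```

The loop never terminates by design: validation becomes a property of operations, not a phase of delivery.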
Adaptation Management demands new processes for managing beneficial learning while preventing harmful evolution. This includes designing model versioning and rollback capabilities, creating human oversight mechanisms for critical adaptations, and building processes for systematic feedback and correction.
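As an illustrative sketch (names and gates here are hypothetical), a version registry can encode both requirements at once: no adaptation is promoted without passing a quality gate and receiving human sign-off, and a known-good predecessor is always available to roll back to.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelVersion:
    version: str
    eval_score: float
    approved_by: str | None = None  # human sign-off for critical adaptations

class ModelRegistry:
    """Minimal registry: gate promotions, keep a known-good rollback target."""

    def __init__(self) -> None:
        self._history: list[ModelVersion] = []

    def promote(self, candidate: ModelVersion, floor: float = 0.90) -> bool:
        # Reject adaptations that fail the quality gate or lack human oversight.
        if candidate.eval_score < floor or candidate.approved_by is None:
            return False
        self._history.append(candidate)
        return True

    def rollback(self) -> ModelVersion | None:
        """Retire the live version; return to the previous known-good one."""
        if len(self._history) < 2:
            return None
        self._history.pop()
        return self._history[-1]

registry = ModelRegistry()
registry.promote(ModelVersion("v1", eval_score=0.94, approved_by="mlops-lead"))
registry.promote(ModelVersion("v2", eval_score=0.95, approved_by="mlops-lead"))
print(registry.rollback())  # back to v1 if v2 misbehaves in production
```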
Strategic Implications
The choice between commodity IT approaches and AI engineering has profound strategic consequences that will determine organizational success in an AI-driven competitive landscape.
Competitive Risk emerges when organizations treat AI systems like traditional software. Ad-hoc approaches create operational risks that compound as systems evolve unpredictably, while engineered approaches enable organizations to deploy AI capabilities that adapt within controlled boundaries and provide sustainable competitive advantages through reliable performance.
Regulatory Exposure is amplified by AI's adaptive nature. The EU AI Act and emerging regulations specifically address systems that change behaviour over time, creating significant liability for non-compliant adaptive systems. Organizations using static approaches face unknown compliance gaps that multiply as their systems learn and evolve, while engineered design provides verifiable compliance and defensible audit trails.
Technical Debt Accumulation happens faster with AI systems because each quick implementation becomes a maintenance burden requiring specialized oversight. Ad-hoc AI deployments create knowledge silos and operational dependencies that become increasingly expensive to manage. Systematic approaches build reusable capabilities and organizational knowledge that compound value rather than costs.
Organizational Capability determines long-term success in AI deployment. The scarcity of AI talent makes internal capability development critical, but commodity IT approaches don't develop the specialized knowledge needed for managing adaptive systems. Systematic engineering approaches create organizational expertise in designing, deploying, and governing AI systems that becomes increasingly valuable as AI adoption scales.
Choose Wisely
Organizational leadership must choose between two fundamentally different approaches to AI development:
Continue with the IT Playbook: Apply familiar assembly and agile methods, hoping that AI components will integrate smoothly and that systems will learn beneficially without systematic oversight. This approach appears faster initially but creates compounding risks as systems adapt in unpredictable ways.
Invest in AI Engineering: Develop systematic design capabilities specifically for adaptive systems, creating controlled learning environments with proper governance, monitoring, and intervention capabilities. This approach requires upfront investment but builds sustainable AI capabilities with manageable risks.
Bottom Line
AI systems are not commodity IT with different algorithms—they are fundamentally different technology that requires the application of engineering methods and practice. The adaptive capability that makes AI powerful also makes design essential because these systems will continue changing behaviour throughout their operational life-cycle.
Organizations that continue applying familiar IT methods to AI will create operational risks, compliance gaps, and technical debt that become increasingly expensive to address as their systems scale and evolve. Those that invest in engineered approaches will build sustainable competitive advantages through reliable AI capabilities that adapt within controlled boundaries.
Skipping the engineering stage to accelerate AI adoption is not only unwise; it is a failure to exercise proper duty of care.
About the Author:
Raimund Laqua, P.Eng, is a professional computer engineer with over 30 years of expertise in high-risk and regulated industries, specializing in lean methodologies and operational compliance. He is the founder of Lean Compliance and co-founder of ProfessionalEngineers.AI, organizations dedicated to advancing engineering excellence.
As a Professional Digital/AI Engineering Advocate, Raimund champions proper licensure across the entire spectrum of digital engineering disciplines. He actively contributes to the profession through his leadership roles, serving as AI Committee Chair for Engineers for the Profession (E4P) and as a member of the Ontario Society of Professional Engineers (OSPE) working group on AI in Engineering, where he helps shape the future of professional engineering practice in the digital domain.


