The Great Software Reset

How Enshittification, the Collapse of the Abstraction Stack, and AI Are Rewriting the Rules — and Why Governance Will Determine What Comes Next
Raimund (Ray) Laqua, P.Eng., PMP
Something is breaking, and something else is being born. I think we need to talk about both.
If you work in technology, or if your business depends on technology — which is to say, if you run a business — you’re caught between two forces that are about to reshape everything. One is tearing down the model we’ve relied on for decades. The other is building something we don’t fully understand yet. And the space between those two forces is where the most important decisions of the next decade will be made.
I want to walk through what I’m seeing. Not as a futurist making predictions, but as a computer engineer with over thirty years in heavily regulated industries — someone who has spent a career at the intersection of technology and operational governance. What I see concerns me. Not because change is coming, but because the implications are moving faster than our ability to manage them.
The Diagnosis: Enshittification
Cory Doctorow gave us the word, and it stuck because it’s accurate.
Enshittification describes the lifecycle that digital platforms follow with remarkable consistency. First, they’re good to users — generous, useful, even delightful — because they need to attract them. Then, once the users are locked in, the platform shifts value to business customers — advertisers, vendors, enterprise clients — because they need to attract them too. Then, once both sides are captive, the platform begins extracting all remaining value for itself. Features degrade. Prices rise. The experience hollows out. And everyone stays because the switching costs are too high.
We’ve watched this happen with Amazon, Facebook, Google, and countless SaaS platforms. It’s not a bug in the system. It’s the system working exactly as designed. The incentive structure of platform capitalism leads here inevitably.
If you’re a business leader, you already feel this. You’re paying more for software that does less for you. You’re locked into ecosystems that serve the vendor’s roadmap, not yours. You’re managing integrations between platforms that were designed to be sticky, not interoperable. And every year, the value you extract from these relationships diminishes while the cost increases.
That’s the diagnosis. The current model is failing. Not catastrophically, not all at once, but steadily and predictably. The question is what replaces it.
The Mechanism: The Collapse of the Abstraction Stack
For seventy years, software development has been built on layers of abstraction. Machine code gave way to assembly language. Assembly gave way to high-level languages. Those gave way to frameworks, platforms, orchestration layers, and cloud services. Each layer made it easier for humans to tell machines what to do, but each layer also added distance between the intent and the execution — and each layer became a place where someone could extract rent.
AI is now collapsing those layers.
We’re watching AI move toward writing directly for machine-level execution, skipping the programming language step entirely. AI is creating solutions directly — not writing code that a developer then compiles, tests, debugs, and deploys, but generating functional outcomes from specifications. The orchestration layers that human developers have built and maintained are becoming unnecessary, at least from a human software development perspective.
Think about what that means for the platform model. Every layer of the abstraction stack is a layer where a vendor can insert themselves, charge a fee, and create lock-in. The programming language ecosystem. The framework. The cloud platform. The CI/CD pipeline. The monitoring service. The SaaS application sitting on top. Each is a tollbooth.
If AI collapses those layers — if it can go from intent to execution directly — then it doesn’t just change how software is built. It removes the structural foundation that platform enshittification depends on. You can’t extract rent from a layer that no longer exists.
The Reset
This is where the two forces converge, and this is why I believe we’re looking at a genuine reset in software application development.
Enshittification creates the demand for a reset. Users and businesses are fed up, locked in, overcharged, and underserved. The existing model has exhausted its goodwill. People are ready for something different — they just haven’t had a viable alternative.
The collapse of the abstraction stack creates the supply. AI-driven bespoke generation means you no longer need the platform to get the solution. The intermediary layer — the SaaS vendors, the platform ecosystems, the app stores, the enterprise software companies extracting rent from captive customers — gets compressed or bypassed entirely.
We’re moving from mass-produced software toward bespoke, personal solutions generated on demand. The cloud, which was supposed to be the great centralizer, becomes instead the great personalizer — raw compute and capability that AI draws from to build whatever is needed, whenever it’s needed. Every business, potentially every person, runs on systems tailored precisely to their context.
The SaaS model — the dominant business model in technology for the past two decades — starts to look like a transitional artifact. Something we did because we hadn’t figured out something better yet. And the enshittification that Doctorow described wasn’t a corruption of that model. It was its natural endpoint.
The Danger: Trading One Problem for Another
Now here’s where my concern deepens. Because a reset doesn’t mean things get better automatically. It means the rules are being rewritten. And if we’re not thoughtful about how they’re rewritten, we could end up somewhere worse.
In a world where every business runs on bespoke AI-generated systems, you gain extraordinary customization. Every solution fits like a glove. Every workflow is optimized for the specific context it serves.
But you also lose something critical: standardization, interoperability, and the ability to look under the hood.
If no two systems are alike, how do they talk to each other? If the “code” was never written in a human-readable language, how do you audit it? If the AI generated a solution directly from a specification, and that solution is running your financial transactions, or monitoring your pipeline integrity, or managing your patient records — who verifies that it’s actually doing what it’s supposed to do?
Traditional software validation frameworks were built on a fundamental assumption: that there is human-readable code to inspect. Remove that assumption, and those frameworks collapse.
And here’s the deeper risk: if AI model providers become the new platforms, the same enshittification cycle could repeat at a more fundamental layer. Instead of being locked into a SaaS vendor’s ecosystem, you’re locked into a model provider’s infrastructure. Instead of opaque algorithms deciding what you see on social media, opaque AI systems are running your core business operations. The extraction doesn’t happen at the application layer anymore — it happens at the generation layer. And with even less transparency.
We don’t escape enshittification by collapsing the abstraction stack. We escape it by governing what comes next.
The Questions You Will Eventually Ask
This brings me to the practical reality that every business leader will face, whether they’re ready for it or not:
Are all your AI agents, AI systems, and AI-powered applications actually operating within the lines?
Are they aligned to your business goals — or just running and consuming power and money?
How much are you really spending, and what’s the expected return?
Will your business even be viable going forward, and what reengineering is needed to compete in an AI-powered world?
These aren’t hypothetical questions. They’re operational ones. And most organizations don’t have a framework to answer them — not because they’re negligent, but because the frameworks haven’t been built yet for this new reality.
You Can’t Govern Code That Doesn’t Exist
This is the insight I keep coming back to. If AI is generating solutions directly — bypassing human-readable code, bypassing traditional development pipelines — then you cannot govern these systems the way we’ve governed technology for the past half-century. Code review doesn’t work when there is no code to review. Static analysis doesn’t work when there is nothing static to analyze.
What does work is operational governance. Governing the behavior. Governing the outcomes. Governing the promises.
This is the discipline I’ve spent my career building. In the most heavily regulated industries — pharmaceuticals, medical devices, oil and gas, chemical processing, financial services, government — I’ve learned that compliance at its best is never about inspecting artifacts after the fact. It’s about building operational systems that ensure promises are kept in real time. Promises to regulators. Promises to customers. Promises to every stakeholder who depends on you doing what you said you’d do.
That same operational discipline is exactly what AI governance demands. And as the abstraction stack collapses and the reset unfolds, it may be the only governance that works.
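To make that concrete, here is a minimal sketch of what governing behaviour rather than code could look like, assuming the running system's behaviour can be observed through metrics. The promise names, metric keys, and thresholds below are hypothetical illustrations, not a reference implementation of any particular framework.

```python
from dataclasses import dataclass
from typing import Callable

# A minimal sketch of operational governance: declare the promises the system
# must keep, then check observed behaviour against them on every cycle.
# All promise names, metric keys, and thresholds are hypothetical illustrations.

@dataclass
class Promise:
    name: str
    check: Callable[[dict], bool]   # takes observed metrics, returns True if the promise is kept

promises = [
    Promise("Spend stays within the approved budget",
            lambda m: m["spend_this_month"] <= m["approved_budget"]),
    Promise("Every automated action has an audit record",
            lambda m: m["actions_without_audit_record"] == 0),
    Promise("Human review happens at the agreed rate",
            lambda m: m["human_review_rate"] >= 0.10),
]

def governance_cycle(observed: dict) -> list:
    """One pass of the feedback loop: list every promise currently broken."""
    return [p.name for p in promises if not p.check(observed)]

# Metrics come from monitoring the running system, not from reading source code.
observed = {"spend_this_month": 48_000, "approved_budget": 40_000,
            "actions_without_audit_record": 0, "human_review_rate": 0.08}

for broken in governance_cycle(observed):
    print("Broken promise:", broken)
```

Nothing in that loop depends on inspecting the generated system's internals. It depends on deciding, in advance, which behaviours count as kept promises, and instrumenting the operation so those behaviours can be measured continuously.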
Engineering Discipline in a Post-Code World
I’m a computer engineer by training. I understand the technology at a fundamental level — not just how to use it, but how it works, where it fails, and what it takes to make it reliable. I’m also a licensed Professional Engineer and a certified Project Management Professional, which means I bring engineering discipline and systems thinking to problems that many approach from either a pure technology perspective or a pure policy perspective.
That combination matters more now than it ever has.
In a world where anyone can generate a “solution” but nobody can inspect the internals, the question shifts from “does it work?” to “is it safe, reliable, and fit for purpose?” That is an engineering question, not a programming question. It requires the same rigour we apply to bridges, medical devices, and process plants — systems where failure has consequences.
This is why I serve on ISO’s ESG working group, sit on OSPE’s AI in Engineering committee, and chair the AI Committee for Engineers for the Profession, where we’re advocating for professional engineering standards in digital disciplines across Canada. The licensing and governance structures that protect the public in traditional engineering need to extend into the digital domain — and that need is becoming urgent.
The Energy and Economics Question
There’s another dimension to this reset that doesn’t get enough attention: the economics.
Traditional software follows a “build once, deploy many” model. You invest in development, and that investment scales across users and deployments. The marginal cost of serving one more customer is relatively low. This is the economic engine that made SaaS so attractive — and so profitable for vendors, even as it became less valuable for customers.
Bespoke, AI-generated solutions invert that model. Every solution consumes compute every time it’s generated. There is no “build once” efficiency. The economics shift from capital expenditure on software development to continuous operational expenditure on AI generation and execution.
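A rough back-of-the-envelope comparison shows why this matters. The figures below are purely illustrative assumptions, not benchmarks: a one-time build with modest hosting costs, versus a solution that is regenerated and executed as an ongoing operational expense.

```python
# Illustrative comparison of the two cost models.
# All figures are hypothetical assumptions, not benchmarks.

def built_once_total_cost(months, dev_cost=250_000, hosting_per_month=2_000):
    # "Build once, deploy many": one up-front investment, low marginal cost.
    return dev_cost + hosting_per_month * months

def generated_total_cost(months, generations_per_month=40,
                         cost_per_generation=150, runtime_per_month=3_000):
    # Bespoke AI generation: every regeneration and every run is an
    # operational expense; there is no one-time build to amortise.
    return (generations_per_month * cost_per_generation + runtime_per_month) * months

for months in (6, 12, 24, 36):
    print(f"{months:>2} months: build-once ${built_once_total_cost(months):>9,} "
          f"vs. generated ${generated_total_cost(months):>9,}")
```

Under these made-up numbers, the generated approach looks cheaper for the first couple of years and then keeps accruing while the built system's cost has largely been paid. The point is not the crossover date; it is that the answer now depends on continuous measurement rather than a one-time budget decision.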
The question I posed earlier — are your AI systems aligned to your goals, or just running and consuming power and money? — isn’t rhetorical. In this emerging model, it becomes the central business question. Without operational visibility into what your AI systems are actually doing, what they’re costing, and what value they’re returning, you’re flying blind in an increasingly expensive sky.
Cybernetics Comes Full Circle
For those who know my work, you’ll recognize the influence of W. Ross Ashby and the principles of cybernetics in how I approach governance. Ashby’s Law of Requisite Variety tells us that a system’s regulator must have at least as much variety as the system it governs. Simple rules cannot govern complex systems.
As AI systems become more complex, more dynamic, and more opaque, the governance mechanisms must match that complexity. You govern through constraints, feedback loops, and measured outcomes — not through reading source code or ticking compliance checklists. The cybernetic approach to governance, which might have seemed theoretical a few years ago, is becoming a practical necessity.
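Ashby's point can be made concrete with a toy model. In the sketch below (a hypothetical setup, not drawn from the article), the environment produces one of six disturbances, the regulator has only two responses, and the dynamics are such that, for a fixed response, different disturbances give different outcomes. No strategy can then hold the outcome to fewer than three distinct states: outcome variety cannot fall below disturbance variety divided by regulator variety.

```python
from itertools import product

# Toy illustration of Ashby's Law of Requisite Variety.
# Hypothetical setup: six possible disturbances, only two regulator responses.

disturbances = range(6)   # V_D = 6
responses = range(2)      # V_R = 2

def outcome(d, r):
    # Toy dynamics: for a fixed response, different disturbances give different
    # outcomes, so only the regulator's own variety can absorb variety.
    return (d + r) % 6

best = None
# Brute-force every possible strategy: a choice of response for each disturbance.
for strategy in product(responses, repeat=len(disturbances)):
    outcomes = {outcome(d, r) for d, r in zip(disturbances, strategy)}
    if best is None or len(outcomes) < len(best):
        best = outcomes

print(f"Disturbance variety V_D = {len(disturbances)}")
print(f"Regulator variety   V_R = {len(responses)}")
print(f"Best achievable outcome variety = {len(best)}")  # 3, i.e. V_D / V_R
```

The regulator simply does not have enough distinct moves to hold the system to a single goal state. The only remedies are more regulatory variety or less system variety, which is exactly the trade-off operational governance has to manage.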
And in the context of the reset, cybernetics offers something else: a way to prevent the next round of enshittification before it starts. If governance is built into the operational fabric from the beginning — if feedback loops and accountability mechanisms are structural, not afterthoughts — then the extraction playbook becomes much harder to run.
This is operational governance. This is what I do.
Building It Right This Time
The reset is real. Enshittification broke the trust. The collapse of the abstraction stack is providing the escape route. AI is rewriting the rules of how software is built, deployed, and consumed.
But a reset is not a guarantee of something better. It’s an opportunity. And opportunities are only as good as the discipline we bring to them.
The businesses that thrive in this new environment will be the ones that don’t just adopt AI, but govern it. That build operational visibility into their AI systems. That treat compliance not as a checkbox exercise but as a living discipline of keeping promises to the people who depend on them. That demand accountability from their AI infrastructure the same way they demand it from their physical infrastructure.
If you’re already asking the hard questions about your AI systems — about alignment, about spend, about viability, about what reengineering is needed to compete — then you’re ahead of most. And I’d welcome the conversation.
We have a rare chance to build something better. Let’s not waste it by repeating the same mistakes at a deeper layer.
The window between “we should figure this out” and “we should have figured this out” is closing.
Let’s not wait.
About the Author
Raimund (Ray) Laqua, P.Eng., PMP, is a computer engineer and the founder of Lean Compliance Consulting and co-founder of ProfessionalEngineers.AI. With over 30 years of experience across highly regulated industries, Ray specializes in operational AI governance and compliance. He serves on ISO’s ESG working group, OSPE’s AI in Engineering committee, and chairs the AI Committee for Engineers for the Profession (E4P), advocating for professional engineering standards in digital disciplines across Canada.