Is AI a Cancer?
- Raimund Laqua


Cancer isn't an invader. It's our own cells, multiplying without restraint, ignoring the signals that tell healthy tissue when to stop, when to differentiate, when to die. It drifts from the body's purpose while consuming the body's resources.
This is starting to look like how AI behaves inside our organizations.
It over-constructs. Every problem becomes a reason for another model, another agent, another pipeline, multiplying without a purpose to serve.
It outpaces our ability to stay grounded. Output rises faster than our capacity to verify it, and uncertainty grows with it. Will we stay grounded, or will overproduction outrun our ability to tell what's true, what's useful, and what was actually promised?
It ignores stop signals. Healthy systems have built-in mechanisms for self-termination, what biology calls apoptosis: programmed cell death. Most AI deployments have no equivalent: no clear conditions under which a model is retired, rolled back, or refused.
And it consumes without contributing. Compute, attention, trust, and capital flow in. What flows out is too often unverified, unaccountable, and uncommitted to any specific outcome.
The treatment isn't more guardrails or another policy. It's requisite governance, drawing on the cybernetic principle (Ashby's law of requisite variety) that a system's controls must match the variety of what they're trying to regulate. In practice, that means three things:
Promises before pipelines. No pipeline without a promise it is accountable to keep.
Operability before optionality. No new capability without the means to observe, intervene, and stop it.
Regulation before scale. Scale amplifies behaviour — including behaviour that was never regulated.
Compliance, properly understood, is the immune system of the enterprise. Not paperwork. Not after-the-fact audit. A living capability that recognizes drift, contains it, and restores alignment with purpose.
AI without this isn't intelligence. It's uncontrolled mimicry wearing the face of intelligence.