AI Will Figure It Out
- Raimund Laqua

That's the answer I hear when I ask organizations what work they're delegating to AI agents. Don't worry about defining the work. Don't worry about characterizing its complexity. The AI will sort it out. The end, by any means.
This sounds like progress. It is the abdication of governance. And no amount of forensic auditing can restore accountability that was never there to begin with.
Start with the work
This is why I've been drawing on Elliott Jaques' work on Requisite Organization. Jaques spent decades studying how accountability actually functions in organizations — not as policy or org charts, but as the structural relationship between a manager who delegates authority and the outcomes produced under that delegation.
His first principle is deceptively simple: start with the work, not the technology. What is the work that needs to be done? At what level of complexity? Over what time horizon? Does the agent have the capability to do it?
Most organizations deploying AI agents have not done this. They describe what the agent does — summarizes documents, answers queries, writes code — but not the complexity of what is being delegated. A document summary is defined-task work: structured inputs, verifiable output. A regulatory landscape analysis with strategic recommendations is qualitatively different work: synthesis across multiple domains, balancing competing factors, over an extended time horizon. The governance requirements for each are completely different. Treating them the same — because the same technology performs both — is where governance fails.
Not all AI agents require the same governance. An agent performing defined, verifiable tasks is not fundamentally different from any industrial automation — a robot that learned to pick up boxes has acquired a skill, not agency. Operational controls are sufficient. But an agent that plans, selects among alternatives, delegates to sub-agents, and evaluates outcomes against goals is exercising something that looks like discretion. If a human did that work, it would require managerial oversight, professional judgment, and organizational accountability. Governing it like automation is dangerously inadequate.
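To make the distinction concrete, here is a minimal sketch, in Python, of how an organization might classify delegated work before choosing controls. The names are illustrative assumptions, not Jaques' terminology or any standard framework:

```python
from dataclasses import dataclass
from enum import Enum

class WorkKind(Enum):
    DEFINED_TASK = "defined_task"    # structured inputs, verifiable output
    DISCRETIONARY = "discretionary"  # plans, chooses among alternatives, evaluates outcomes

@dataclass
class DelegatedWork:
    description: str
    kind: WorkKind
    time_horizon_days: int  # how long before the outcome can be judged

def required_governance(work: DelegatedWork) -> str:
    """Map the kind of work to the kind of oversight it needs."""
    if work.kind is WorkKind.DEFINED_TASK:
        # Verifiable output: operational controls (testing, monitoring) suffice.
        return "operational controls"
    # Discretion in play: a named human must be managerially accountable.
    return "managerial accountability"

summary = DelegatedWork("Summarize a contract", WorkKind.DEFINED_TASK, 1)
analysis = DelegatedWork("Regulatory landscape analysis with recommendations",
                         WorkKind.DISCRETIONARY, 90)
print(required_governance(summary))   # operational controls
print(required_governance(analysis))  # managerial accountability
```

The classification itself is trivial to implement; the failure described above is skipping it because both items run on the same model.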
The hard line
Only humans can be held accountable. That is not a limitation of current law that might change. It is a foundational principle of governance. Accountability requires the capacity to bear responsibility, to make restitution, to be answerable. Machines do not have this capacity. They never will, regardless of how capable they become.
This means that for every AI agent performing work that involves discretion, a human being must be managerially accountable for the outcome. Not ultimately accountable in some abstract, organizational sense. Directly accountable — because they delegated the authority under which the agent operates, and the agent's output is the product of their delegation.
Jaques is precise about what this requires. The manager is accountable for having selected an agent with the capability to do the work. Accountable for having set the context — not just a system prompt, but the organizational understanding of why this work matters and what constraints apply. Accountable for having defined the authority boundaries — what the agent may do, what it may not, when it must escalate. Accountable for maintaining sufficient awareness of the agent's operation. And accountable for the outcome — every decision, every action, every consequence.
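One way to picture that managerial work as something inspectable is a delegation record. The sketch below is hypothetical; every field name is an assumption for illustration, not a term from Jaques or from any existing governance standard:

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """One human manager's delegation of work to one AI agent."""
    accountable_manager: str        # a named person, not a committee
    agent_id: str
    capability_evidence: str        # why this agent was judged able to do the work
    context: str                    # why the work matters, what constraints apply
    permitted_actions: list[str]    # what the agent may do
    forbidden_actions: list[str]    # what it may not
    escalation_triggers: list[str]  # conditions under which a human must decide
    monitoring_plan: str            # how the manager stays aware of operation

triage = Delegation(
    accountable_manager="J. Rivera",  # hypothetical example throughout
    agent_id="credit-triage-agent",
    capability_evidence="back-tested on 12 months of human-reviewed applications",
    context="pre-sorts applications; assessments remain the officer's work product",
    permitted_actions=["rank applications", "flag missing documents"],
    forbidden_actions=["approve or deny", "contact applicants"],
    escalation_triggers=["low confidence", "policy exception requested"],
    monitoring_plan="manager samples 5% of rankings weekly",
)
```

Written this way, every field names a decision a specific human made; an empty field is an ungoverned delegation.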
The manager who says "I didn't review that particular agent decision" is not absolved. Jaques' response: you didn't need to review it if you set the boundaries correctly and maintained adequate monitoring. If you didn't, the failure is yours.
Responsibility is not accountability
This distinction matters, and most frameworks blur it. Jaques separates them precisely.
The agent is responsible for the work — for executing within its boundaries, to the standard expected. But the agent cannot be accountable, because accountability is a human relationship: answering to other humans for outcomes produced.
The manager is accountable for the outcome. And the manager is responsible for their own managerial work — the work of selecting, context-setting, boundary-defining, and monitoring. That managerial work is not administrative overhead. It is the governance itself. If the manager did it well, the agent operates within appropriate boundaries and the outcomes are governed. If the manager didn't, no amount of policy, committee approval, or compliance documentation compensates.
This applies equally to subordinates who use AI agents as tools. When a loan officer uses an AI agent to assess a loan application, the assessment is the loan officer's work product. Their name is on it. Their professional judgment backs it. They are responsible for the work — including their evaluation of the agent's output. Their manager is accountable for the loan officer's output, including the quality of how the loan officer uses AI tools.
The manager's work now includes a dimension that didn't exist before: ensuring that subordinates who use AI tools are still performing work at the level their role requires, still exercising the judgment their position demands, and not silently delegating their professional responsibility to an instrument. The danger is not that the person is replaced. It is that the person remains but their judgment is subtracted — hollowed out by an agent that does the processing while the human becomes an accountability placeholder, a name attached to machine output.
The doorman problem
A consultant walks into a hotel and sees a doorman. He tells the manager: we can replace the doorman with an automatic door opener. Save $30,000 a year.
The hotel takes the advice. The cost was reduced. So was the value. The doorman didn't just open doors. He recognized guests by name. He read situations — a visitor who didn't belong, a delivery that seemed wrong. He set the tone for the guest experience. He knew the neighborhood. None of this appeared in the cost-benefit analysis because none of it was the task being automated.
This is what happens when organizations use AI to compress strata, Jaques' term for the distinct levels of complexity at which work is done. The consultant, now the AI vendor or the transformation advisor, looks at a middle manager and sees information processing: aggregates reports, coordinates teams, relays decisions. That function is automatable. The business case writes itself.
But the manager wasn't just processing information. That was the visible task. What the manager actually did was exercise judgment at a level of complexity that included reading organizational dynamics, understanding why a policy exists and when it should bend, maintaining institutional knowledge, translating strategic intent into operational reality through dozens of contextual decisions that never appeared in any report.
The AI replaces the information processing. The organization loses the judgment. And the loss is invisible — until the regulatory violation the manager would have flagged, until the risk the manager would have caught, until the strategic misalignment the manager would have detected because they understood both the strategy above and the operations below.
Every AI business case is a doorman analysis. It quantifies the cost of the visible function. It cannot quantify the value of the judgment embedded in the same role. The decision to automate is made on incomplete information, every time, structurally.
What accelerates the problem
The entity you're integrating into your organization is not stable. It is changing faster than your governance can adapt. Foundation models update. Agent frameworks evolve. New capabilities appear — tool use, multi-agent coordination, persistent memory — that didn't exist when you wrote your governance plan. The agent you assessed in January is not the agent you're operating in June.
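One narrow, mechanical response to that drift is to snapshot the agent configuration that was actually assessed and flag any divergence for re-review. A hedged sketch, with assumed version strings and tool names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSnapshot:
    """The agent configuration a governance review actually assessed."""
    model_version: str
    tools: frozenset[str]  # capabilities available to the agent
    framework_version: str

def drift_detected(assessed: AgentSnapshot, deployed: AgentSnapshot) -> bool:
    """True if the running agent no longer matches what was assessed."""
    return assessed != deployed

approved = AgentSnapshot("llm-2025-01", frozenset({"search", "summarize"}), "1.4.0")
running = AgentSnapshot("llm-2025-06",
                        frozenset({"search", "summarize", "code_exec"}), "2.0.1")

if drift_detected(approved, running):
    # New model, new tool, new framework: the January assessment no longer applies.
    print("Re-assessment required before continued operation.")
```

This does not solve the governance problem; it only makes the gap between the agent you assessed and the agent you operate visible instead of silent.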
And unlike any previous technology adoption, AI is subtractive, not additive. The internet added a channel. It didn't replace the people doing the work. AI agents replace human judgment, human decision-making, human labor. That's the value proposition. When the agent replaces the human, the institutional knowledge walks out with the person. The organization can't easily reverse the substitution because the human capability no longer exists to reverse to.
Dan Davies, in The Unaccountability Machine, has a name for the system this produces: an accountability sink, a system that produces outcomes nobody specifically chose and nobody can be held accountable for. Every process was followed. Every policy was in place. Every approval was obtained. And nobody answers for the result, because the accountability was never operationally present. It was procedural. Procedural accountability without operational governance is a filing system, not a governance system.
The question
The question for every organization deploying AI agents is not whether they have policies, risk assessments, and committee approvals. It is whether a specific human being has done the managerial work — defining the task, assessing capability, setting boundaries, maintaining awareness — that makes accountability meaningful rather than procedural.