The Capability Debt

March 2026

Or: Why the Question Is the Last Thing That Cannot Be Delegated


1. Der Strich an der Tafel

My father Mathias studied water management engineering at the University of Magdeburg in the early 1970s. This was the DDR — the German Democratic Republic — where universities were institutions of the state, professors held considerable formal authority, and students who stepped out of line were reminded of it in ways that could follow them for years.

One semester, a professor in his department imposed a dormitory curfew. The reasons were never made entirely clear. Mathias, then in his early twenties and not naturally inclined toward quiet compliance, violated it. The professor found out.

He did not shout. He did not file a disciplinary report. He told Mathias to come to the lecture hall the following morning and stand at the blackboard.

Malen Sie einen Strich, he said. Draw a line.

Mathias picked up the chalk and drew a line. Vertical.

Not vertical. Horizontal.

He erased it and drew another.

Shorter.

He made it shorter.

Not there. Lower.

An hour passed like this. Mathias drew. The professor corrected. There was no explanation of what the line was for, or where it needed to go, or what it was supposed to represent. Just the instruction, and the correction, and the chalk dust settling into the grooves of the floor. When it was over, the professor dismissed him without comment.

Mathias walked out of that room convinced of one thing: he had been subjected to a petty, bureaucratic punishment by a man who had the power to waste his time without explanation.

He carried that interpretation for fifty years.


On a Sunday morning not long ago, we were sitting at the breakfast table. He told me the story — not for the first time, but something in the telling felt different. He was recounting it with the mild, unresolved irritation of someone describing a score that was never settled, a slight that could not be properly answered because the man who delivered it was long dead and the system that enabled him had ceased to exist.

I listened. And when he finished, I said: “Papa, I think the professor was trying to teach you something.”

He looked at me with the patience of a man who has heard his children misread situations before.

“You were all going to be engineers,” I said. “Water management, civil infrastructure, hydrology. And in those fields — in any engineering field — when someone tells you to draw a line, the correct response is not to pick up the chalk. The correct response is to ask: In what thickness? How long? At what angle? From where to where? Under what load? For what purpose? The engineer asks for the specification before a single piece of chalk touches the board. Your professor stood at that blackboard for an hour, waiting for you to ask the question. Not just to do as you were told.”

Mathias was quiet for a moment. He set down his coffee cup.

“I thought he was punishing me,” he said.

“Maybe he was also teaching you. The two are not mutually exclusive.”

He paused. “Felix,” he said slowly, “I think you might be right.”

Fifty years. An hour at a blackboard. A question that was never asked.


I am not telling this story to rehabilitate a professor who may well have been a petty bureaucrat extracting satisfaction from a student he disliked. I am not telling it to position myself as the one who finally cracked the meaning. I am telling it because the gap between an instruction and a specification — between draw a line and draw a 2mm horizontal line at 40cm from the upper edge, indicating the projected groundwater level under load — is not a small gap. It is the entire discipline of engineering, compressed into a single moment.

The professor was not testing Mathias’s obedience. He was testing whether Mathias understood that obedience without specification is not engineering. It is performance. The line drawn in response to draw a line is not a professional deliverable. It is a gesture. It fills the silence. It satisfies no technical requirement, answers no design question, solves no problem. It only demonstrates that someone complied.

Engineers are not supposed to simply comply. They are supposed to ask.

And what I have been turning over in my mind since that Sunday morning is this: the distinction the professor tried to teach — a principle that should be the bedrock of any discipline building things other people must trust — is exactly what our industry has most systematically eradicated.

Fifty years after Mathias stood at that blackboard, the chalk is digital. The blackboard is a project management system. The lecture hall is an open-plan office. And the professor’s hour-long exercise has been replaced by a five-second ticket creation:


Ticket-ID 4092. Title: Kafka Export. Description: As a system, I want to push a JSON payload to the Kafka topic.


The student is still just drawing the line.


2. The Requirements Engineer

In the early spring of 2013, about forty years after my father drew his line, I sat in a lecture hall in Heilbronn. Another professor. Another instruction.

The course was called Software Development Lab — a semester-long simulation in which three teams of six would each form a fictional consulting firm and build a warehouse management system from scratch. We would define requirements, design architecture, divide the work, and deliver. The professor would evaluate not just the output, but the process.

I had been writing software since I was fourteen. By the time this course began, I had years of practical experience — not academic, not theoretical, but the kind that comes from building things that had to actually work. I had seen what happens when requirements are written by people who do not understand what those requirements will produce. I had watched projects collapse not because the developers were incompetent, but because nobody had asked what the system actually needed to do before the first line of code was written.

So when the professor outlined his methodology, I found myself in disagreement. Not with the goal. With the process. The way requirements were being collected, organized, and handed to development teams would, in my estimation, produce exactly the kind of ambiguous, contradictory specifications that turn capable engineers into expensive guessers.

I did not raise my hand. I did not challenge him in front of the group. I sent him an email. Then another. I made the case on technical grounds, methodically, without drama.

He wrote back: Herr Radzanowski, übertreiben Sie es nicht. Ich sitze am längeren Hebel.

Mr. Radzanowski, do not push this. I hold the longer lever.

I kept pushing.

At the end of the semester, our team delivered a presentation that was, by the professor’s own assessment, the cleanest and most structurally sound submission in that year’s cohort. He stood in front of the room and said: Herr Radzanowski, wenn jemand eine 1,0 verdient hat, dann Sie.

Mr. Radzanowski, if anyone deserves a 1.0, it is you.

I had refused to just draw the line.


I tell this story not as a victory narrative. The grade is not the point.

The point is what happened between the first email and the final presentation: the friction. The professor who did not want to be questioned. The institutional resistance to anyone who slows down the process by asking why. The quiet pressure to just draw the line and move on, because the schedule does not accommodate epistemological discipline.

This friction is not a feature of one professor’s personality. It is structural. In almost every organization I have encountered since, the person who asks for the specification before starting work is experienced as an obstruction. Not because their question is wrong — it is almost always right — but because the question reveals that the work cannot yet begin. And acknowledging that is expensive. It means postponing the feeling of progress. It means admitting that the brief was incomplete. It means accepting that the time saved by skipping the question will be borrowed, with interest, from the delivery.

The system does not reward this kind of honesty. In my case, the professor eventually rewarded it. In later years, in larger organizations, the same behavior would be catalogued differently — not as rigor, but as poor stakeholder management. Not as engineering, but as friction that could not be productively channeled. The grade changes. The pattern does not.


Four years later, I found myself trying to scale that exact same friction across an entire enterprise.

In early 2017, I joined what was then a small digital commerce division of a major European retailer. Fourteen people. I was number fourteen. The ambitions were large: build a scalable e-commerce platform capable of supporting expansion across multiple countries, integrate with dozens of backend systems, and do it without the organizational dysfunction that had derailed every previous attempt. The leadership understood, early, that the bottleneck was not developer capacity. It was specification quality.

You cannot build a distributed system on ambiguous requirements. Every piece of ambiguity in the initial specification becomes a divergence point downstream. One team implements a boundary condition one way. Another team implements it differently. Six months later, when the systems need to talk to each other, they speak different dialects. The integration cost is not linear — it compounds. The gap between what was intended and what was built grows silently, until it becomes a crisis that gets labelled a technical problem by people who would rather not examine the requirements that produced it.

We had seen this happen. We were determined not to repeat it.

The role we created was called Requirements Engineer. Not Product Owner. Not Business Analyst. Not Consultant. Engineer. The word was deliberate.

Every Requirements Engineer in the organization had to understand what their requirements would produce in code. Not write the code — but understand it. They needed to know what an HTTP verb was, and why the choice between GET and POST mattered for state management. They needed to understand what an entity was, and what relationships between entities implied for database design. They needed to be able to read a data model and recognize when two requirements were asking the system to do contradictory things.

Most importantly, they needed to know how to write a scenario. Not a story. A scenario. The difference is the difference between “as a user, I want to see my order history” and “given a customer who has placed three orders in the past ninety days, two of which have been delivered and one of which is in transit, when they navigate to the order history view, then the system must display all three orders in reverse chronological order, with the in-transit order showing its last known logistics status.”

The first is a wish. The second is a contract. Hand the first to ten different development teams, and you will get ten different architectures. Hand them the second, and you get one correct implementation.
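
To make the contrast concrete, here is a minimal sketch of that order-history scenario as an executable contract, written in Python. Every name in it (the Order type, the field names, the test convention) is invented for illustration; the point is only that a scenario, unlike a story, can be evaluated by a machine.

    # Hypothetical sketch: the scenario above as an executable contract.
    # All names are invented; only the logic of the scenario is from the text.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Order:
        placed_at: datetime
        status: str                 # "delivered" or "in_transit"
        logistics_status: str = ""  # last known carrier status, if in transit

    def order_history(orders):
        # The contract: all orders, newest first.
        return sorted(orders, key=lambda o: o.placed_at, reverse=True)

    def test_order_history_scenario():
        now = datetime.now()
        # Given: three orders in the past ninety days,
        # two delivered and one in transit
        orders = [
            Order(now - timedelta(days=60), "delivered"),
            Order(now - timedelta(days=30), "delivered"),
            Order(now - timedelta(days=5), "in_transit", "at regional hub"),
        ]
        # When: the customer opens the order history view
        view = order_history(orders)
        # Then: all three orders, in reverse chronological order, with the
        # in-transit order carrying its last known logistics status
        assert [o.placed_at for o in view] == sorted(
            (o.placed_at for o in orders), reverse=True)
        assert view[0].status == "in_transit"
        assert view[0].logistics_status != ""

A story cannot fail. A scenario can, and that is its entire value: the boundary conditions stop being the implementing team's to guess.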

This was not a technical distinction. It was an engineering discipline. And like all engineering disciplines, it had to be taught.


The onboarding program we built ran for three weeks. The first week was general: how does this organization work, end to end? How does a product travel from the buying desk to the customer’s door and back again? Where are the system boundaries? What does the data look like at each handoff? Six hours a day. Every new joiner, regardless of role.

The second week was role-specific. Developers went deep into the core systems. Requirements Engineers went deep into scenario writing, entity modeling, and the protocols for resolving contradictions before they reached the codebase. Quality Assurance went into test strategy — not “how to write test cases,” but how to think about coverage, and how to recognize when a test suite provides false confidence.

The third week was in the team.

At the peak of our growth, we were onboarding thirty to forty people per month across multiple locations. The Director of Engineering at the time ran the onboarding program not as an HR formality but as a cultural investment. His phrase, repeated often enough that it became an organizational mantra, was: Stop starting, start finishing.

It sounds simple. It is not. It means: do not move to the next task until the current one is understood, implemented, tested, and documented well enough for someone who was not in the room to continue it. It means the specification must be complete before development begins. It means the question must be asked before the chalk touches the board.

It is, stated differently, exactly what the professor in Magdeburg was trying to teach.


Eight years later, most of the teams built during that period are still intact. Not because we were lucky. Not because the market was kind, or because management was unusually stable, or because the people who stayed were the people who would have stayed anywhere. Because the people in them had been taught, from their first week, that the question was not an obstruction. The question was the work.

That is not a standard organizational outcome. In most organizations, the pressure to start is higher than the pressure to specify. The metric that measures sprint velocity does not reward the Requirements Engineer who spends a week unpicking a contradictory brief before any development begins — it rewards the team that closes the most tickets. The incentive is always toward drawing the line.

We built a system, briefly, that rewarded the question. The teams it produced lasted. The code they generated could be extended without archaeology. The handoffs between systems worked because the boundaries had been specified before the implementation began.

It was not magic. It was not exceptional talent. It was the institutional equivalent of a professor standing at a blackboard, waiting.

The question, when organizations pay for it, turns out to be worth the wait.


3. The Decay

The death of the Requirements Engineer did not announce itself. There was no memo, no industry-wide decision, no moment of conscious choice. There was only a gradual substitution of vocabulary — one that began in the early 2000s and accelerated through the following decade — and by the time most organizations noticed what had been lost, the person who could have explained it had already been reclassified.

The substitution was framed as progress.

The Agile Manifesto was a genuine response to a genuine failure: heavyweight, document-driven methodologies that produced enormous specification artifacts, almost always wrong by the time development began. Its core insight was correct. What happened next, in the hands of large organizations for which the manifesto was not designed, was not.

The Product Owner role was not a renamed Requirements Engineer. It was conceived as a business-empowered decision-maker — someone with genuine authority over the product’s direction, capable of rejecting requirements on commercial grounds without escalation. In the environments where this worked, the Product Owner owned outcomes. They could kill features. They could halt a sprint because the brief was contradictory.

In the enterprise, none of this was true. The model worked where authority matched the role. In large organizations, it rarely did.

The enterprise Product Owner inherited the ceremonial vocabulary without the authority. They could not reject requirements from departments three organizational levels above them. They could not stop a release because the specifications were incomplete. What they could do — what the role was quietly reshaped to do — was write tickets.


This is how the sequence evolved, across large enterprise IT organizations, over roughly two decades:

    The Decay Sequence — Enterprise IT, ~2000–2020
    ─────────────────────────────────────────────────────────────────

    Requirements   →    Product    →    Business    →    Agile
    Engineer            Owner           Analyst          Proxy

    translates          manages a       documents        receives
    domain intent       backlog by      what stake-      requirements
    into system         business        holders say      from above;
    constraints;        priority;       they want;       reformats
    identifies when     writes user     facilitates      them as
    requirements        stories;        alignment        tickets;
    contradict          owns the        workshops;       manages board
    existing            product         does not         columns;
    behavior; writes    vision in       evaluate         closes sprints
    scenarios that      theory, the     technical
    bound the           sprint board    feasibility
    implementation      in practice
    space

    ◀━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━▶
    specification                                               ticket
    discipline                                              throughput

Each step to the right transfers one thing: the obligation to understand what the system must actually do. The Requirements Engineer owned that obligation. The Agile Proxy outsourced it to the development team — who discovered the contradictions not in a specification review, but in a production incident.

There is a forensic artifact that captures this shift precisely. Here is a ticket from an organization that describes itself as an engineering company:

Ticket-ID 4092. Title: Kafka Export. Description: As a system, I want to push a JSON payload to the Kafka topic.

No field mapping. No mandatory fields. No field lengths or value constraints. No trigger condition. No error handling. No endpoint. No credentials. No test scenario. No description of what the payload contains, what the downstream consumer expects, or what the system should do when the topic is unavailable. Someone received a verbal request, opened a ticket, and moved on.

In a functioning engineering culture, this ticket would not survive its first refinement session. The development team would refuse to estimate it. They would send it back with a list of questions longer than the ticket itself.

In the culture that produced it, one of two things happened. Either nobody asked — because the assumption was that someone would look at the old system and figure it out — or the team spent an hour asking, received no answers, and concluded the session with “we’ll take it into the sprint and clarify the details on the fly.” In both cases, the outcome was the same: the specification gap was not closed. It was transferred to the developers, who discovered what the ticket actually meant when the integration failed in a test environment — or, more commonly, in production.

In the language of the previous chapters: someone drew the line without asking in what thickness, at what angle, or from where to where.
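
For contrast, here is a hedged sketch of the minimum a complete version of that ticket would have had to pin down, expressed as code. The topic name, the field names, and the limits are all invented; what matters is that each question the ticket left open becomes an explicit, checkable constraint.

    # Hypothetical sketch: the questions ticket 4092 never answered, turned
    # into explicit constraints. Topic, fields, and limits are all invented.
    import json

    TOPIC = "order-events"                            # which topic, stated once
    MANDATORY = {"order_id", "event_type", "occurred_at"}
    MAX_LENGTHS = {"order_id": 36, "event_type": 32}  # value constraints

    def validate_payload(payload: dict) -> list:
        # Returns the list of specification violations; empty means publishable.
        errors = [f"missing mandatory field: {f}"
                  for f in sorted(MANDATORY - payload.keys())]
        for field, limit in MAX_LENGTHS.items():
            if len(str(payload.get(field, ""))) > limit:
                errors.append(f"{field} exceeds {limit} characters")
        return errors

    def export(payload: dict, producer) -> None:
        # Trigger condition, rejection behavior, and failure behavior,
        # all made explicit instead of being guessed by the implementer.
        violations = validate_payload(payload)
        if violations:
            raise ValueError("; ".join(violations))   # bad payload: reject
        try:
            producer.send(TOPIC, json.dumps(payload).encode("utf-8"))
        except ConnectionError as exc:
            # Topic unavailable: surface the failure for retry instead of
            # dropping the event silently.
            raise RuntimeError("export failed; event must be retried") from exc

Here "producer" stands for any client with a send(topic, bytes) method; a real Kafka client raises its own error types, which a real specification would also have to name. None of this is bureaucracy. It is the difference between a deliverable and a gesture.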


The cost of this substitution is not most clearly visible in the tickets. It is most clearly visible in what organizations can no longer estimate with any confidence.

Two employees in a non-technical administrative department faced a problem their organization had not solved for them. Over two years, working alongside their primary responsibilities, they built a solution using off-the-shelf productivity tools. It was not architecturally elegant. But it worked. It served the users who needed it.

Years later, the organization decided this solution had to be replaced with one conforming to approved engineering standards. The engineering organization returned with an estimate of several hundred person-days. An alternative implementation, using a mature and widely supported technology, was proposed at roughly eighty percent lower cost. The response from engineering leadership was brief:

That technology is not on our approved stack. We don’t do that here.

The business department continued running the solution they had built themselves. To do so, they were required to request special approval from senior leadership.

This case deserves a moment of careful reading. The two administrative employees understood the domain completely. They knew which edge cases mattered, which data was unreliable, which failure modes had to be handled. They used the simplest available tools and shipped. The engineering organization that produced the much larger estimate did not have this domain knowledge. A significant fraction of the estimated effort was not solving the user’s problem — it was navigating the organization’s own requirements for how the problem must be solved. The Requirements Engineer role existed precisely to bridge this gap. But that role had been reclassified, and the gap had not closed. It had silently widened into a chasm.


But the decay was not confined to the people writing the tickets. It infected the people receiving them.

In a healthy engineering culture, an incomplete specification is met with resistance from the development team. The engineer refuses to draw the line until the parameters are defined. But when an organization systematically rewards velocity over integrity, the engineer adapts. They stop asking for the parameters. They fill the specification vacuum with assumptions. They write code that satisfies the ticket but fails the system.

This is how you get senior engineers who deploy distributed architectures but cannot prevent a database full-table scan — because they have learned to rely on the framework to hide the complexity, right up until the moment it brings down the production environment. The tools became a substitute for the understanding the tools were meant to accelerate.

The logical conclusion of this dynamic is not just bad code. It is institutional delusion. Consider the following case.


A major engineering organization decided to build a platform to centralize its customer communication capabilities. The project had clear sponsorship and a defined scope. Before the first line of code was written, the governance committee received an explicit warning: the chosen technology was unfamiliar to the available team, and the engineers with the necessary expertise were not available within the planned timeline.

The organization proceeded.

What followed is familiar to anyone who has observed this pattern. The engineering lead left. Crisis meetings multiplied. External consultants were brought in. Rescue proposals were formulated — and rejected. After more than a year of active development, the project was terminated by its sponsors.

No post-mortem was conducted. The lessons were not documented. The organization moved on.

In the silence after the termination, a smaller team — embedded in the business unit rather than the central engineering organization — began answering the users’ outstanding requests. They had not been part of the failed project. They understood the domain. Over the following year, they built the functionality the failed project had promised: compliance with data protection requirements, operational reporting, self-service capabilities. By the time this was complete, the application was stable, actively maintained, and functionally equivalent to what the terminated project had intended to deliver.

The central engineering organization then proposed to replace it.

Not because functionality was missing. Not because the application was unstable or unmaintainable. Not because there was a validated business case for the investment. The stated rationale was that the application had been written in a technology that was not on the approved stack. A parallel rewrite in the organization’s prescribed language would be initiated.

The business unit asked what concrete value this would create for its users.

No answer was provided. The proposal stood.


This is the endpoint of the decay.

It begins with a naming convention — Requirements Engineer becomes Product Owner becomes Agile Proxy. It continues with an estimation process that cannot separate implementation cost from institutional overhead. It accelerates when the engineers receiving the tickets stop asking the questions the ticket writers stopped asking first. And it terminates here: an engineering organization so detached from the purpose of its own work that it will consume resources destroying a functioning, user-serving system — not to improve it, but to enforce a compliance principle that exists for its own sake.

The Agile Manifesto said: working software over comprehensive documentation. The organization that absorbed this principle stripped out the “working” and kept the vocabulary.

The student is drawing the line. The professor left the room years ago. And now someone is proposing to demolish the classroom — to rebuild it in the approved architectural style.


4. The Aspiration Gap

A forensic baseline assessment was once conducted on the architecture and engineering practices of an organization that had publicly committed to becoming a global technology powerhouse — comparable, in its own strategic language, to the engineering organizations that define the industry standard.

The assessment evaluated the organization not against its own prior performance, but against the unforgiving operational reality of the industry’s top tier. The results were not what the slides had suggested.

The evaluators found an organization operating at what they categorized as a modern enterprise level — functional, broadly stable, but multiple levels below the stated ambition. The vocabulary was borrowed from the elite: autonomous teams, platform-led products, a responsibility model built on the principle that those who build systems should also run them. The practices had not followed.

Operational readiness was assessed not by automated quality gates, but by the subjective confidence of the engineering team: “no real measurement… confidence grows as they start small and scale up.” Compliance with backup and recovery requirements was verified by screenshots submitted to a ticketing system. When the costs of forensic logging exceeded the monthly budget, the logging was disabled. And during the organization’s peak operational period — when its systems faced maximum load and reliability mattered most — the organization observed a mandatory code freeze.

A code freeze is not a technical decision. It is a confession. It is the organization acknowledging, formally and calendrically, that it does not trust its own architecture to survive a deployment under load. Organizations with genuine engineering maturity increase deployment velocity during peak periods, because their architecture is designed to fail gracefully and their rollback mechanisms are tested. A code freeze means neither of these things is true.

The assessment gave the organization a score. The organization’s slides described a much higher one.


This is the Aspiration Gap.

It is not a communication failure. It is not a strategy problem. It is a calibration failure: the persistent inability of an organization to measure the distance between what it declares and what it delivers.

The gap has a structural cause, and it is not technical. To understand it, consider a question.

I recently asked a senior manager in an organization that describes itself as a global IT powerhouse what objective, measurable criteria an enterprise architect would need to satisfy to receive a top performance evaluation. The question was not hypothetical — I was asking about criteria against which my own contribution would be assessed.

The answer was not a capability matrix. It was not a list of technical competencies or delivery outcomes. The answer, verbatim, was:

“We are not structured that way here.”

This sentence, spoken without apparent awareness of what it reveals, is the most precise diagnosis of the Aspiration Gap I have encountered. An organization that cannot define what excellent performance looks like for its most senior technical roles cannot build the capability those roles require. It can declare an ambition — Powerhouse — but it cannot describe the intermediate states between the declaration and the destination, cannot measure progress toward them, and cannot recognize excellence when it encounters it.

This is not a failure of any individual manager. It is a failure of the system that placed them in their role without requiring them to answer this question before they accepted it.


The contrast becomes legible only when you have seen both sides.

The Director of Engineering described in the previous chapter could answer the question I asked. He could describe, in precise terms, what an excellent Requirements Engineer produced, what the difference was between someone who understood a system and someone who had memorized its documentation, and what the delta looked like between a team that was growing and one that was stagnating. He could make these distinctions because he had built things himself, mentored people through the process, and accumulated enough domain depth to recognize the difference between specification and guesswork.

The mentor knows what finished looks like. This is not a trivial capability. To demand that work be completed before new work begins, you must be able to recognize when work is complete. To say stop starting, start finishing, you must hold a definition of finished that is more precise than “the ticket is in Done.”

The manager who cannot define what excellent performance looks like is not a villain. They are a Verwalter — an administrator. They can track whether the process was followed. They can confirm that the ticket moved through the correct board columns. They can report on velocity. What they cannot tell you is whether the system being built is sound, whether the specification is complete, or whether the engineers on their team are growing or stagnating.

The Verwalter is not equipped to recognize capability debt. The debt is invisible to them, because recognizing it requires exactly the capability they do not have.

This is the Capability Debt: the accumulated distance between what an organization declares its people must be able to do, and what it has actually invested in teaching them to do.

And when the Verwalter attempts to execute a mentor’s slides — “Worldclass.” “IT-Powerhouse.” “You build it, you run it” — they produce the specific dysfunction that the forensic assessment found: an organization that speaks the vocabulary of elite engineering without having built the foundation it rests on.


Why is this tolerated in software engineering, when no equivalent tolerance exists in fields that also build things?

My father’s employer — a national railway infrastructure organization — required him to complete more than six months of specialized technical training before he was permitted to work on or near operational rail infrastructure. Signaling systems. Overhead line technology. Track engineering. Written and oral examinations at each stage. He referred to it, with mild irritation, as doing a second degree. He passed. Then he was allowed on the tracks.

You can complain about the railways, he said later. But their engineers knew what they were doing.

An electrician cannot legally perform installation work without a professional certification. A structural engineer cannot authorize a building design without a licensed stamp. A surgeon cannot operate without a medical license. The assumption underlying all of these requirements is the same: when people build things that other people must trust, the burden of proof for competence precedes the authorization to work.

Software engineering is approximately sixty years old as a professional discipline. Civil engineering has had millennia to develop its standards, codify its failures into regulation, and institutionalize the expectation that builders must prove their capability before they are authorized to practice it. Software has had decades. It has not yet developed the scar tissue — in part because, until now, the cost of the failure was borne by the user, not the builder.

The result is that a software engineer can be promoted to a senior role at a self-described world-class engineering organization, handed responsibility for critical production systems, and asked to define the architecture of customer-facing platforms — with no certification requirement, no standardized competency framework, and no formal verification that they understand the systems they are being asked to build.

A senior executive at one such organization made this observation explicitly: “Just because we call ourselves engineers doesn’t mean we are.” The remark was made in passing, in the context of a broader strategic discussion. It was an accurate diagnosis treated as a rhetorical flourish. No action followed.


The Aspiration Gap is not, in the end, a technology problem. It is an accountability problem.

An organization can declare any ambition it chooses. The declaration costs nothing: not in budget, not in organizational pain, not in the friction of telling capable people that they are not yet capable enough. What does cost, and what requires exactly the kind of leadership that can define finished, recognize excellence, and absorb the discomfort of measuring the distance between aspiration and reality, is closing the gap.

The mentor knows this distance. The Verwalter does not know that it exists.

And until an organization can measure the gap, it cannot pay down the debt.


5. The Intent as the Last Differentiator

The student who drew the line without asking for the specification was at least slow. A human hand holding a piece of chalk can draw perhaps one misspecified line per second. The misspecification can be caught. The professor can intervene. The error is recoverable.

This constraint has been removed.

Generative AI and autonomous code agents now produce tens of thousands of lines of code in response to a specification of approximately the same quality as the ticket described in the previous chapters. Title: Kafka Export. Description: As a system, I want to push a JSON payload to the Kafka topic. Hand this to a sufficiently capable code agent, and it will produce a working implementation in under a minute. It will make every unstated assumption consistently and confidently. It will not ask for the parameters. It does not know that there are parameters to ask for.

The student is drawing the line at industrial speed.

This is not a failure of the technology. The model does exactly what it was built to do: translate a prompt into code. The failure is in the specification. The specification has always been insufficient. At human development speed, this insufficiency was partially absorbed by the developer’s domain knowledge, the engineering team’s implicit understanding of the system, and the slow accumulation of corrections over sprint cycles. At AI speed, every implicit assumption is amplified at the rate of code generation. The organization’s capability debt — the gap between what was intended and what was specified — scales with the velocity.

The organizations that understood this early have begun to make a quiet but consequential shift: away from optimizing for the speed of implementation, toward optimizing for the precision of intent. In a world where implementation is largely automated, the specification is no longer an input to the development process. It is the development process. Everything else is execution.

In the AI era, the only moat is the quality of the question. Machines have infinite answers. Humans must supply the right constraints.


This changes the profile of the person whose work matters most.

For most of the industry’s history, the highest-value engineering role was the one closest to the code: the developer who understood the runtime, the architect who could hold the full system in their head, the engineer who could debug the production incident at two in the morning. These capabilities remain valuable. They are not, however, the capabilities that become scarce as code generation is automated.

What becomes scarce is the capability that was always the foundation: the ability to formulate a complete, unambiguous, machine-evaluable description of what a system must do, under what conditions, with what constraints, and with what defined behavior at the boundary cases. The ability to ask the question the professor was waiting for — not here is the line I drew, but here is why I drew it at this angle, in this thickness, from this point to that one, under this load, for this purpose.

This is not a business analyst function. It is not a project management function. It is the hardest technical discipline in the field, because it requires understanding both the domain and the system — the user’s implicit knowledge and the implementation’s explicit constraints — and producing a specification that bridges them without loss of fidelity in either direction.

The organizations that are still training this capability — that treat the formulation of precise intent as the highest engineering discipline — will enter the AI era with a structural advantage. The organizations that reclassified their Requirements Engineers and rewarded the speed of ticket closure will enter it with a structural deficit that no code agent can compensate for.


An external enforcement mechanism will accelerate this reckoning.

The EU Product Liability Directive, to be transposed into member state law by late 2026, extends product liability to software for the first time in European legal history. When a software product causes harm, the burden of proof shifts toward the producer. It is no longer sufficient to argue that reasonable care was taken. The producer must demonstrate that the system operated as intended: that the intent was documented, that the implementation was verified against it, and that any deviation was known and accepted.

The professor has become a judge. The question is no longer pedagogical.

Can you prove why you drew the line exactly there? In that thickness? Under that load? With that boundary behavior?

If you cannot, you are liable.

For organizations that have replaced their specification discipline with user stories, accepted “confidence grows” as their operational readiness criterion, and verified compliance with screenshots submitted to a ticketing system, this is not a process improvement. It is a reckoning. The capability debt that the Verwalter could not see is now enumerated on the balance sheet.

For those who have maintained the discipline of documented intent, this will feel like confirmation. For the rest, it will feel like a deadline.


The question this raises is not whether the discipline is necessary. It is whether the industry is capable of rebuilding it in time.

The instinct is to look for a structural solution — a certification body, a regulatory requirement, a new job title. These are not wrong, but they are not sufficient. The industry spent sixty years failing to institutionalize competence requirements that physical engineering fields built over millennia. A new regulation does not produce the mentors who can teach what the regulation requires.

There is, however, a mechanism the industry already possesses and has largely failed to use: the tools that code travels through on its way to production.

My father could not touch an operational railway track until he had passed six months of examinations. The railway did not trust the declaration of competence. It required proof. The examination was the gate.

The software industry has built no equivalent gate. Deploying code to production systems trusted by millions of users requires, in most organizations, a ticket in the correct column. The code is tested for defects. It is scanned for known vulnerabilities. It is not examined for the completeness of its specification or the coherence of its architectural intent.

This can change. Not through credentialing, but through the tools.

The CI/CD pipeline has been, for most of its existence, a transport mechanism: a conveyor belt that moves code from development to production. This is necessary. It is not sufficient. The pipeline can be the academy. It can be the examination that the railway required — not in a post-mortem after the incident, but before the code touches anything a user depends on. It can ask the question the professor in Magdeburg was waiting to hear: not here is the line, but here is the specification for the line — its purpose, its constraints, its defined behavior when the upstream system is unavailable.

A ticket like the Kafka export cannot pass this gate. It does not contain the specification. The specification must be supplied, or the code does not proceed.

This is not a documentation requirement in the conventional sense. Documentation after the fact is the screenshot in the ticketing system — evidence of a completed gesture, not a verified intent. The pipeline as academy requires evidence before the act: a machine-evaluable record of what the system is supposed to do and why, against which the implementation can be tested and, if necessary, defended.
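
In its simplest possible form, such a gate is a short script that refuses to let a change proceed without a machine-evaluable intent record. The following is a hypothetical sketch; the file name, the required keys, and the repository layout are all invented.

    # Hypothetical CI gate: fail the pipeline unless the change ships with a
    # machine-evaluable intent record. File name and keys are invented.
    import json
    import sys
    from pathlib import Path

    REQUIRED_KEYS = {
        "purpose",           # why the line is being drawn at all
        "constraints",       # thickness, angle, load: the bounding parameters
        "failure_behavior",  # what the system does when the upstream is gone
        "verification",      # how the implementation is checked against intent
    }

    def main() -> int:
        intent_file = Path("INTENT.json")
        if not intent_file.exists():
            print("gate: no intent record; supply the specification "
                  "before this change proceeds", file=sys.stderr)
            return 1
        record = json.loads(intent_file.read_text())
        missing = REQUIRED_KEYS - record.keys()
        empty = {k for k in REQUIRED_KEYS & record.keys() if not record[k]}
        for key in sorted(missing | empty):
            print(f"gate: intent record does not answer: {key}", file=sys.stderr)
        return 1 if (missing or empty) else 0

    if __name__ == "__main__":
        sys.exit(main())

A real gate would go further and verify the implementation against the record. But even this trivial version enforces the one thing the ticket never contained: evidence that the question was asked.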


Engineering has always been the acceptance of responsibility for what is built. Not the typing of syntax. Not the closing of tickets. The acceptance of responsibility for the system’s behavior under load, at the boundary cases, in the conditions the specification did not anticipate.

What generative AI has done is not replace this responsibility. It has clarified it. When the implementation is automated, the only irreducibly human contribution is the judgment that produces the specification. The question is the last thing that cannot be delegated.

Producing engineers who can ask this question — who treat the formulation of intent as the hardest and most valuable part of their work — requires exactly the kind of institutional investment that the previous chapters described and that most organizations have declined to make. It requires mentors who know what finished looks like. It requires onboarding programs that teach the domain before the toolchain. It requires a definition of engineering that begins with responsibility and ends with code, rather than the other way around.

The Verwalter will not build this. They cannot see what is missing, because what is missing is the capability to see it.

But some organizations will. Some already are. And when the regulatory and technological pressures of the next decade resolve, the distance between those organizations and the ones that kept optimizing for velocity will be the most consequential capability gap in the industry’s short history.

The question is the only moat. And it has always been available to anyone willing to ask it.


6. Das Bootcamp — Epilog

By nine o’clock in the morning, the room held fourteen people who did not want to be there.

Not hostile. Not disengaged. Anxious. The kind of anxious that comes from being told you are spending three days learning something you were not aware you needed to learn, in a domain that does not quite feel like yours, assessed by colleagues you do not know. The mood meter we ran at the start of each session — a simple question, answered anonymously — confirmed what the body language already showed. Anxious. Uncertain. Some: a little defensive.

The bootcamp was not an official program. It had no budget line, no sponsor above the director level, no formal mandate. Two colleagues had looked at the gap between what the business consulting function was being asked to do — translate business requirements into technical systems, in an organization undergoing a large-scale cloud migration — and what its members were equipped to do, and decided the gap was too wide to leave unaddressed. We built a curriculum. We ran the first session. Then the second. Then the third. By the fourth iteration, the waitlist was longer than the room.


On the first morning of the fourth session, a participant introduced himself. He worked in data center construction. Physical infrastructure — cabling, power, cooling, the unglamorous machinery that software runs on. He was curious, he said, but uncertain about the relevance of what we would cover.

A few seats away, another colleague echoed the sentiment, summarizing his own role with a sentence I wrote down immediately:

My responsibility ends at the Jira ticket.

This is the natural conclusion of everything the previous chapters describe. Not an abdication. A learned behavior. In an organization that measures throughput and rewards closure, the rational response is to define your contribution as the act of moving a ticket from one column to the next. The alternative — asking what the ticket is actually asking for, whether the specification is complete, whether the implementation will serve the user who needs it — is unrewarded, and in some contexts actively penalized.

We told him: that is exactly where we start.


By mid-morning, the room was working through a question about system diagrams. One participant from the data center team found the variety of diagram types disorienting. In a physical construction project, the documentation follows a clear progression: first a sketch, then a site plan, then a floor plan, then the technical drawings for electrical, plumbing, and structural systems. The sequence is understood. Each layer of detail follows the previous one. But in software documentation, the range of diagram types seemed to him like an open question with no right answer — a variance he experienced as a lack of discipline.

I asked him to describe the construction sequence again.

He did. The sketch. The site plan. The floor plan. The technical drawings.

I told him: in software, we have the same sequence. A high-level concept diagram. A system context. An architecture diagram showing the components and their relationships. A sequence diagram for the technical interactions. A data model for the storage layer. The diagram types are different. The logic is the same: each layer specifies what the previous layer left implicit.

He paused. Then: “So it’s not that different.”

It is not that different. The physical world has ten thousand years of precedent for this kind of thinking. Software has sixty. The vocabulary is newer. The discipline is the same.


By five o’clock, the mood meter read differently. Grateful. Relieved. Inspired. Not everyone. But most.

One participant said it was the first time in three years of working in a technical environment that someone had explained what the technical people were actually doing and why. Not the tools. The thinking. Another said he had been writing tickets for two years and had never understood why the developers kept coming back to him with questions that he thought his tickets had answered. After one afternoon with a worked example of what a complete specification looked like versus an incomplete one, he understood.

When a colleague asked me at the end of the day why we had taken the initiative to build this, I gave him the most honest answer I had: “Because we see a capability gap. Between what the organization needs people to be able to do, and what most of them currently know how to do.”

He nodded. It was, he said, a more honest answer than he had expected.

The session ended. We packed up. Fourteen people left the room with something they had not brought in: a framework for asking the question.


The following month, the program was handed to a central administrative function to coordinate. In the process, the three-day curriculum was reduced to two days.

The reason given: too technical.

The feedback from every participant across four iterations had been consistent: the program was valuable precisely because it was technical. The participants who arrived anxious about the technical content left grateful for it. The moment of understanding — for the data center engineer, for the ticket writer who finally saw the gap between request and specification — required encountering the technical reality, not a simplified version of it.

The organization that could not define what excellent performance looked like, that could not specify the criteria for a top evaluation, that observed a code freeze because it did not trust its own architecture — that same organization looked at a curriculum that was demonstrably working and removed the part it found uncomfortable.

A day was cut. The Verwalter does not see the gap. Even when the gap is being closed in front of them, they see the schedule.


I do not tell this to assign blame, or to conclude that the effort was wasted. Fourteen people per session leave with something they did not have. That is not nothing. In the previous chapter, I argued that some organizations are already doing this work — building the institutional capacity to ask the question, paying down the debt through mentorship and onboarding and the deliberate cultivation of specification discipline.

This is what that looks like in practice. It is not a strategic initiative. It is two colleagues, a borrowed room, and a mood meter at nine in the morning.

It is slow. It is insufficient at scale. It is exactly the kind of work the organizations that survive the next decade will have been doing, quietly, while the others were optimizing for velocity.


The professor in Magdeburg waited an hour for a question that never came. My father carried the misunderstanding for fifty years. We sat at a breakfast table and talked it through, and something that had been closed for half a century opened, briefly, in the space between two sentences.

Draw a line is not a command. It is a test.

The answer an engineer gives does not define the line. It defines the engineer.


Sources & Further Reading

EU Product Liability & Regulation

  • European Union: Directive (EU) 2024/2853 on Liability for Defective Products (2024) – The revised Product Liability Directive discussed in Chapter 5; extends product liability to software, with transposition into member state law due by December 2026.

Software Methodology & Engineering Culture

  • Beck, K. et al.: Manifesto for Agile Software Development (2001) – The foundational document whose enterprise adoption and subsequent misapplication is traced in Chapter 3.
  • Wiegers, K. & Beatty, J.: Software Requirements (3rd ed., 2013) – The closest thing the industry has to a canonical reference for the discipline that Chapter 2 describes and Chapter 3 chronicles the disappearance of.
  • Brooks, F.: The Mythical Man-Month (1975) – On the irreducibility of conceptual integrity in software; the intellectual precursor to the specification discipline this essay argues must be recovered.
  • DeMarco, T. & Lister, T.: Peopleware: Productive Projects and Teams (1987) – On the organizational and human factors that determine whether engineering cultures thrive or decay.
  • Felix Radzanowski: The Syntax of Dissent (2026) – On the architect as guardian of systemic integrity and the cost of organizational silence.
  • Felix Radzanowski: The Debt of Decision (2026) – On the economics of architectural survival, the EU Product Liability Directive as a forcing function, and why the only currency that matters in the age of AI-generated code is documented intent.