Every few months, another headline declares that AI agents are coming for software engineers. The tone is always apocalyptic: junior devs will be obsolete, code generation will automate whole teams, and the only safe skill is prompt engineering. A new paper from researchers at Chalmers University of Technology and the Volvo Group takes an axe to that narrative, and it's about time someone did. Their argument is both intellectually honest and practically urgent: AI agents aren't replacing software engineering—they're expanding it into territory that code alone could never reach.
The core insight is beautifully simple. Software engineering has always been about creating executable artifacts—things that reliably produce a behavior when run. The researchers propose a "semi-executable stack" of six rings, starting with deterministic code at the center and radiating outward through prompts, agent workflows, guardrails, organizational decision routines, and finally societal and regulatory frameworks like the EU AI Act. Everything beyond ring 1 is semi-executable: it shapes behavior, but its execution depends on human interpretation or probabilistic inference. The further out you go, the more the system's success depends on things that can't be fully automated.
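The ring structure is easy to make concrete. Here is a minimal sketch, assuming nothing beyond what the article states: the ring names come from the paper's description, but the `Ring` class, the `STACK` list, and the `is_semi_executable` helper are illustrative names of my own, not anything from the paper itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ring:
    level: int
    name: str

# The six rings as described: deterministic code at the center,
# radiating outward to societal and regulatory frameworks.
STACK = [
    Ring(1, "deterministic code"),
    Ring(2, "prompts"),
    Ring(3, "agent workflows"),
    Ring(4, "guardrails"),
    Ring(5, "organizational decision routines"),
    Ring(6, "societal and regulatory frameworks"),
]

def is_semi_executable(ring: Ring) -> bool:
    # Per the paper's framing: everything beyond ring 1 depends on
    # human interpretation or probabilistic inference when "executed".
    return ring.level > 1

semi = [r.name for r in STACK if is_semi_executable(r)]
print(semi)  # the five semi-executable rings, innermost first
```

The point of writing it down this way is that the boundary is a property of the artifact, not of the tooling: a prompt stays semi-executable no matter how good your prompt IDE gets.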
What matters here is the shift in where the value lies. For decades, the hardest engineering problems lived inside ring 1: making loops faster, databases more consistent, compilers more clever. Now, the bottleneck has moved outward. The scarcer skill is no longer writing code that compiles—it's deciding what to build, how to validate that the whole stack works together, and how to govern systems whose behavior changes with every prompt tweak. The paper calls this out directly: "The scarce skill shifts from building faster to deciding what is worth building or changing, which ring is actually being changed, how that change will be validated, how it will be governed, and how it will be maintained over time." That's not a deskilling prophecy. That's a promotion.
Let's be blunt about why this matters for practitioners right now. The teams that treat AI as a faster compiler will see local productivity gains and then watch their maintenance costs explode. Prompt drift, hallucination cascades, and ambiguity in agent orchestration all demand the same discipline we apply to code—testing, monitoring, version control, and traceability. The researchers are right to reframe common objections as engineering tasks. When an agent hallucinates, you don't abandon the technology; you build a better guardrail. When code velocity increases, you invest in feedback loops, not just output. The engineering mindset doesn't disappear; it expands to cover new kinds of artifacts.
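What "the same discipline we apply to code" looks like for a prompt is worth sketching. The example below is hypothetical, not from the paper: the prompt template, the `prompt_fingerprint` helper, and the `check_guardrails` linter are names I invented to illustrate version control and static checks for a ring-2 artifact.

```python
import hashlib

# A hypothetical prompt artifact, checked into version control
# alongside the code that uses it.
PROMPT_V2 = """You are a release-notes summarizer.
Summarize the diff below in at most three bullet points.
Diff:
{diff}
"""

def prompt_fingerprint(template: str) -> str:
    """Content hash so CI can detect untracked prompt drift."""
    return hashlib.sha256(template.encode()).hexdigest()[:12]

def check_guardrails(template: str) -> list[str]:
    """Static checks on a prompt, analogous to a linter for code."""
    problems = []
    if "{diff}" not in template:
        problems.append("missing required {diff} placeholder")
    if "at most" not in template:
        problems.append("no explicit bound on output length")
    return problems

# In CI: fail the build if a guardrail check regresses, and flag any
# fingerprint change that arrives without a version bump.
assert check_guardrails(PROMPT_V2) == []
print(prompt_fingerprint(PROMPT_V2))
```

None of this is exotic; it is the familiar test-and-trace toolkit pointed at an artifact whose "execution" is probabilistic rather than deterministic.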
My own take is sharper. The most dangerous thing you can do right now is treat AI agents as drop-in replacements for junior engineers. That's a failure of imagination, not a pragmatic cost cut. The real opportunity is to let AI handle the lower rings—code generation, basic testing, documentation—while engineers focus on the outer rings that actually determine system success. That means understanding organizational decision-making, designing escalation policies, and navigating regulatory constraints. Those are not soft skills; they are software engineering skills in a new domain. The engineers who embrace this will become invaluable. The ones who keep trying to optimize only ring 1 will become commodities.
The paper also makes a subtle but powerful observation about scale. AI doesn't need to match the best human engineer to transform a team; it just needs to be good enough for the long tail of routine tasks it can handle. And as domain experts start building their own systems using natural language, the need for clean engineering practices—testing, monitoring, governance—grows rather than shrinks. The messy reality of semi-executable artifacts demands more rigor, not less.
This is a genuinely optimistic vision of the future of our profession. Yes, the role changes. But it changes toward more strategic, more impactful work. The six-ring model gives us a language to talk about that transition, and it exposes the huge gap in tooling for the outer rings. We have decades of investment in IDEs, CI/CD pipelines, and test frameworks for code. We have almost nothing equivalent for prompts, workflows, or decision routines. That gap is not a problem to fear—it's a greenfield for the next generation of engineers who understand that writing code was always just the beginning.