The question of when you should not use it is more interesting than the naming debate. Agentic coding works well when the solution space is constrained and verification is cheap: existing tests cover the behavior, the output format is clear, and you can tell whether the result is correct by inspection. CRUD endpoints, data transformations, boilerplate integration code.
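For example, a transformation like this (names invented for illustration) is squarely in that bounded category, because an existing test makes verification nearly free:

    # Hypothetical example: a bounded task with cheap verification.
    # An agent can safely rewrite normalize_row() because the test
    # below pins down the expected behavior.
    def normalize_row(row: dict) -> dict:
        return {key.strip().lower(): value.strip() for key, value in row.items()}

    def test_normalize_row():
        assert normalize_row({" Name ": " Ada "}) == {"name": "Ada"}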
Where it breaks down is any task where you discover the requirements during implementation. Most hard engineering problems are like this -- you start building, realize the data model is wrong, reshape the abstraction, and iterate. An agent can execute your architecture, but it can't tell you your architecture is the wrong one. That judgment still requires someone who understands the domain deeply enough to notice when the code is solving the wrong problem correctly.
The name matters less than recognizing this boundary. Call it agentic engineering or agentic coding, the skill is knowing which tasks to hand to the agent and which to think through yourself first.
I find it quite useful to work out, plan, and refine ideas with agents. An agent's ability to call out approaches you haven't thought of is really powerful. I find it useful to steer their feedback and proposals toward the exact constraints you have, or to give yourself a confidence check on whether you are leaning toward a solution for the right reasons. The best part is being able to test 2-3 avenues and then come back and evaluate the results. Normally you would commit to one approach and spend all your time on it, assess whether it was bad enough to try something else, and move on. I find agents completely flip the script on research and planning. I am better able to work on hard problems than ever before with these tools. I think people severely limit themselves if they only use them at the "build it" phase.
Honestly? I've been playing with using LLMs specifically for that reason. I'm far more likely to make prototypes that I specifically intend to throw away during the development process.
I try out ideas that are intended to explore some small aspect of a concept, and just ask the LLM to generate the rest of whatever scaffold is needed to verify the part that I'm interested in. Or I use an LLM to generate the roughest MVP prototype you could imagine, and start using it immediately to calibrate my initial intuition about the problem space. Eventually you get to the point where you've tried out your top 3-5 ideas for each different corner of your codebase, and you can really nail down your spec, and then it's off to the races building your "real" version.
I have a mechanical engineering background, so I'm quite used to the concept of destructive validation testing. As soon as I made that connection while exploring a new idea via Claude Code, it all started feeling much more natural. Now my coding process is far more similar to my old physical product design process than I'd ever imagined it could be.
I don't think we should be making this distinction. We're still engaged in software engineering. This isn't a new discipline, it's a new technique. We're still using testing, requirements gathering, etc. to ensure we've built the right product and built the product right. Just with more automation.
Yeah, I see agentic engineering as a sub-field or a technique within software engineering.
I entirely agree that engineering practices still matter. It has been fascinating to watch how so many of the techniques associated with high-quality software engineering - automated tests and linting and clear documentation and CI and CD and cleanly factored code and so on - turn out to help coding agents produce better results as well.
I agree, partly. I feel the main goal of the term “agentic engineering” is to distinguish the new technique of software engineering from “Vibe Coding.” Many felt vibe coding insinuated you didn’t know what you were doing; that you weren’t _engineering_.
In other words, “Agentic engineering” feels like the response of engineers who use AI to write code, but want to maintain the skill distinction to the pure “vibe coders.”
My preferred definition of software engineering is found in the first chapter of Modern Software Engineering by David Farley:
Software engineering is the application of an empirical, scientific approach to finding efficient, economic solutions to practical problems in software.
As for the practitioner, he said that they:
…must become experts at learning and experts at managing complexity
Modularity
Cohesion
Separation of Concerns
Abstraction
Loose Coupling
Anyone who advocates for agentic engineering has been very silent about the above points. Even by that very first definition, it seems that we're no longer seeking to solve practical problems, nor proposing economical solutions for them.
That definition of software engineering is a great illustration of why I like the term agentic engineering.
Using coding agents to responsibly and productively build good software benefits from all of those characteristics.
The challenge I'm interested in is how we professionalize the way we use these new tools. I want to figure out how to use them to write better software than we were writing without them.
See my definition of "good code" in a subsequent chapter: https://simonwillison.net/guides/agentic-engineering-pattern...
I've read the chapter, and while the description is good, there are no actual steps, nor even a general direction or philosophy on how to get there. It does not need to be perfect, it just needs to be practical. Then we could contrast the methodology with what we already have to learn the tradeoffs, whether they can be combined, etc…
Anything that relates to “Agentic Engineering” is still hand-wavey or trying to impose a new lens on existing practices (which is why so many professionals are skeptical)
ADDENDUM
I like this paragraph of yours:
We need to provide our coding agents with the tools they need to solve our problems, specify those problems in the right level of detail, and verify and iterate on the results until we are confident they address our problems in a robust and credible way.
There's a parallel to be made with Unix tools (best described in Unix Power Tools) or with Emacs. Both aim to give the user a set of small tools that can be composed to do amazing work. One similar observation from my experiments with agents was creating small deterministic tools (much the same thing I do with my OS and Emacs), and then letting the agent be the driver. Such tools have simple instructions, but their worth is in their combination. I've never had to use more than 25 percent of the context, and I'm generally done within minutes.
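For instance, here is a sketch of what I mean by a small deterministic tool (the specifics are made up): it does one obvious thing, and the agent supplies the composition.

    #!/usr/bin/env python3
    # Hypothetical example of a small deterministic tool an agent can
    # drive: read JSON records on stdin, keep only those where the
    # given key equals the given value. No hidden state; its worth
    # comes from being piped together with other tools like it.
    import json
    import sys

    key, wanted = sys.argv[1].split("=", 1)
    for line in sys.stdin:
        record = json.loads(line)
        if str(record.get(key)) == wanted:
            sys.stdout.write(line)

The agent chains tools like this with ordinary shell pipes, so each step stays verifiable on its own.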
That's what the rest of the guide is meant to cover: https://simonwillison.net/guides/agentic-engineering-pattern...
There should be more willingness to have agents fail loudly with TODOs rather than try to one-shot everything.
At the very least, agentic systems must have distinct coders and verifiers. Context rot is very real, and I've found with some modern prompting systems there are severe alignment failures (literally 2023 LLM RL levels of stubbing out and hacking tests just to get tests "passing"). It's kind of absurd.
I would rather an agent make 10 TODOs and fail loudly than make one silent fallback, sloppy architectural decision, or act of outright malicious compliance.
This wouldn't work in a real company, because it would devolve into office politics and drudgery. But agents don't have feelings and are excellent at synthesis. Have them generate their own (TEMPORARY) data.
Agents can be spun off to run so many experiments and create so many artifacts, and a lot more (TEMPORARY) artifacts are ripe for analysis by other agents. That's the theory, anyway.
The effectively Platonic view that we just need to keep specifying more and more formal requirements is not sustainable. Many top labs are already doing code review with AI because of the sheer volume of code output.
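A minimal sketch of that coder/verifier split (run_agent is a placeholder for whatever agent runtime you use, not a real API):

    # Hypothetical sketch: distinct coder and verifier agents, each
    # started with a fresh context to avoid context rot.
    def run_agent(role: str, prompt: str) -> str:
        raise NotImplementedError("wire this up to your agent runtime")

    def build_with_verification(task: str) -> str:
        patch = run_agent("coder",
                          f"Implement: {task}. If anything is ambiguous, "
                          "leave a loud TODO and stop.")
        verdict = run_agent("verifier",
                            "Reply REJECT if this patch stubs out logic or "
                            f"games the tests:\n{patch}")
        if "REJECT" in verdict:
            # Fail loudly rather than accept a silent fallback.
            raise RuntimeError(f"Verifier rejected the patch: {verdict}")
        return patch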
From Kai Lentit's most recent video: https://youtu.be/xE9W9Ghe4Jk?t=260
The term feels broken when adhering to standard naming conventions, such as Mechanical Engineering or Electrical Engineering, where "Agentic Engineering" would logically refer to the engineering of agents.
Yeah, Armin Ronacher has been calling it "agentic coding" which does at least make it clear that it's not a general engineering thing, but specifically a code related thing.
I think “agent engineering” could refer to the latter, if a distinction needs to be made. I do get what you’re saying, but when I heard the term, I personally understood its meaning.
I’ve been using the term “agentic coding” more often, because I am always shy to claim that our field rises to the level of the engineers that build bridges and rockets. I’m happy to use “agentic engineering” however, and if Simon coins it, it just might stick. :)
Thanks for sharing your best practices, Simon!
I decided to go with it after z.AI used it in their GLM-5 announcement: https://z.ai/blog/glm-5 - I figured if the Chinese AI labs have picked it up that's a good sign it's broken out.
The bounded vs unbounded distinction is spot on. In my experience, the real unlock with agents isn't single-agent capability -- it's running multiple agents on independent tasks in parallel. One agent refactoring module A while another writes tests for module B. The constraint is making sure tasks are truly independent, which forces you to think about architecture more carefully upfront.
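A sketch of that workflow (assuming an agent CLI roughly like Claude Code's `claude -p`, and one checkout per task so the agents can't collide; the paths and prompts are made up):

    # Hypothetical sketch: two agents on truly independent tasks.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    tasks = [
        ("worktrees/refactor-a", "Refactor module A; keep its public API stable."),
        ("worktrees/tests-b", "Write unit tests for module B."),
    ]

    def run_task(args):
        workdir, prompt = args
        # Each agent works in its own checkout, in parallel.
        result = subprocess.run(["claude", "-p", prompt],
                                cwd=workdir, capture_output=True, text=True)
        return result.stdout

    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(run_task, tasks))

Separate git worktrees are one way to get the independence guarantee; the point is that the isolation has to be designed in, not hoped for.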
Is there any article explaining how AI tools have evolved since the release of ChatGPT? Everything up to MCP makes sense to me, but since then it feels like there is no clear definition for all the new AI jargon.
I think there is a meaningful distinction here. It's true that writing code has never been the sole work of a software engineer. However, there is a qualitative difference between an engineer producing the code themselves and an engineer managing code generated by an LLM. When he writes that there is "so much stuff" for humans to do outside of writing code, I generally agree, and would sum it up with one word: accountability. Humans have to be accountable for that code in a lot of ways, because ultimately accountability is something AI agents generally lack.
I think within the industry and practice there's going to be a renewed philosophical and psychological examination of exactly what accountability is over the next few years, and maybe some moral reckoning about it.
What makes a human a suitable source of accountability and an AI agent an unsuitable one? What is the quantity and quality of value in a "throat to choke", a human soul who is dependent on employment for income and social stature and is motivated to keep things from going wrong by threat of termination?
Spot on.
https://news.ycombinator.com/item?id=47243272
Sure, you could argue it's like writing code that gets optimized by the compiler for whatever CPU architecture you're using. But the main difference between layers of abstraction and agentic development is the "fuzziness" of it. It's not deterministic. It's a lot more like managing a person.
Agentic engineering is working from documentation -> code, automating the translation process via agents. This is distinct from the waterfall process, which describes the program but not the code itself; waterfall documentation cannot be translated directly to code. Agent plans and sessions have way more context and detail than waterfall captures, due to the difference in scope.
I think prompt engineering is obsolete at this point, partly because it's very hard to do better than just directly stating what you want. Asking for too much tone modification, role-playing or output structuring from LLMs very clearly degrades the quality of the output.
"Prompt engineering" is a relic of the early hypothesis that how you talk to the LLM is gonna matter a lot.
Prompt engineering didn't imply coding agents. That's the big difference: we are now using tools that write and execute the code, which makes for massively more useful results.
Prompt engineering was coined before tooling like Claude Code existed, when everyone copied and pasted from ChatGPT to their editor and back.
Agentic coding highlights letting the model code directly in your codebase. I guess it's the next level forward.
I keep seeing "agentic engineering" more and more, even in job postings, so I think this will be the terminology used to describe someone building software while letting an AI model output the code. It's not to be confused with vibe coding, which is also possible with coding agents.
Not saying that AI doesn't have a place, or that models aren't getting better, but there is a seriously delusional state in this industry right now.
But to your point, I think this year it's quite likely we'll see at least 1 or 2 major AI-related security incidents.
I mean, agents as a concept have been around since the 70s; we've added LLMs as an interface, but the concept (take input, loop over tools or other instructions, generate output) is very, very old.
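That loop really is tiny. A sketch, with the model call and the tool set left as placeholders:

    # Hypothetical sketch of the decades-old agent loop: take input,
    # loop over tools, generate output. Only the LLM part is new.
    def agent_loop(goal, llm, tools, max_steps=10):
        observation = goal
        for _ in range(max_steps):
            action = llm(observation)            # decide the next step
            if action["type"] == "finish":
                return action["output"]          # generate final output
            tool = tools[action["tool"]]         # pick a tool...
            observation = tool(action["input"])  # ...and feed the result back
        return "TODO: step budget exhausted without finishing"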
Claude gave a spot-on description a few months back:
The honest framing would be: “We finally have a reasoning module flexible enough to make the old agent architectures practical for general-purpose tasks.” But that doesn’t generate VC funding or Twitter engagement, so instead we get breathless announcements about “agentic AI” as if the concept just landed from space.
If you believe coding agents produce working code, why was the decision below made?
0 - https://www.businessinsider.com/amazon-tightens-code-control...