Jul 21, 2025
You Don't Need An Agent To Be Agentic
Jason Rebholz

Gartner coined the term “agent washing” to highlight how marketers are rebranding existing products and pushing the Agentic AI narrative harder than Billy Mays pushed cleaning products.
They estimated that out of the thousands of Agentic AI vendors, only 130 are real. Billy would be proud, but it puts organizations looking to build with AI in an awkward spot.

While AI agents sound sexy, and they are, they aren’t necessary for the vast majority of problems that companies are trying to solve. It likely would make things far worse for companies to go the agentic route because it’s a more complex solution. It’s the equivalent of buying a bus to drive yourself to work. Sure, it will get you there, but it’s far more costly and complicated than just buying a normal car like the rest of us.
But fear not: you can still use AI to solve your problems. When Salesforce says that 50% of their work is done by AI, that doesn't mean it's all done with agents. You can start with basic workflows and then work your way up to more complicated agent-based solutions.
Thankfully, Anthropic has given us an agentic system path to understand the stepping stones towards agents. Let’s explore…

First, some terminology that will clear up a lot of confusion. There are two architectural types:
Workflows: Per Anthropic, these are “systems where LLMs and tools are orchestrated through predefined code paths.” Workflows are far easier to build and more predictable, and this is where you need to start.
Agents: Per Anthropic, these are “systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.” As you can tell, there’s more latitude given to the agents here, which means more things that can go wrong.
As I said before, don't overcomplicate a solution to your problem. If you don't want to listen to me, then listen to Anthropic when they say to “find the simplest solution possible.”
“When building applications with LLMs, we recommend finding the simplest solution possible, and only increasing complexity when needed. This might mean not building agentic systems at all. Agentic systems often trade latency and cost for better task performance, and you should consider when this tradeoff makes sense.” (Anthropic's blog post on building effective agents)
With all of that out of the way, let’s explore Anthropic’s common patterns for agentic systems.
The Foundation: Augmented LLM
Quite simply, this is an LLM with additional support like access to additional knowledge bases (e.g., retrieval-augmented generation [RAG]), access to tools (e.g., MCP servers to access different tools), and memory (e.g., the ability to retain specific information for improved future performance).

Source: Anthropic's blog post on building effective agents
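To make the idea concrete, here is a minimal sketch of an augmented LLM in Python. Everything here is hypothetical: `fake_llm` stands in for a real model API call, and the knowledge base, tool registry, and memory are toy stand-ins for RAG, MCP servers, and real memory stores.

```python
# Sketch of an "augmented LLM": a model wrapper with retrieval, tools, and
# memory bolted on. `fake_llm` is a placeholder for a real model API call.

def fake_llm(prompt: str) -> str:
    # A real implementation would call a model provider here.
    return f"answer based on: {prompt}"

class AugmentedLLM:
    def __init__(self):
        self.knowledge = {"pto policy": "Employees get 20 days of PTO."}  # RAG stand-in
        self.tools = {"word_count": lambda text: str(len(text.split()))}  # tool registry
        self.memory = []  # retained across calls for improved future performance

    def retrieve(self, query: str) -> str:
        # Naive retrieval: substring match against the knowledge base.
        return " ".join(v for k, v in self.knowledge.items() if k in query.lower())

    def ask(self, query: str) -> str:
        context = self.retrieve(query)
        prompt = f"Context: {context}\nHistory: {self.memory}\nQuestion: {query}"
        answer = fake_llm(prompt)
        self.memory.append((query, answer))  # remember this exchange
        return answer
```

The point is that the model itself is unchanged; the augmentation is plumbing you control around it.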
Taking the concept of augmented LLMs, we can then look at basic LLM workflows that incorporate various techniques. Each technique gets more complicated as we go along, from basic building blocks to more advanced workflows, ultimately leading to agents.
Prompt-Chaining: The type A of workflows, this breaks a task down into a sequence of organized, logical steps, with each step building on the last. It can include gates that verify the output of one step before the process continues.
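A minimal sketch of prompt chaining, with a stub standing in for real model calls (the function names and outline format are made up for illustration):

```python
# Sketch of prompt chaining: each step's output feeds the next, with a gate
# that validates intermediate output before the chain continues.

def fake_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    if prompt.startswith("Outline:"):
        return "1. Intro 2. Body 3. Conclusion"
    return f"Draft expanded from [{prompt}]"

def gate(outline: str) -> bool:
    # Gate: require at least three sections before spending tokens on a draft.
    return outline.count(".") >= 3

def write_article(topic: str) -> str:
    outline = fake_llm(f"Outline: {topic}")   # step 1: outline
    if not gate(outline):                     # gate: verify before continuing
        raise ValueError("Outline failed validation gate")
    return fake_llm(f"Write: {outline}")      # step 2: draft from the outline
```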

Routing: The taskmaster of workflows, this breaks complex tasks into different categories and assigns each to the specialized LLM best suited for it. Just like you don't want to give an advanced task to an intern or a basic task to a senior employee, this finds the right LLM for the right job.
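A minimal routing sketch. The classifier and handlers here are stubs; in practice the classification itself is usually an LLM call, and the model names are purely illustrative:

```python
# Sketch of routing: classify the input, then dispatch to the handler (and
# model) best suited for that category of task.

def classify(query: str) -> str:
    # Stand-in for an LLM classification call.
    if "refund" in query.lower():
        return "billing"
    if "error" in query.lower():
        return "technical"
    return "general"

HANDLERS = {
    # Cheap model for routine questions, stronger model for hard ones.
    "billing":   lambda q: f"[small-model] billing reply to: {q}",
    "technical": lambda q: f"[large-model] technical reply to: {q}",
    "general":   lambda q: f"[small-model] general reply to: {q}",
}

def route(query: str) -> str:
    return HANDLERS[classify(query)](query)
```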

Parallelization: The multi-tasker workflow, this separates tasks across multiple LLMs and then combines the outputs. This is great for speed, but also collects multiple perspectives from different LLMs to increase confidence in the results.
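A minimal parallelization sketch: the same question goes to several models at once and the answers are combined by majority vote. The three model functions are stubs standing in for real API calls.

```python
# Sketch of parallelization: fan the question out to multiple "models"
# concurrently, then combine the answers by majority vote.

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def model_a(q: str) -> str: return "yes"   # stubs for real model calls
def model_b(q: str) -> str: return "yes"
def model_c(q: str) -> str: return "no"

def ask_all(question: str) -> str:
    models = [model_a, model_b, model_c]
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        answers = list(pool.map(lambda m: m(question), models))
    # Majority vote: multiple perspectives raise confidence in the result.
    return Counter(answers).most_common(1)[0][0]
```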

Orchestrator-workers: The middle manager of the workflows, this has an LLM that breaks down the tasks and delegates them to other LLMs, then synthesizes their results. This is best suited for complex tasks where you don’t quite know what subtasks are going to be needed.
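A minimal orchestrator-workers sketch. The key difference from plain routing is that the subtask list is decided at runtime rather than hard-coded; both the planner and the workers here are stubs for LLM calls.

```python
# Sketch of orchestrator-workers: a planner LLM decomposes the task
# dynamically, workers handle each subtask, and the results are synthesized.

def orchestrate(task: str) -> list[str]:
    # Stand-in for an LLM deciding which subtasks this task needs.
    return [f"research {task}", f"draft {task}", f"review {task}"]

def worker(subtask: str) -> str:
    # Stand-in for a specialized LLM completing one subtask.
    return f"done: {subtask}"

def run(task: str) -> str:
    subtasks = orchestrate(task)              # plan decided at runtime
    results = [worker(s) for s in subtasks]   # delegate to workers
    return " | ".join(results)                # synthesize the final output
```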

Evaluator-optimizer: The peer review of workflows, this uses an LLM to generate a response while another LLM evaluates and provides feedback in a loop until it passes muster.
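A minimal evaluator-optimizer sketch. Both the generator and the evaluator are stubs for LLM calls; the evaluator's pass criterion is invented purely to show the feedback loop.

```python
# Sketch of evaluator-optimizer: a generator drafts, an evaluator critiques,
# and the loop repeats until the draft passes (or a round limit is hit).

def generate(prompt: str, feedback: str = "") -> str:
    # Stub generator: "improves" the draft when it receives feedback.
    return prompt + (" with more detail" if feedback else "")

def evaluate(draft: str) -> tuple[bool, str]:
    # Stub evaluator: passes only drafts that include enough detail.
    if "detail" in draft:
        return True, ""
    return False, "needs more detail"

def refine(prompt: str, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):        # cap the loop so it can't run forever
        draft = generate(prompt, feedback)
        ok, feedback = evaluate(draft)
        if ok:
            return draft
    raise RuntimeError("draft never passed evaluation")
```

The round limit matters: without it, a picky evaluator and a stubborn generator will happily burn tokens at each other indefinitely.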

And finally…AI agents! Let's start with the use case. Agents are best for open-ended problems. Not the open-ended problem of your cousin who never pays you back. These are problems where it's difficult or impossible to predict the exact number of required steps. It's the classic situation where you'd normally need a human to monitor the results, assess what else is going on, and then make the best decision for the situation. And while it will still make mistakes, just like that one time in college, the agent acts of its own accord and does its best.
To summarize Anthropic’s take on agents, an agent has the following capabilities:
Understand complex inputs
Engage in reasoning and planning
Reliably use tools
Recover from errors
While the agent is working, it can obtain “ground truth” from the environment to validate its progress and can also pause for human feedback or confirmation to proceed, something that should be mandatory for any critical task or while accessing sensitive information.
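The loop that separates an agent from a workflow can be sketched in a few lines. Everything below is hypothetical: the `decide` stub stands in for the LLM's planning call, the tools are toys, and a real agent would also pause for human approval before any sensitive action.

```python
# Sketch of a minimal agent loop: the LLM picks the next action at each step,
# acts, observes the result ("ground truth" from the environment), and stops
# when it decides it's done. A hard step limit serves as a guardrail.

def decide(goal: str, observations: list[str]) -> str:
    # Stub for the LLM's planning step: choose the next action dynamically.
    if not observations:
        return "search"
    if len(observations) == 1:
        return "summarize"
    return "finish"

TOOLS = {
    "search":    lambda goal: f"search results for {goal}",
    "summarize": lambda goal: f"summary of findings on {goal}",
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    observations = []
    for _ in range(max_steps):          # guardrail: never loop unbounded
        action = decide(goal, observations)
        if action == "finish":
            break
        result = TOOLS[action](goal)    # act, then observe ground truth
        observations.append(result)
    return observations
```

Note that the code path is not predefined: which tools run, and in what order, is decided by the model at each step. That latitude is exactly what makes agents powerful and what makes them riskier than workflows.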

Of course, I’m going to have a security spin on this. When you’re building agentic workflows or agents, it’s critical to have visibility into what is happening in each LLM call and response. Even the most structured workflows aren’t infallible. This is especially important for any public-facing AI applications.
While the risks exist today, the impacts still appear small. But we said similar things about the Internet…and cloud…and well, here we are still cleaning up the mess.
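One cheap way to get that visibility is to wrap every model call so the prompt and response are captured before anything else sees them. This is a sketch, not a product: `fake_llm` is a stub, and a real system would ship the audit trail to a SIEM rather than keep it in memory.

```python
# Sketch of LLM-call visibility: a decorator that logs every prompt and
# response passing through the model, building an audit trail as it goes.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm_audit")

CALL_LOG = []  # in-memory audit trail; send this to your SIEM in practice

def audited(llm_fn):
    def wrapper(prompt: str) -> str:
        response = llm_fn(prompt)
        CALL_LOG.append({"prompt": prompt, "response": response})
        log.info("llm call: %d chars in, %d chars out", len(prompt), len(response))
        return response
    return wrapper

@audited
def fake_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"response to: {prompt}"
```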
There is a lot of potential in deploying AI in your environment, whether you build it yourself or you bring in third-party solutions that aren’t agent-washed. No security team should live in the land of “no” and outright block AI implementation.
It’s about intentional deployment.
Understand the problem you are trying to solve (and I mean REALLY understand it)
Identify the simplest solution (there are no style points awarded)
Understand the security risks and mitigate them accordingly, where it makes sense. Yes, some risks will remain because the level of effort to correct them outweighs the benefits of the implementation…acknowledge that and build compensating controls around them.
Monitor for weirdness, not just security issues. It's still early days for these systems, so keep your head on a swivel while you're driving the bus across a bridge you're still building. It will probably be alright. I trust you.
