AI as a System (Part 5): Agents - When AI Starts Doing Instead of Answering
The Shift from Tools to Systems
This is Part 5 of the AI as a System series.
See the full series here.
So far, the stack has taken shape in layers. Models generate, tools shape the interaction, and CLI environments give AI the ability to take action.
This next layer brings those pieces together: agents.
At this point, AI begins to feel less like something you interact with and more like something that operates on your behalf.
What an Agent Actually Is
An agent is not a specific product or tool. It is a pattern that emerges when you combine reasoning, access, and iteration into a system.
At a high level, an agent is a system that can accept a goal, determine how to achieve it, and take actions until it gets there.
In practice, that shows up as a loop:
goal
↓
plan
↓
act
↓
observe
↓
adjust
↓
repeat
That loop changes the role of the system. Instead of producing a single response, it works toward an outcome over multiple steps.
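The loop above can be sketched in a few lines. This is a toy illustration, not a real agent framework: the "environment" is just a counter and the "planner" is a fixed rule, where a real system would use a model to plan and real tools to act.

```python
# A minimal sketch of the agent loop: plan -> act -> observe -> adjust -> repeat.
def run_agent(goal: int, max_steps: int = 20) -> list[int]:
    state = 0
    history = []
    for _ in range(max_steps):
        # plan: decide the next action from the goal and current state
        action = 1 if state < goal else 0
        if action == 0:          # adjust: goal reached, stop iterating
            break
        state += action          # act: apply the action to the environment
        history.append(state)    # observe: record the result for the next step
    return history

print(run_agent(5))  # → [1, 2, 3, 4, 5]
```

The `max_steps` budget matters even in this toy: without it, a planner that never detects success would loop forever, which is why real agents cap iterations or cost.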
The Key Difference: Iteration
Earlier tools follow a simple interaction pattern. You provide input, the AI responds, and you decide what to do next.
Agent-based systems operate differently. You provide a goal, and the system begins acting, evaluating results, and adjusting its approach as it moves forward.
This feedback loop allows the system to recover from mistakes, refine its approach, and continue progressing without requiring constant direction at each step.
What Agents Are Built From
Agents do not replace the earlier layers. They depend on them.
An agent is built from a model for reasoning, tools that define what actions are possible, an environment where those actions take place, and a control loop that manages iteration. Each of these pieces contributes something essential. Without reasoning, there is no decision-making. Without tools, there is no capability. Without an environment, there is nothing to act on. Without a loop, there is no progress over time.
Removing any one of these elements breaks the pattern.
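One way to make the four components concrete is to give each its own slot in a small structure. Everything below is an illustrative stand-in: the "model" is a rule, the single "tool" is a plain function, and the "environment" is a dict, but the shape mirrors the decomposition above.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    model: Callable[[dict], str]               # reasoning: picks the next tool
    tools: dict[str, Callable[[dict], None]]   # capability: possible actions
    env: dict = field(default_factory=dict)    # environment: what is acted on

    def run(self, steps: int = 5) -> dict:
        for _ in range(steps):                 # control loop: progress over time
            choice = self.model(self.env)      # reason about the current state
            if choice == "done":
                break
            self.tools[choice](self.env)       # act on the environment
        return self.env

# toy usage: count up to 3, then stop
agent = Agent(
    model=lambda env: "increment" if env.get("count", 0) < 3 else "done",
    tools={"increment": lambda env: env.update(count=env.get("count", 0) + 1)},
)
print(agent.run())  # → {'count': 3}
```

Deleting any field breaks the pattern exactly as the text describes: no `model` means no decisions, no `tools` means no capability, no `env` means nothing to act on, and no loop means a single response instead of progress.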
What Agents Can Actually Do
When these pieces come together, agents can handle a wide range of tasks. They can generate and modify code, run terminal commands, query APIs, read and write files, execute workflows, debug errors, and even deploy systems.
Agents are often described as AI that can take action. That description is useful, but incomplete. The more important detail is that those actions are coordinated over time in pursuit of a goal.
Real-World Examples
Early versions of this pattern are already showing up in tools like Claude Code, Aider when used iteratively, Replit Agent, and systems like Devin. These tools accept a goal and work toward it across multiple steps, rather than producing a single response.
The behavior feels different because the system is no longer limited to a single interaction.
Why Agents Feel Different
The interaction model changes in a noticeable way. Instead of directing every step, you begin supervising a process as it unfolds.
That shift changes how work gets done. The focus moves from issuing instructions to defining goals and evaluating outcomes.
The Tradeoffs
Agents introduce new capabilities, but they also introduce new challenges.
They are well-suited for multi-step tasks, reduce the need for manual coordination, and can continue working through a problem over time. At the same time, they can take inefficient paths, misinterpret goals, or introduce unintended changes if left unchecked.
For that reason, most real-world implementations still include human review, scoped permissions, and testing as part of the process.
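Scoped permissions can be as simple as a gate in front of every action: anything outside the allowed set is rejected before it runs, and riskier actions are routed to a human. The action names here are hypothetical, chosen only to illustrate the pattern.

```python
# A sketch of scoped permissions for agent actions (illustrative names only).
ALLOWED_ACTIONS = {"read_file", "run_tests", "write_file"}
REQUIRES_REVIEW = {"write_file"}   # allowed, but flagged for human sign-off

def gate(action: str) -> str:
    if action not in ALLOWED_ACTIONS:
        return "rejected"          # outside the agent's scope: never executed
    if action in REQUIRES_REVIEW:
        return "needs_review"      # paused until a human approves
    return "approved"              # safe to run automatically

print(gate("run_tests"))   # → approved
print(gate("deploy"))      # → rejected
print(gate("write_file"))  # → needs_review
```

The point of the gate is that the agent's reasoning never gets the final word on risky actions; the scope is enforced outside the loop, where a misinterpreted goal cannot widen it.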
The Beginning of “AI Teammates”
Agents are the first point where AI starts to resemble a role within a system, such as a junior engineer, a project assistant, or an operator.
The similarity comes from how they behave. They can take initiative within a defined scope, iterate toward outcomes, and interact directly with systems. The result is something that feels less like a tool and more like a participant in the workflow.
Why This Matters Long-Term
The introduction of agents changes how systems are designed and how teams operate. Coordination begins to shift away from purely human-driven processes, and some of that responsibility moves into the system itself.
This has implications for development workflows, operations, team structure, and overall system design. The focus moves toward defining boundaries, constraints, and goals, rather than specifying every step in advance.
What’s Next
We’ve now covered models, tools, execution through CLI environments, and agents.
One critical layer remains: how AI connects to your actual data.
The Hidden Layer: How AI Uses Your Data (RAG, Embeddings, and Context)
Without that layer, AI operates from general knowledge. With it, AI can work with the data and systems that matter in real-world use.