
AI as a System (Part 1): AI is a Stack

A clear mental model for understanding how modern AI actually works

This is Part 1 of the AI as a System series.

Unless you’ve been keeping up with AI every day for the past few years, trying to move beyond ChatGPT can feel overwhelming.

There’s ChatGPT, Cursor, GitHub Copilot, Grok, Claude, and more. Every tool claims to be “AI-powered,” but they all behave differently. Some answer questions, help plan vacations, or rewrite emails so you sound less annoyed at your coworker. Others autocomplete code or generate entire applications.

So what’s actually different?

AI is Not a Single Product

Most people think of AI as a single product, like ChatGPT. That perspective is similar to thinking of Chrome as the internet. It captures the interface, but not the system behind it.

AI isn’t a single thing. It’s a stack of systems working together.

The AI Stack

At a high level, the AI stack consists of three layers: models, tools, and agents. Each layer plays a different role, and understanding those roles makes the whole system easier to reason about.

Layer 1: Models (The Brain)

At the foundation are the models.

Models are the actual AI systems doing the work.

Examples include GPT from OpenAI, Claude from Anthropic, and Gemini from Google.

Models do not think in the way people do, even though that language is often used. What they actually do is predict the next word or phrase based on patterns learned from large datasets using neural networks. In practical terms, AI is highly advanced pattern prediction.
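To make "pattern prediction" concrete, here is a toy sketch. Real models use neural networks trained on enormous datasets; this example substitutes the simplest possible version, a bigram count over a ten-word corpus, just to show the shape of "predict the next word from what came before." Every name here is illustrative, not any real model's internals.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the large datasets real models train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- a bigram model, the simplest
# possible form of next-word prediction.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

A real model does this with billions of learned parameters instead of a count table, and predicts over whole token vocabularies rather than one word, but the core move is the same: given context, output the most likely continuation.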

Layer 2: Tools (How You Interact With It)

You don’t interact with models directly. You interact with tools that wrap them.

ChatGPT provides a conversational interface. Cursor integrates into an IDE with access to your files. GitHub Copilot focuses on inline code suggestions.

These tools are built on similar underlying models, but they feel different because they control how much context the model sees, what actions it can take, and how results are applied.

That is why ChatGPT answers questions, Cursor edits your codebase, and Copilot behaves more like autocomplete.
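The idea that the same model feels different depending on its wrapper can be sketched in a few lines. Everything below is hypothetical: `call_model` stands in for a request to some model API, and the two tool classes only illustrate how a wrapper decides what context the model sees.

```python
def call_model(prompt: str) -> str:
    """Stand-in for a request to an underlying model API."""
    return f"<model response to: {prompt!r}>"

class ChatTool:
    """A chat-style wrapper: the model sees only the conversation."""
    def __init__(self):
        self.history = []

    def ask(self, message: str) -> str:
        self.history.append(message)
        return call_model("\n".join(self.history))

class EditorTool:
    """An IDE-style wrapper: the model also sees the open file."""
    def __init__(self, file_contents: str):
        self.file_contents = file_contents

    def suggest(self, instruction: str) -> str:
        prompt = f"File:\n{self.file_contents}\n\nTask: {instruction}"
        return call_model(prompt)

# Same call_model underneath -- the wrappers differ only in what
# context they assemble, which is what makes the tools feel different.
chat = ChatTool()
editor = EditorTool("def add(a, b): return a + b")
```

The design point is that the model function never changes; only the prompt assembly does, which is exactly the layer where ChatGPT, Cursor, and Copilot diverge.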

Layer 3: Agents (The Behavior)

Agents are what happens when you move from asking AI for responses to asking it to complete tasks.

An agent takes a goal, breaks it into steps, uses available tools, evaluates the results, and continues iterating until the objective is reached.

This is what enables AI to refactor a codebase, run terminal commands, or build an application from a prompt.
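The loop described above can be sketched directly. This is a minimal illustration, not any real agent framework: the "model" is a hard-coded stub planner, and the tools are one-line lambdas, so the only thing the code demonstrates is the plan-act-evaluate cycle itself.

```python
def run_agent(goal, plan_step, tools, max_steps=10):
    """Loop: plan the next step, act through a tool, record the result,
    and repeat until the planner decides the goal is met."""
    results = []
    for _ in range(max_steps):
        step = plan_step(goal, results)   # model picks the next action
        if step is None:                  # model judges the goal reached
            break
        tool_name, args = step
        results.append(tools[tool_name](args))  # act and feed back
    return results

# Stub planner standing in for a model: two fixed steps, then done.
def plan_step(goal, results):
    steps = [("search", goal), ("write", "summary")]
    return steps[len(results)] if len(results) < len(steps) else None

# Toy tools an agent might be given access to.
tools = {
    "search": lambda q: f"found notes on {q}",
    "write": lambda t: f"wrote {t}",
}

print(run_agent("refactor the parser", plan_step, tools))
```

In a real agent the planner is the model itself, the tools run terminal commands or edit files, and the results it evaluates are actual command output; the control flow, though, is this same loop.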

Why Everything Feels So Inconsistent

If AI has felt inconsistent or unpredictable, it is usually because you are not interacting with a single system.

You are interacting with different models, through different tools, with varying levels of autonomy.

Changing any one of those layers changes the experience.

A More Accurate Way to Think About AI

Instead of asking which AI to use, it helps to think in terms of the system.

Consider which model fits the task, which tool provides the right environment, and how much autonomy the system should have.

Systems Thinking Insight

If you come from a cloud, DevOps, or systems background, this structure should feel familiar.

AI systems are beginning to resemble distributed systems, service layers, and orchestration pipelines. The difference is that instead of services calling APIs directly, models are reasoning about those interactions.

What’s Next

Now that the structure of the stack is clear, the next step is to look at each layer in more detail.