1 The rise of AI agents
This chapter charts the shift from simple LLM chat to tool-using assistants and, ultimately, autonomous agents that can act on a user’s behalf. It clarifies what “agency” means—reasoning, planning, deciding, and acting toward a goal—and distinguishes agents from assistants that pause for user approval at each step. Framing the rest of the book, it focuses on building production-grade, LLM-powered agents that connect to data, APIs, and applications using modern frameworks and patterns so practitioners can progress from prompt tinkering to robust agent architecture.
Agentic thinking is presented as turning goals into actionable task plans and executing them through tools, guided by a Sense → Plan → Act → Learn loop. Assistants can call tools with oversight, whereas agents independently sequence multi-step workflows, requesting feedback only at milestones and operating within guardrails. Tools are defined like functions with clear inputs and outputs, and the Model Context Protocol (MCP) is introduced as a unifying, JSON-RPC–based standard that lets agents discover and use external tool servers consistently. MCP reduces integration friction, normalizes responses, broadens language and platform support, and shifts developer effort from handcrafting tool wrappers to composing with a growing ecosystem of ready-made capabilities.
The chapter organizes agent design into five functional layers: Persona (role and system instructions), Actions & Tools (task execution and support for higher layers), Reasoning & Planning (from single-path to multipath strategies and external planners), Knowledge & Memory (RAG over heterogeneous stores and embeddings), and Evaluation & Feedback (internal learning plus external checks, critics, and guardrails). It then extends to multi-agent systems, outlining three patterns: agent-flow “assembly lines” for linear, role-based handoffs; hub-and-spoke orchestration where a central agent delegates to specialists; and collaborative teams that co-create and critique for the hardest problems, trading efficiency for capability. Together, these concepts establish the foundations for designing, scaling, and governing modern AI agents.
Common patterns for directly communicating with an LLM, or with an LLM that has tools. If you’ve used earlier versions of ChatGPT, you experienced direct interaction with the LLM: no proxy agent or other assistant intervened on your behalf. Today, ChatGPT itself uses plenty of tools to help respond, from web search to coding, making the current version function like an assistant.
Top: an assistant performs one or more tasks on behalf of a user, each of which requires the user’s approval. Bottom: an agent may use multiple tools autonomously, without human approval, to complete a goal.
The four-step process agents use to complete goals: Sense (receive input, a goal or feedback) → Plan (define the task list that completes the goal) → Act (execute the tool defined by the task) → Learn (observe the task’s output and determine whether the goal is complete or the process needs to continue), then loop back to Sense.
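A minimal Python sketch of this loop, assuming hypothetical sense, plan, act, and goal_met helpers that a real agent would back with LLM calls and registered tools:

```python
# Illustrative Sense -> Plan -> Act -> Learn loop; all helpers are hypothetical
# stand-ins for LLM calls and tool execution.

def sense(goal, feedback=None):
    """Combine the original goal with feedback from the previous iteration."""
    return {"goal": goal, "feedback": feedback}

def plan(observation):
    """A real agent would ask the LLM to break the goal into tasks; here the plan is static."""
    return [{"tool": "search", "input": observation["goal"]},
            {"tool": "summarize", "input": "search results"}]

def act(task):
    """Dispatch to the registered tool named in the task (stubbed here)."""
    return f"output of {task['tool']}({task['input']!r})"

def goal_met(results):
    """A real agent would ask the LLM or a critic; here we use a simple count."""
    return len(results) >= 2

def run_agent(goal, max_iterations=5):
    feedback, results = None, []
    for _ in range(max_iterations):           # guardrail against endless looping
        observation = sense(goal, feedback)   # Sense
        for task in plan(observation):        # Plan
            results.append(act(task))         # Act
        if goal_met(results):                 # Learn: is the goal complete?
            return results
        feedback = results[-1]                # Learn: feed results back into Sense
    return results

print(run_agent("write a short report on MCP"))
```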
For an agent to use a tool, that tool must first be registered with the agent in the form of a JSON description/definition. Once the tool is registered, the agent uses that tool in a process not unlike calling a function in Python.
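A hedged example of such a registration, using the common JSON function-calling style; exact field names vary by framework, so treat the schema as an assumption:

```python
# A Python function and the JSON-style description an agent would register for it.
# The schema follows the widespread function-calling convention; adapt field names
# to your framework.

def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"Sunny and 22 degrees in {city}"

get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
        },
        "required": ["city"],
    },
}

# Once registered, the agent emits a call such as
# {"name": "get_weather", "arguments": {"city": "Berlin"}},
# and the runtime invokes the matching Python function:
print(get_weather(**{"city": "Berlin"}))
```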
An agent connects to an MCP server to discover the tools it hosts and the description of how to use each tool. When an MCP server is registered with an agent, the agent internally calls list_tools to find all the tools the server supports and their descriptions. Then, just as with ordinary tool use, it can determine the best way to use those tools based on each tool’s description.
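A sketch of that discovery step, assuming the official MCP Python SDK (the mcp package) and a local stdio-launched server script named my_server.py; both the server name and the exact SDK surface are assumptions to adapt to your setup:

```python
# Discover the tools hosted by an MCP server over stdio (sketch; adjust to your
# installed SDK version and server command).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def discover_tools():
    server = StdioServerParameters(command="python", args=["my_server.py"])  # hypothetical server
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()   # the list_tools call described above
            for tool in result.tools:
                print(tool.name, "-", tool.description)

asyncio.run(discover_tools())
```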
The five functional layers of agents: Persona, Actions & Tools, Reasoning & Planning, Knowledge & Memory, and Evaluation & Feedback
The Persona layer of an agent is the core layer, consisting of the system instructions that define the agent’s role and how it should complete goals and tasks. It may also include guidance on how to reason, plan, and access knowledge and memory.
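A minimal, hypothetical example of what such a persona might look like in practice, expressed as the system message most chat-based frameworks accept:

```python
# Hypothetical persona: system instructions that anchor the agent's role and behavior.
RESEARCH_AGENT_PERSONA = """\
You are a meticulous research agent.
Role: gather, verify, and summarize information on the user's topic.
Process: break each goal into tasks, prefer primary sources, and cite everything.
Constraints: check in at milestones; never fabricate references.
"""

messages = [
    {"role": "system", "content": RESEARCH_AGENT_PERSONA},   # Persona layer
    {"role": "user", "content": "Summarize recent work on agent memory."},
]
print(messages[0]["content"])
```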
The role of Actions & Tools within the agent, and how tools can also help power the other agent layers. Tools are a core extension of the agent and are also fundamental to the functions used by the upper layers.
The Reasoning & Planning layer of agents and how agentic thinking may be augmented. Reasoning can take many forms, from the underlying model powering the agent to prompt engineering and even the use of tools.
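For example, reasoning can be nudged through prompt engineering alone; the planning prompt and helper below are illustrative assumptions rather than a prescribed template:

```python
# A hypothetical planning prompt that augments the model's reasoning by asking it
# to think step by step and name a tool for each task.
PLANNING_PROMPT = """Goal: {goal}

Think step by step. Produce a numbered task list that, executed in order,
completes the goal. For each task, name the tool you would use (or 'none')."""

def build_planning_request(goal: str) -> list[dict]:
    """Build the message list to send to whichever LLM client you use."""
    return [{"role": "user", "content": PLANNING_PROMPT.format(goal=goal)}]

print(build_planning_request("summarize three papers on agent memory"))
```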
The Knowledge & Memory layer and how both knowledge and memory draw on the same common forms of storage. Agent knowledge represents information the LLM was not originally trained on but is later augmented with. Likewise, memories represent past experiences and interactions of the user, the agent, or even other systems.
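A toy sketch of this shared storage, in which knowledge snippets and memories live in one vector store and are retrieved by similarity; the bag-of-words embedding is a stand-in for a real embedding model:

```python
# Knowledge and memories share one store and one retrieval path (illustrative only).
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; replace with a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

store = [
    {"kind": "knowledge", "text": "MCP is a JSON-RPC based protocol for tool access."},
    {"kind": "memory", "text": "The user previously asked for short answers with citations."},
]
for item in store:
    item["vector"] = embed(item["text"])

def retrieve(query: str, k: int = 1):
    """Return the k most similar items, whether they are knowledge or memory."""
    q = embed(query)
    return sorted(store, key=lambda item: cosine(q, item["vector"]), reverse=True)[:k]

print(retrieve("Which protocol gives agents tool access?")[0]["text"])
```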
The Evaluation & Feedback layer and the mechanisms used to provide it, from tools that help evaluate tool use and knowledge retrieval (grounding) and provide feedback, to other agents and workflows that provide similar functionality.
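A minimal sketch of one such mechanism, an external critic that checks grounding before an answer is returned; the string-matching test stands in for an LLM-based critic or guardrail library:

```python
# External critic: approve an answer only if it references the retrieved sources.
def critic(answer: str, sources: list[str]) -> dict:
    grounded = any(src.lower() in answer.lower() for src in sources)
    return {
        "approved": grounded,
        "feedback": None if grounded else "Answer does not reference the retrieved sources.",
    }

sources = ["JSON-RPC", "MCP specification"]
print(critic("MCP builds on JSON-RPC to standardize tool access.", sources))
print(critic("Agents are powered by magic.", sources))
```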
The agent-flow (assembly line) pattern with multiple agents. The flow starts with a planning agent that breaks the goal down into a high-level plan, which is then passed to the research agent. The research agent executes the plan’s research tasks and, once finished, hands off to the content agent, which is responsible for completing the later tasks of the plan, such as writing a paper based on the research.
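A compact sketch of this assembly line, with each agent reduced to a plain function for illustration; in practice each stage would be an LLM-backed agent:

```python
# Agent-flow (assembly line): planning -> research -> content, with linear handoffs.
def planning_agent(goal: str) -> list[str]:
    """Break the goal into a high-level plan (stubbed)."""
    return [f"research: {goal}", f"write paper about: {goal}"]

def research_agent(plan: list[str]) -> dict:
    """Execute the plan's research tasks and pass everything downstream."""
    notes = [f"notes for '{step}'" for step in plan if step.startswith("research")]
    return {"plan": plan, "notes": notes}

def content_agent(handoff: dict) -> str:
    """Complete the later tasks of the plan, such as writing the paper."""
    return "DRAFT PAPER\n" + "\n".join(handoff["notes"])

goal = "the impact of MCP on agent ecosystems"
print(content_agent(research_agent(planning_agent(goal))))   # linear handoff
```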
The agent orchestration pattern, often referred to as hub-and-spoke. In this pattern, a central agent acts as the hub, or orchestrator, and delegates tasks to each of the worker agents. Worker agents complete their respective tasks and return the results to the hub, which determines when the goal is complete and outputs the results.
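A minimal sketch of the hub-and-spoke pattern, with hypothetical worker functions standing in for specialist agents:

```python
# Hub-and-spoke orchestration: the hub plans, delegates to workers, and assembles the output.
WORKERS = {
    "research": lambda task: f"findings for {task!r}",
    "writing": lambda task: f"section drafted for {task!r}",
}

def hub(goal: str) -> str:
    tasks = [("research", goal), ("writing", goal)]             # the hub's plan
    results = [WORKERS[name](task) for name, task in tasks]     # delegate to each spoke
    return "\n".join(results)                                   # hub decides the goal is complete

print(hub("compare single-agent and multi-agent designs"))
```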
A team of collaborating agents. The agent collaboration pattern lets agents interact as peers, with back-and-forth communication from one agent to another. In some cases, a manager agent may act as a user proxy and help keep the collaborating agents on track.
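A toy sketch of peer collaboration, with a manager agent acting as the stop condition; all three agents are hypothetical stand-ins for LLM-backed peers:

```python
# Collaboration pattern: writer and reviewer exchange messages; a manager decides when to stop.
def writer(history: list[str]) -> str:
    return f"Draft v{len(history) // 2 + 1}: revised based on prior feedback."

def reviewer(history: list[str]) -> str:
    return f"Critique of '{history[-1]}': tighten the introduction."

def manager(history: list[str], max_rounds: int = 3) -> bool:
    """Acts as a user proxy: stop after a fixed number of back-and-forth rounds."""
    return len(history) >= 2 * max_rounds

history: list[str] = []
while not manager(history):
    history.append(writer(history))     # writer proposes
    history.append(reviewer(history))   # reviewer critiques
print("\n".join(history))
```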
Summary
- An AI agent has agency, the ability to make decisions, undertake tasks, and act autonomously on behalf of someone or something, powered by large language models connected to tools, memory, and planning capabilities.
- An agent’s agency gives it the ability to operate through an autonomous loop known as the Sense-Plan-Act-Learn process.
- Assistants use tools to perform single tasks with user approval, while agents have the agency to reason, plan, and execute multiple tasks independently to achieve higher-level goals.
- The four patterns in which we see LLMs being used are: direct user interaction with LLMs, assistant proxy (reformulating requests), assistant (tool use with approval), and autonomous agent (independent planning and execution).
- Agents receive goals, load instructions, reason out plans, identify required tools, execute steps in sequence, and return results, all while making autonomous decisions.
- Agents use actions and tools (functions that wrap API calls, databases, and external resources) to act beyond their code base and interact with external systems.
- Model Context Protocol (MCP), developed by Anthropic in November 2024, serves as the "USB-C for LLMs," providing a standardized protocol that allows agents to connect to MCP servers, discover available tools, and use them seamlessly without custom integration code.
- MCP addresses inconsistent tool access, unreliable data responses, fragmented integrations, code extensibility limitations, implementation complexity, and provides easy-to-build standardized servers.
- AI agent development can be expressed in terms of five functional layers: Persona, Actions & Tools, Reasoning & Planning, Knowledge & Memory, and Evaluation & Feedback.
- The Persona layer represents the core role/personality and instructions an agent will use to undertake goal and task completion.
- The Actions & Tools layer provides the agent with the functionality to interact with and manipulate the external world.
- The Reasoning & Planning layer enhances an agent's ability to reason and plan through complex goals that may require trial-and-error iteration.
- The Knowledge & Memory layer represents external sources of information that can augment the agent’s context with external knowledge or recall past experiences (memories) from previous interactions.
- The Evaluation & Feedback layer represents external mechanisms that can improve response accuracy, encourage goal/task learning, and increase confidence in the overall agent output.
- Multi-agent systems include patterns such as Agent-flow assembly lines (sequential specialized workers), agent orchestration hub-and-spoke (central coordinator with specialized workers), and agent collaboration teams (agents communicating and working together with defined roles).
- The Agent-Flow pattern (sequential assembly line) is the most straightforward multi-agent implementation where specialized agents work sequentially like an assembly line, ideal for well-defined multi-step tasks with designated roles.
- The Agent Orchestration pattern is a hub-and-spoke model where a primary agent plans and coordinates with specialized worker agents, transforming single-agent tool use into multi-agent delegation.
- The Agent Collaboration pattern represents agents in a team-based approach. Agents communicate with each other, provide feedback and criticism, and can solve complex problems through collective intelligence, though with higher computational costs and latency.
- AI agents represent a fundamental shift from traditional programming to natural language-based interfaces, enabling complex workflow automation from prompt engineering to production-ready agent architecture.
FAQ
What is an AI agent, and how does it differ from a classic AI assistant or chatbot?
An AI agent has agency: it can reason, plan, make decisions, and execute tasks autonomously on a user’s behalf. An assistant (like a tool-using chatbot) can call tools but typically requires user approval for each step and does not independently complete multi-step goals.
How have LLM interaction patterns evolved from direct chats to assistants and then to agents?
Early use involved direct LLM conversations. Next came assistants: LLMs augmented with tool use (e.g., web search, image generation) that ask for approval before calling tools. Agents are the latest step: they autonomously chain tools and decisions to achieve higher-level goals without requiring approval for each action.
What does “agentic thinking” mean, and what is the Sense → Plan → Act → Learn loop?
Agentic thinking is an agent’s internal process for turning goals into actions. The loop is: Sense (receive a goal or feedback) → Plan (break the goal into tasks) → Act (invoke tools/actions) → Learn (evaluate results, adjust, and continue until the goal is met).
How do agents use tools in practice?
Tools are function-like capabilities described to the agent (often via JSON). Once registered, the agent chooses when and how to call them, passes inputs, receives outputs, and can chain tool calls. Tools typically wrap APIs, databases, external apps, or services.
What is the Model Context Protocol (MCP), and why does it matter?
MCP is an open standard (from Anthropic, based on JSON-RPC 2.0) that lets agents and LLMs connect to external services consistently, securely, and efficiently. It standardizes how tools are described and accessed, enabling plug-and-play integration across codebases and languages—often called the “USB-C for LLMs and agents.”
How does an agent discover and use tools exposed by an MCP server?
The agent registers the MCP server, calls list_tools to discover available tools and their descriptions, selects the appropriate tools for the goal, executes them, and iterates using its Sense → Plan → Act → Learn loop.
What developer pain points does MCP address?
- Inconsistent tool access across models
- Unreliable/heterogeneous response formats
- Fragmented, duplicated integration code
- Limited language extensibility (now any language can implement tools)
- High implementation overhead (many reusable servers exist)