Overview

1 The rise of AI agents

AI agents are presented as entities with genuine agency: they can make decisions, plan, and act on a user’s behalf to achieve goals, going beyond classic assistants or chatbots that await step-by-step approval. The chapter clarifies the evolution from direct LLM interactions and tool-using assistants to fully agentic systems, and sets the book’s aim: to help readers build practical, production-ready agents powered by modern LLMs, moving from prompt tinkering to architecting robust agent workflows connected to real tools, data, and applications.

Agentic behavior is framed around turning goals into tasks and executing them via tools within a Sense–Plan–Act–Learn loop. Assistants may call tools but typically require user approval for each step; agents reason, plan, and act autonomously, requesting feedback only at milestones. Tools are defined and registered so agents can invoke them like functions, often wrapping APIs, databases, or external apps. The Model Context Protocol (MCP), an open JSON-RPC–based standard, streamlines this by letting agents discover and use toolsets hosted on MCP servers, standardizing interfaces and responses across providers and languages. This “USB‑C for LLMs” addresses inconsistent tool formats, fragmented integrations, and limited extensibility, making it far easier to compose rich agent capabilities without bespoke glue code.
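To make the “tools as functions” idea concrete, here is a minimal sketch that is not tied to any particular framework: the tool name, JSON schema, and dispatch logic below are illustrative placeholders for what an agent framework would generate or require.

```python
import json

# Illustrative JSON definition registered with the agent so the LLM knows the tool exists.
search_flights_definition = {
    "name": "search_flights",
    "description": "Search for flights between two cities on a given date.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
            "date": {"type": "string", "description": "ISO date, e.g. 2025-06-01"},
        },
        "required": ["origin", "destination", "date"],
    },
}

# The Python function the definition wraps; a real tool would call an actual API here.
def search_flights(origin: str, destination: str, date: str) -> list:
    return [{"flight": "AC123", "origin": origin, "destination": destination, "date": date}]

TOOLS = {"search_flights": search_flights}

# When the model responds with a tool call, the agent dispatches it like a function call.
tool_call = {
    "name": "search_flights",
    "arguments": json.dumps({"origin": "YYZ", "destination": "YYC", "date": "2025-06-01"}),
}
result = TOOLS[tool_call["name"]](**json.loads(tool_call["arguments"]))
print(result)
```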

The chapter also outlines the foundational layers that shape capable agents: Persona (system role and behavior), Actions & Tools (the operational capabilities), Reasoning & Planning (controlling how agents think, from single-path to multipath strategies and external planners), Knowledge & Memory (RAG-driven retrieval from documents, databases, graphs, and embeddings), and Evaluation & Feedback (guardrails, critics, and workflows that harden outputs). It then introduces multi-agent patterns: agent-flow assembly lines for well-defined pipelines, hub-and-spoke orchestrations where a central agent delegates to specialists, and collaborative teams that exchange critiques to tackle complex problems—each with trade-offs in control, efficiency, and cost. Together, these concepts provide a blueprint for designing single- and multi-agent systems with modern frameworks and standardized tool integrations.

Common patterns for communicating directly with an LLM or with an LLM that uses tools. If you’ve used earlier versions of ChatGPT, you experienced direct interaction with the LLM; no proxy agent or other assistant interjected on your behalf. Today, ChatGPT itself uses plenty of tools to help it respond, from web search to coding, making the current version function like an assistant.
Top: an assistant performs one or more tasks on behalf of a user, where each task requires approval by the user. Bottom: an agent may use multiple tools autonomously, without human approval, to complete a goal.
The four-step process agents use to complete goals: Sense (receive input, a goal or feedback) → Plan (define the task list that completes the goal) → Act (execute the tool defined by the task) → Learn (observe the output of the task and determine whether the goal is complete or the process needs to continue).
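A minimal sketch of that loop in Python; plan(), act(), and goal_met() are hypothetical stand-ins for LLM calls and tool execution, and the task list is illustrative.

```python
def sense(goal, feedback=None):
    """Combine the original goal with any feedback from the previous iteration."""
    return {"goal": goal, "feedback": feedback}

def plan(observation):
    """Stand-in for asking the LLM to turn the goal (and feedback) into an ordered task list."""
    return [{"task": "research topic", "tool": "web_search"},
            {"task": "write summary", "tool": "write_text"}]

def act(task):
    """Stand-in for executing the tool the task names and returning its output."""
    return f"output of {task['tool']} for '{task['task']}'"

def goal_met(results):
    """Stand-in for deciding (often via the LLM) whether the results satisfy the goal."""
    return len(results) >= 2

def run_agent(goal, max_iterations=5):
    feedback, results = None, []
    for _ in range(max_iterations):
        observation = sense(goal, feedback)   # Sense
        for task in plan(observation):        # Plan
            results.append(act(task))         # Act
        if goal_met(results):                 # Learn: goal complete, stop
            return results
        feedback = "goal not yet complete; revise the plan"  # Learn: iterate
    return results

print(run_agent("Write a short report on AI agents"))
```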
For an agent to use a tool, that tool must first be registered with the agent in the form of a JSON description/definition. Once the tool is registered, the agent uses that tool in a process not unlike calling a function in Python.
An agent connects to an MCP server to discover the tools it hosts and the description of how to use each tool. When an MCP server is registered with an agent, the agent internally calls list_tools to find all the tools the server supports and their descriptions. Then, just as with ordinary tool use, it can determine the best way to use those tools based on each tool’s description.
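A minimal sketch of that discovery-and-invocation flow, assuming the official mcp Python SDK; the server command and tool name are placeholders, and exact imports and signatures may vary by SDK version.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch and connect to an MCP server over stdio (the command is a placeholder).
    server = StdioServerParameters(command="python", args=["my_mcp_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discovery: list the tools the server hosts and their descriptions.
            listing = await session.list_tools()
            for tool in listing.tools:
                print(tool.name, "-", tool.description)

            # Invocation: call a discovered tool by name with arguments.
            result = await session.call_tool("web_search", arguments={"query": "AI agents"})
            print(result)

asyncio.run(main())
```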
The five functional layers of agents: Persona, Actions & Tools, Reasoning & Planning, Knowledge & Memory, and Evaluation & Feedback.
The Persona layer of an agent is the core layer, consisting of the system instructions that define the role of the agent and how it should complete goals and tasks. It may also include guidance on how to reason and plan and how to access knowledge and memory.
The role of Actions & Tools within the agent, and how tools can also help power the other agent layers. Tools are a core extension of agents and are also fundamental to the functions used in the upper agent layers.
The Reasoning & Planning layer of agents and how agentic thinking may be augmented. Reasoning may come in many forms: from the underlying model powering the agent, to prompt engineering, and even the use of tools.
The Knowledge & Memory layer and how it interacts with and uses the same common forms of storage across both types. Agent knowledge represents information the LLM was not initially trained with but that is added later to augment its context. Likewise, memories represent past experiences and interactions of the user, the agent, or even other systems.
The Evaluation & Feedback layer and the mechanisms used to provide it: from tools that help evaluate tool use and knowledge retrieval (grounding) and provide feedback, to other agents and workflows that provide similar functionality.
The agent-flow (assembly line) pattern with multiple agents. The flow starts with a planning agent that breaks the goal into a high-level plan, which is then passed to the research agent. The research agent executes the research tasks in the plan and, after completing them, passes its work to the content agent, which is responsible for completing the later tasks of the plan, such as writing a paper based on the research.
The agent orchestration pattern, often referred to as hub-and-spoke. In this pattern, a central agent acts as the hub or orchestrator, delegating tasks to each of the worker agents. Worker agents complete their respective tasks and return the results to the hub, which determines when the goal is complete and outputs the results.
A team of collaborative agents. The agent collaboration pattern lets agents interact as peers, with back-and-forth communication from one agent to another. In some cases, a manager agent may act as a user proxy and help keep collaborating agents on track.

Summary

  • An AI agent has agency, the ability to make decisions, undertake tasks, and act autonomously on behalf of someone or something, powered by large language models connected to tools, memory, and planning capabilities.
  • An agent’s agency gives it the ability to operate through an autonomous loop called the Sense-Plan-Act-Learn process.
  • Assistants use tools to perform single tasks with user approval, while agents have the agency to reason, plan, and execute multiple tasks independently to achieve higher-level goals.
  • The four common patterns of LLM use are: direct user interaction with LLMs, assistant proxy (reformulating requests), assistant (tool use with approval), and autonomous agent (independent planning and execution).
  • Agents receive goals, load instructions, reason out plans, identify required tools, execute steps in sequence, and return results, all while making autonomous decisions.
  • Agents use actions and tools, functions (extensions that wrap API calls, databases, and external resources) that let them act beyond their code base and interact with external systems.
  • Model Context Protocol (MCP), developed by Anthropic in November 2024, serves as the "USB-C for LLMs," providing a standardized protocol that allows agents to connect to MCP servers, discover available tools, and use them seamlessly without custom integration code.
  • MCP addresses inconsistent tool access, unreliable data responses, fragmented integrations, code extensibility limitations, implementation complexity, and provides easy-to-build standardized servers.
  • AI Agent development can be expressed in terms of five functional layers: Persona, Actions & Tools, Reasoning & Planning, Knowledge & Memory, and Evaluation & Feedback.
  • The Persona layer represents the core role/personality and instructions an agent will use to undertake goal and task completion.
  • The Actions & Tools layer provides the agent with the functionality to interact with and manipulate the external world.
  • The Reasoning & Planning layer enhances an agent's ability to reason and plan through complex goals that may require trial-and-error iteration.
  • The Knowledge & Memory layer represents external sources of information that can augment the agent’s context with external knowledge or relate past experiences (memories) of previous interactions.
  • The Evaluation & Feedback layer represents external agent mechanisms that can improve response accuracy, encourage goal/task learning, and increase confidence in overall agent output.
  • Multi-agent systems include patterns such as Agent-flow assembly lines (sequential specialized workers), agent orchestration hub-and-spoke (central coordinator with specialized workers), and agent collaboration teams (agents communicating and working together with defined roles).
  • The Agent-Flow pattern (sequential assembly line) is the most straightforward multi-agent implementation where specialized agents work sequentially like an assembly line, ideal for well-defined multi-step tasks with designated roles.
  • The Agent Orchestration pattern is a hub-and-spoke model where a primary agent plans and coordinates with specialized worker agents, transforming single-agent tool use into multi-agent delegation.
  • The Agent Collaboration pattern represents agents in a team-based approach. Agents communicate with each other, provide feedback and criticism, and can solve complex problems through collective intelligence, though with higher computational costs and latency.
  • AI agents represent a fundamental shift from traditional programming to natural language-based interfaces, enabling complex workflow automation from prompt engineering to production-ready agent architecture.

FAQ

What is an AI agent, and how is it different from an assistant or basic LLM chatbot?
An AI agent has agency: it can make decisions, plan, and act on your behalf to complete multi-step goals. Assistants (tool-using LLMs) can call tools but typically require user approval for each action, whereas agents can autonomously choose and execute multiple tool calls to achieve a goal. A basic LLM chatbot simply replies to prompts without tool use or autonomous planning.
How do agents turn a goal into tasks and tools?
Agents decompose a user goal into a sequence of tasks, each mapped to a tool function. Examples:
  • Create an image → task: create image → tool: create_image
  • Travel to Calgary → tasks: search flights, book flights, book hotels, book transportation → tools: search_flights, book_flights, book_hotels, book_transportation
  • Buy a computer → tasks: search, compare, order → tools: web_search, web_search, order
Tools can be very specific (create_image) or more general (web_search) and reused across tasks.
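One way to picture such a decomposition as data; a sketch only, reusing the illustrative task and tool names from the travel example above.

```python
# A goal decomposed into ordered tasks, each mapped to a tool the agent can call.
travel_plan = [
    {"task": "search flights",      "tool": "search_flights"},
    {"task": "book flights",        "tool": "book_flights"},
    {"task": "book hotels",         "tool": "book_hotels"},
    {"task": "book transportation", "tool": "book_transportation"},
]

for step in travel_plan:
    print(f"{step['task']} -> call {step['tool']}()")
```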
What is the Sense → Plan → Act → Learn loop in agents?
It’s the internal cycle agents use to reach goals:
  • Sense: receive a goal or feedback
  • Plan: break the goal into a task list and choose tools
  • Act: execute the selected tool(s)
  • Learn: evaluate results, decide if the goal is met, or iterate with revised plans
How do agents use tools in practice?
Tools are function-like capabilities the agent can call. They are:
  • Registered with the agent via a JSON definition (inputs/outputs and description)
  • Often wrappers over APIs, databases, or external apps
  • Invoked by the agent much like calling a function
Frameworks let you register tools (e.g., via decorators), as the sketch below illustrates. Historically, tools had to live in the agent’s codebase; modern protocols (like MCP) let agents use tools hosted externally.
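Many frameworks expose registration as a decorator. A framework-agnostic sketch of the idea follows; the registry and decorator here are illustrative, not any specific library’s API.

```python
import inspect

TOOL_REGISTRY: dict = {}

def tool(func):
    """Register a function as a tool, capturing its signature and docstring as the description."""
    TOOL_REGISTRY[func.__name__] = {
        "function": func,
        "description": func.__doc__ or "",
        "signature": str(inspect.signature(func)),
    }
    return func

@tool
def get_weather(city: str) -> str:
    """Return a short weather report for a city."""
    return f"Sunny and 22C in {city}"  # stand-in for a real API call

# The agent can now look tools up by name and invoke them like functions.
print(TOOL_REGISTRY["get_weather"]["function"]("Calgary"))
```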
What is the Model Context Protocol (MCP) and why does it matter?
MCP (from Anthropic, based on JSON-RPC 2.0) is an open standard that lets agents/LLMs connect to external services consistently and securely, often described as “USB‑C for LLMs.” It solves common issues:
  • Inconsistent tool access across LLMs
  • Unreliable/variable response formats
  • Fragmented, ad hoc integrations
  • Limited code extensibility (locked to a language/runtime)
  • Complexity of tool implementation
  • Difficulty building/packaging tools (MCP servers make it easy)
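Because MCP messages follow JSON-RPC 2.0, tool discovery is just a small JSON payload on the wire. A sketch of what such an exchange might look like; tools/list is the MCP method name, while the tool name, description, and schema in the response are illustrative.

```python
import json

# JSON-RPC 2.0 request asking an MCP server to list the tools it hosts.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# An illustrative response: the server describes each tool's name, purpose, and inputs.
example_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "web_search",
                "description": "Search the web and return the top results",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

print(json.dumps(list_tools_request, indent=2))
```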
How does an agent discover and invoke tools via MCP?
Typical flow:
  • Run and register an MCP server with the agent
  • The agent calls list_tools to discover available tools and their descriptions
  • The agent selects an appropriate tool, executes it, observes results, and adapts its plan as needed
This removes the need to hand-code most tool integrations inside the agent.
What are the five functional layers of an agent?
The chapter presents five layers:
  • Persona: the system instructions/role defining behavior
  • Actions & Tools: capabilities the agent can execute
  • Reasoning & Planning: how the agent thinks and sequences tasks
  • Knowledge & Memory: retrieving and storing context beyond the base model
  • Evaluation & Feedback: external checks, guardrails, and critiques
The core layers most agents need are Persona, Actions & Tools, and Reasoning & Planning.
What is an agent Persona and how do you create one?
The Persona is the base role and guidance for the agent (its “system prompt”). It can include background, role (e.g., coder, writer), tone, and how to reason/plan or access memory. You can create personas by:
  • Handcrafting
  • Using an LLM to assist
  • Data-driven methods, including evolutionary techniques
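In practice, the persona usually becomes the system message sent with every model call. A minimal sketch; the wording and the chat-style message structure are illustrative.

```python
# The persona: role, behavior, and guidance on when to defer to the user.
persona = (
    "You are a meticulous travel-planning agent. "
    "Break each goal into tasks, prefer the registered booking tools over guessing, "
    "and ask the user for confirmation only at major milestones such as payment."
)

# The persona is passed as the system message on every request to the model.
messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "Plan a three-day trip to Calgary in June."},
]
```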
How do agents handle knowledge and memory?
Agents use retrieval-augmented generation (RAG) to fetch relevant knowledge and memories that augment context while conserving tokens. Structures can be:
  • Unified or hybrid (mixing retrieval forms)
  • Backed by PDFs, relational/object/document databases, graph stores, keyword search
  • Powered by dense embeddings and semantic similarity search (vectors)
Memories can also be simple lists capturing past interactions or experiences.
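A minimal sketch of embedding-based retrieval; embed() is a stand-in for a real embedding model, and the documents are illustrative.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model; unlike a real model, these vectors
    are not semantically meaningful, they only make the pipeline runnable."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).normal(size=384)

# Toy in-memory store of knowledge and memory snippets with their embeddings.
documents = [
    "Refund policy: customers may return items within 30 days.",
    "Memory: the user previously booked a flight to Calgary in March.",
    "Shipping takes 5-7 business days for standard orders.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved snippets would be injected into the agent's context (RAG).
print(retrieve("When did the user travel to Calgary?"))
```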
What multi-agent patterns does the chapter introduce, and when should I use them?
Three patterns:
  • Agent-flow (assembly line): sequential handoff between specialized agents. Easiest to implement and control; ideal for well-defined multi-step tasks.
  • Orchestration (hub-and-spoke): a central orchestrator delegates work to specialist agents. Keeps I/O centralized and scales up from single-agent setups; worker feedback is more constrained.
  • Collaboration (teams of agents): peer agents communicate and critique each other. Best for very complex, open-ended problems and idea generation; can be chatty, repetitive, and costlier.
You can also mix patterns as needed.
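A minimal sketch of the agent-flow (assembly line) pattern, with each “agent” reduced to a plain function standing in for an LLM-backed worker; the names and outputs are illustrative.

```python
def planning_agent(goal: str) -> list:
    """Break the goal into a high-level plan of ordered steps."""
    return [f"research: {goal}", f"write report on: {goal}"]

def research_agent(plan: list) -> dict:
    """Execute the research steps of the plan and collect findings."""
    findings = [f"findings for '{step}'" for step in plan if step.startswith("research")]
    return {"plan": plan, "findings": findings}

def content_agent(work: dict) -> str:
    """Complete the later steps of the plan, such as writing a report from the research."""
    return "DRAFT REPORT\n" + "\n".join(work["findings"])

# Sequential handoff: planner -> researcher -> writer.
goal = "the state of AI agents"
print(content_agent(research_agent(planning_agent(goal))))
```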
