1 Understanding agentic applications
This chapter introduces agentic applications as systems that wrap large language models with tools, memory, and autonomous reasoning so they can pursue goals through multi-step loops rather than single prompt-response exchanges. It distinguishes fully autonomous AI agents—where the model directs its own reasoning and tool use—from agentic workflows, where developers define a controllable graph of steps with optional dynamic routing. The motivation is pragmatic: real work spans judgment-heavy, multi-stage processes, and well-designed agentic systems can compress hours of manual coordination into minutes, while poorly designed ones waste tokens and erode trust. The book adopts CrewAI to teach both the practical primitives and the design mindset needed to build these systems.
The core building block is the augmented LLM: retrieval for external knowledge, tools for acting in the world, and memory for state. Function calling lets models request tool executions in structured form, turning text predictors into capable operators. The chapter surveys proven design patterns—prompt chaining with validation gates, routing to specialized handlers, parallelization via sectioning and voting, orchestrator‑workers for dynamic delegation, and evaluator‑optimizer loops for reflection—and advises choosing the simplest combination that meets requirements, usually starting with workflows and adding autonomy selectively. CrewAI’s primitives map cleanly to these patterns: agents (defined by role, goal, backstory), tasks with clear success criteria and structured outputs, tools with focused purposes, plus crews and flows that combine controlled logic with pockets of agentic reasoning. Emphasis is placed on small, focused agents with limited tool sets and on investing heavily in task design.
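As a minimal illustration of one of these patterns, prompt chaining with a validation gate can be sketched in plain Python. The `call_llm` function below is a hypothetical stand-in for any model API; it returns a canned outline so the sketch runs without an API key.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; returns a canned
    # outline so the example runs offline.
    return "1. Intro\n2. Body\n3. Conclusion"

def passes_gate(outline: str) -> bool:
    # Validation gate: only proceed if the outline has at least 3 sections.
    return len([line for line in outline.splitlines() if line.strip()]) >= 3

def chained_draft(topic: str) -> str:
    # Step 1 of the chain: produce an intermediate artifact.
    outline = call_llm(f"Outline an article about {topic}.")
    # Gate: fail fast instead of letting a bad intermediate compound.
    if not passes_gate(outline):
        raise ValueError("outline failed validation gate")
    # Step 2 consumes the validated intermediate.
    return call_llm(f"Write the article following this outline:\n{outline}")
```

The gate is what distinguishes chaining from simply concatenating calls: a bad intermediate is caught before it can poison the next step.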
Shipping agentic applications is hard: errors compound across steps, token costs rise quickly, evaluation is nondeterministic and must assess intermediates, human oversight is essential for impactful actions, and long contexts degrade model performance, making context management a first-class concern. The recommended progression is incremental—single call, then augmented LLM, then workflow, and only then multi‑agent—while monitoring reliability, retries, and cost and choosing model tiers appropriately. The chapter closes by positioning CrewAI’s crews and flows as practical orchestration mechanisms and by introducing the Model Context Protocol, which CrewAI can both consume and expose, enabling standardized connections to external capabilities and broader interoperability.
At the heart of each AI agent is the agent loop, in which the LLM reasons, calls tools at will, and decides when to stop.
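That loop can be sketched in a few lines of plain Python. The stubbed model and tool below are illustrative stand-ins, not CrewAI or any vendor API:

```python
def stub_llm(messages):
    # Stubbed model: requests a tool on the first turn, then finishes.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "args": {"query": "CrewAI"}}
    return {"answer": "CrewAI is a multi-agent framework."}

TOOLS = {"search": lambda query: f"results for {query}"}

def agent_loop(prompt, max_steps=5):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):  # cap steps so the loop always terminates
        decision = stub_llm(messages)
        if "answer" in decision:  # the model decides when to stop
            return decision["answer"]
        # Execute the requested tool and feed the observation back in.
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")
```

Note the step budget: production agent loops always bound iterations, since an unconstrained loop can burn tokens indefinitely.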
In an agentic workflow, the system follows code paths predefined by the developer. Those steps can include arbitrary logic expressed in code, LLM calls, and dynamic routing based on the results of previous steps.
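In code, such a workflow is a fixed sequence of steps with one routing decision. This is a sketch; `classify` stands in for an LLM call, and the handler names are illustrative:

```python
def classify(ticket: str) -> str:
    # Stand-in for an LLM classification call.
    return "refund" if "refund" in ticket.lower() else "general"

def handle_refund(ticket: str) -> str:
    return "routed to billing"

def handle_general(ticket: str) -> str:
    return "routed to support"

HANDLERS = {"refund": handle_refund, "general": handle_general}

def workflow(ticket: str) -> str:
    # Predefined path: classify, then route to a specialized handler.
    label = classify(ticket)
    return HANDLERS[label](ticket)
```

The control flow is fully the developer's: the model only influences which branch runs, never whether the branches exist.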
The application sends the user's prompt along with tool definitions to the LLM. Instead of answering directly, the LLM returns a structured tool call. The application executes the tool and sends the result back, and the LLM produces its final response. The LLM never executes tools itself; it only requests them.
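That round trip can be sketched with JSON-shaped messages. The `model` function and the message format below are simplified stand-ins, not a specific vendor's API:

```python
import json

def model(messages, tools):
    # Stand-in model: emits a structured tool call first, then an answer.
    if messages[-1]["role"] == "user":
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"city": "Paris"})}}
    return {"content": "It is 18 degrees in Paris."}

def get_weather(city: str) -> str:
    return "18 degrees"  # canned result so the sketch runs offline

def run(prompt: str) -> str:
    tools = [{"name": "get_weather", "parameters": {"city": "string"}}]
    messages = [{"role": "user", "content": prompt}]
    reply = model(messages, tools)
    if "tool_call" in reply:
        call = reply["tool_call"]
        # The application, not the model, executes the requested tool.
        result = get_weather(**json.loads(call["arguments"]))
        messages.append({"role": "tool", "name": call["name"],
                         "content": result})
        reply = model(messages, tools)
    return reply["content"]
```

The key point the sketch makes concrete: the model only emits a description of the call, and the application owns execution, which is where guardrails and approvals live.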
Each agent is initialized with a role, goal, and a compelling backstory.
The documentation writer agent has a well-defined role, a clearly laid out goal, and a compelling backstory. It has access to three tools: one to search the web for a specific query, one to summarize a snippet of text, and one to format any given text as markdown to create a nice document.
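Such an agent could be modeled as follows. The dataclasses are a plain-Python sketch mirroring the shape of CrewAI's agent definition (role, goal, backstory, tools), not the library's actual API, and the tool bodies are placeholders:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    func: Callable[[str], str]

@dataclass
class Agent:
    role: str
    goal: str
    backstory: str
    tools: list = field(default_factory=list)

doc_writer = Agent(
    role="Documentation Writer",
    goal="Produce clear, well-structured markdown documentation",
    backstory="A senior technical writer who distills research into crisp docs.",
    tools=[
        Tool("search_web", lambda q: f"top results for {q!r}"),       # placeholder
        Tool("summarize", lambda text: text[:60]),                    # placeholder
        Tool("format_markdown", lambda text: f"# Document\n\n{text}"),  # placeholder
    ],
)
```

Note the deliberately small tool set: three focused tools, each with one purpose, in line with the chapter's advice on limited tool sets per agent.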
A crew is like a cross-functional team that works on a given set of tasks until they are completed.
An example flow generates a book from a topic given as input and returns a link to the generated PDF. It contains two crews executed in two different steps of the workflow, a shared state, parallel execution, and dynamic routing based on the output of a crew.
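The mechanics of such a flow can be sketched with `asyncio`. All names here are hypothetical, and the crews are stubbed with canned outputs; the point is the shape of the flow, not a real CrewAI program:

```python
import asyncio

async def outline_crew(state):
    # First step: a stubbed crew writes an outline into shared state
    # and returns a label used for routing.
    state["outline"] = ["Intro", "Chapter 1", "Conclusion"]
    return "short" if len(state["outline"]) < 5 else "long"

async def write_chapter(state, title):
    # Second stubbed crew: drafts one chapter.
    return f"{title}: draft text"

async def book_flow(topic):
    state = {"topic": topic}           # shared state across steps
    route = await outline_crew(state)  # dynamic routing on crew output
    style = "concise" if route == "short" else "detailed"
    # Parallel execution: draft all chapters concurrently.
    chapters = await asyncio.gather(
        *(write_chapter(state, t) for t in state["outline"])
    )
    state["chapters"] = chapters
    state["pdf_link"] = f"/books/{topic}-{style}.pdf"
    return state["pdf_link"]
```

The shared `state` dictionary plays the role of the flow's state object: every step reads from and writes to it, so later steps can build on earlier results.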
Summary
- Agentic AI turns a model that predicts text into a system that gets work done. An AI agent reasons, calls tools, observes results, and loops until the task is complete. Agentic workflows follow developer-defined code paths and trade flexibility for predictability.
- The augmented LLM is the atomic building block of every agentic system: a language model enhanced with retrieval, tools, and memory.
- Five recurring design patterns cover most agentic architectures: prompt chaining, routing, parallelization, orchestrator-workers, and evaluator-optimizer. They sit on a spectrum from simple to complex, and real applications often combine several.
- Production agentic systems face compounding errors across steps, high token costs, non-deterministic evaluation, the need for human oversight, and context degradation over long interactions.
- CrewAI organizes agentic systems around agents (defined by role, goal, and backstory), tasks, tools, crews, and flows. Crews handle multi-agent collaboration, while flows give explicit control over execution order and logic. MCP connects agents to external capabilities through a standardized protocol.