Overview

1 Introduction to Search and Optimization

This chapter introduces search and optimization as pervasive ideas that shape behaviors in nature, decisions in daily life, and operations across industries. It positions “search” as the systematic traversal of states and “optimization” as the pursuit of optimal or near‑optimal solutions within feasible constraints, while cautioning that the payoff of sophisticated methods depends on problem scale and implementation costs. To build practical skill, the book moves from small, hand-solvable “toy” problems to real datasets and Python implementations, emphasizing tuning, performance, and scalability. It also distinguishes solution time horizons—design (slow, quality-focused), planning (seconds to minutes), and control (milliseconds to seconds)—highlighting when optimality must be traded for speed.

The chapter formalizes optimization problems via decision variables, objective functions, and constraints, clarifying feasible, optimal, and near‑optimal solutions and the presence of global and local optima in complex landscapes. It discusses duality and simple transformations that preserve the optimizer, contrasts single- with multi-objective problems, and outlines preference weighting versus Pareto approaches for navigating trade-offs. Constraint-satisfaction problems are presented as cases without an explicit objective, while hard versus soft constraints are modeled with penalties that can be made adaptive over time to balance feasibility and solution quality.

Problem structure is then contrasted between well-structured problems—where states, operators, and evaluation criteria are clear and computation is tractable—and ill-structured problems, which feature dynamics, uncertainty, partial observability, and combinatorial explosion. Examples illustrate both ends of the spectrum, including tasks that are theoretically well-posed but practically intractable at real-world scales. Given this landscape, the book emphasizes derivative-free, stochastic, black-box solvers suited to noisy, nonconvex, and partially observable settings. It concludes by framing the central search dilemma—exploration versus exploitation—showing how diversification helps escape local optima while intensification accelerates convergence, and why effective algorithms balance both to deliver high-quality solutions within realistic time budgets.

  • Book methodology: each algorithm is introduced following a structure that moves from explanation to application.
  • Examples of classical optimization and real-world optimization problems.
  • Feasible (acceptable) solutions fall within the constraints of the problem; a feasible search space may display a combination of global, strong local, and weak local minima.
  • The duality principle and simple mathematical operations on an optimization problem.
  • Ticket-pricing problem: the optimal price that maximizes profit is $155 per ticket.
  • Electric vehicle design problem: maximizing EPA range while minimizing acceleration time.
  • Cargo-bike loading problem, an example of a problem with a soft constraint: the weight of the packages can exceed the bike’s capacity, but a penalty is applied when the bike is overweight.
  • A well-structured problem has a defined problem space, operators, and evaluation criteria.
  • Elevator dispatching problem: with four elevator cars and ten floors, this problem has ~10^21 possible states.
  • The search dilemma: there is always a trade-off between branching out to new areas of the search space and focusing on an area with known “good” (elite) solutions.

Summary

  • Optimization is pervasive across numerous areas of life, industry, and research.
  • Decision variables, objective functions, and constraints are the main ingredients of optimization problems. Decision variables are the inputs you control that affect the objective function's value. The objective function is the quantity to be optimized, either minimized or maximized. Constraints are the limitations or restrictions that a solution must satisfy.
  • Optimization is a search process for finding solutions that provide the best objective function values, possibly subject to a given set of hard (must be satisfied) and soft (desirable to satisfy) constraints.
  • Ill-structured problems are complex, discrete, or continuous problems without exact mathematical models and/or algorithmic solutions or general problem solvers. They usually have dynamic and/or partially observable large search spaces that cannot be handled by classical optimization methods.
  • In many real-life applications, quickly finding a near-optimal solution is better than spending a large amount of time searching for an optimal one.
  • Two key concepts you’ll see frequently in later chapters are exploration (diversification) and exploitation (intensification). Handling this search dilemma by striking a trade-off between the two allows an algorithm to find optimal or near-optimal solutions without getting trapped in local optima in an attractive region of the search space and without spending an excessive amount of time.

FAQ

What is the difference between “search” and “optimization,” and what is a feasible search space?
Search is the systematic examination of states from an initial state toward a goal state. Optimization techniques are search methods whose goal is to find an optimal or near-optimal state. The feasible search space is the subset of the problem space where all constraints are satisfied; optimization searches within this feasible region.

Why should individuals and organizations care about search and optimization?
Optimization is ubiquitous, from personal task planning and budgeting to large-scale industrial operations. It drives efficiency in logistics (routing and dispatch), telecommunications (tower placement), transportation networks (ride-hailing), emergency response, airlines (fleets, crews), healthcare (scheduling), Industry 4.0 (smart scheduling, supply chains), and smart cities (energy, water, waste, disaster response). However, applying advanced optimization must be a qualified decision: at small scales (e.g., a local pizzeria), complexity may outweigh benefits; costs, expertise, and stakeholder follow-through should all be considered.

How does the book move from toy problems to real-world solutions?
The chapter advocates learning with scaled-down, hand-solvable toy problems to understand algorithm steps, then transitioning to real datasets and Python implementations. Each algorithm is presented with its inspiration, pseudocode, parameters, heuristics, pros and cons, and adaptation methods, followed by hands-on examples and exercises to tune performance and assess scalability.

What are the basic ingredients of an optimization problem?
Three core components: decision variables (the unknowns you choose), objective function(s) (what you optimize), and constraints (limits on variable values). Solutions are classified as feasible (satisfying the constraints), optimal (best objective value among feasible solutions), and near-optimal (very good but not the best). Landscapes may contain a global optimum and multiple strong or weak local minima.
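The three ingredients can be made concrete with a short sketch. The one-variable objective, bound constraint, and search grid below are invented for illustration; they are not from the chapter.

```python
def objective(x):
    # Objective function: minimize f(x) = (x - 3)^2
    return (x - 3) ** 2

def is_feasible(x):
    # Constraint: the decision variable must lie in [0, 10]
    return 0 <= x <= 10

# Decision variable: x, explored here over a coarse grid
candidates = [i / 10 for i in range(-20, 121)]

# Restrict the search to the feasible region, then pick the best
feasible = [x for x in candidates if is_feasible(x)]
best = min(feasible, key=objective)
print(best)  # 3.0 minimizes (x - 3)^2 over the feasible grid
```

Brute-force enumeration like this only works on tiny grids, which is exactly why the rest of the book turns to smarter search algorithms.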
What is the duality principle, and do simple transformations change the optimizer?
Maximization problems can be transformed into equivalent minimization problems (and vice versa) via duality; the argument of the optimum remains the same under positive scaling and additive shifts of the objective. Example: a ticket-pricing profit function can be optimized via calculus, yielding an optimal price ($155 with the chapter’s parameters), and multiplying the objective by a positive constant or adding a constant does not alter the optimizer.
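The invariance can be checked numerically. The concave profit curve below is made up, with its peak placed at $155 only to match the chapter's answer; the chapter's actual demand model may differ.

```python
def profit(p):
    # Hypothetical concave profit curve in the ticket price p
    return -2 * (p - 155) ** 2 + 10_000

prices = range(100, 201)

best_max    = max(prices, key=profit)                        # maximize f
best_min    = min(prices, key=lambda p: -profit(p))          # minimize -f (duality)
best_scaled = max(prices, key=lambda p: 3 * profit(p) + 42)  # positive scale + shift

print(best_max, best_min, best_scaled)  # all three agree on the same price
```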
How are multi-objective optimization problems handled, and what is the Pareto frontier?
Two common approaches: (1) preference-based scalarization: transform the objectives to the same sense (all minimization or all maximization) and combine them using weights, noting that weight selection is subjective; (2) Pareto optimization: find a set of non-dominated trade-offs (the Pareto frontier) and select among them using higher-level preferences. Example: an EV design balancing minimizing 0–60 mph acceleration time and maximizing EPA range.
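The non-dominated filter at the heart of Pareto optimization fits in a few lines. The candidate designs below are invented (acceleration time in seconds to minimize, EPA range in miles to maximize):

```python
# Each candidate design: (acceleration_time_s, epa_range_mi)
designs = [(3.1, 290), (3.5, 330), (4.2, 360), (3.5, 300), (5.0, 350)]

def dominates(a, b):
    # a dominates b if a is no worse in both objectives
    # (lower time, higher range) and strictly better in at least one
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    better   = a[0] <  b[0] or  a[1] >  b[1]
    return no_worse and better

# The Pareto frontier: designs not dominated by any other design
pareto = [d for d in designs if not any(dominates(o, d) for o in designs)]
print(pareto)  # the three non-dominated trade-offs remain
```

Here (3.5, 300) is dominated by (3.5, 330), and (5.0, 350) by (4.2, 360); the three survivors are the trade-offs a higher-level preference would choose among.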
What is a constraint-satisfaction problem (CSP), and how do hard and soft constraints differ?
CSPs seek any solution that satisfies the constraints, without an explicit objective (e.g., N-queens). Hard constraints must be met (e.g., no two classes in the same room at the same time), whereas soft constraints are merely desirable (e.g., avoiding early classes). Soft constraints are modeled via rewards or penalties in the objective; penalties can be made dynamic to encourage early exploration and late-stage feasibility (e.g., cargo-bike loading with overweight penalties).
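A penalized objective for a cargo-bike-style loading problem might look like the sketch below; the package values, weights, capacity, and penalty rate are all invented for illustration.

```python
from itertools import product

values   = [10, 7, 5, 8]   # value of each package
weights  = [4, 3, 2, 5]    # weight of each package
CAPACITY = 10              # bike's weight capacity
PENALTY  = 6               # cost per unit of overweight

def penalized_value(selection):
    # selection is a tuple of 0/1 flags, one per package
    total_value  = sum(v for v, s in zip(values, selection) if s)
    total_weight = sum(w for w, s in zip(weights, selection) if s)
    overweight   = max(0, total_weight - CAPACITY)
    # Soft constraint: exceeding capacity is allowed, but penalized
    return total_value - PENALTY * overweight

best = max(product([0, 1], repeat=len(values)), key=penalized_value)
print(best, penalized_value(best))  # (1, 1, 1, 0) scores 22 here
```

A dynamic penalty would grow PENALTY over the course of the search, tolerating overweight loads early on and pushing toward feasibility near the end.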
What defines a well-structured problem (WSP), and what is an example?
WSPs have clear criteria for solutions, a well-defined problem space, mechanizable operators, representable knowledge, accurate state-transition models, and solvability with practical computation. Example: a robotic pick-and-place task in a structured, fully observable environment, where optimal motion plans can be modeled and executed reliably.

What defines an ill-structured problem (ISP), and why are stochastic, derivative-free, black-box methods favored?
ISPs feature dynamic, partially observable spaces; unclear goals or evaluation criteria; limited models; uncertainty and ambiguity; and computational intractability. Example: multi-elevator dispatch across ten floors yields ~2.88×10^21 states, far too many for exhaustive methods. Stochastic, derivative-free, black-box algorithms handle unknown, noisy, or non-differentiable objectives, escape local minima, and scale better to such realities than purely deterministic procedures.

What is the exploration–exploitation (search) dilemma?
Exploration diversifies the search into new regions to discover better solutions; exploitation intensifies the search around known good areas. Too much exploitation risks getting trapped in local minima (e.g., local search), while too much exploration can be slow (e.g., random search). Effective algorithms balance both, often adaptively, to achieve strong solutions within practical time budgets.
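The dilemma shows up even on a toy multimodal function: pure hill climbing (exploitation only) stalls in whichever basin it starts in, while random restarts (a simple form of exploration) can reach the global minimum. The function shape, step size, and restart count below are invented for illustration.

```python
import random

def f(x):
    # Two basins: a local minimum (value 1) near x = -2,
    # and the global minimum (value 0) at x = 3
    return min((x + 2) ** 2 + 1, (x - 3) ** 2)

def hill_climb(x, steps=200, step=0.1):
    # Exploitation only: move to a neighbor whenever it improves f
    for _ in range(steps):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

random.seed(0)
stuck = hill_climb(-4.0)  # starts in the wrong basin and stays there
restarts = [hill_climb(random.uniform(-6, 6)) for _ in range(20)]
best = min(restarts, key=f)  # exploration via restarts finds the global basin
print(round(stuck, 1), round(best, 1))
```

Restarting is the crudest way to add exploration; the metaheuristics in later chapters interleave diversification and intensification within a single run.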
