Overview

1 Understanding data-oriented design

Modern game development must deliver complex, responsive experiences under tight performance and production constraints. This chapter introduces Data-Oriented Design (DOD) as a data-first way of thinking that helps meet those demands across AAA, indie, and mobile targets. By focusing on the data a feature needs and how it flows through the program, DOD promises three core benefits over traditional object-oriented approaches: faster execution, lower code complexity, and better long-term extensibility. Rather than being premature optimization, DOD encourages writing code that is naturally efficient from the start, helping teams sustain stable performance as features and content grow.

The chapter explains why performance hinges on memory behavior more than raw CPU speed. CPUs are extremely fast, but waiting on main memory is slow; leveraging the CPU cache is therefore critical. DOD improves cache efficiency through data locality and predictable access patterns: organize related values contiguously, think in terms of cache lines, and aim for cache hits instead of misses. Practically, this often means replacing per-object methods with batch functions that operate over arrays of attributes (positions, directions, velocities), enabling linear iteration and fewer trips to memory. By aligning data with how it is processed, the hardware’s cache prediction works in your favor, yielding substantial real-world speedups.
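The batch-over-arrays idea described above can be sketched as follows. This is a minimal illustration, not code from the chapter; the `Vec2`, `Enemies`, and `MoveAll` names are invented for the example.

```cpp
#include <cstddef>
#include <vector>

// Illustrative structure-of-arrays layout: each attribute lives in its
// own contiguous array instead of inside per-enemy objects.
struct Vec2 { float x, y; };

struct Enemies {
    std::vector<Vec2>  positions;
    std::vector<Vec2>  directions;
    std::vector<float> velocities;
};

// One batch function replaces N per-object Move() calls. The loop walks
// each array linearly, the access pattern cache prediction handles best.
void MoveAll(Enemies& e, float dt) {
    for (std::size_t i = 0; i < e.positions.size(); ++i) {
        e.positions[i].x += e.directions[i].x * e.velocities[i] * dt;
        e.positions[i].y += e.directions[i].y * e.velocities[i] * dt;
    }
}
```

Because every position sits next to the previous one in memory, a single cache line serves several loop iterations before the next fetch is needed.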

Beyond performance, DOD reduces architectural complexity by separating data from logic and expressing behavior as stateless transformations from input data to output data. This avoids brittle inheritance hierarchies and makes features easier to add or change: each new mechanic is framed as “what data is needed, and how is it transformed,” keeping costs roughly linear over time. The chapter also clarifies how Entity Component System (ECS) can support DOD by organizing data and systems, while emphasizing that DOD itself is pattern-agnostic and should remain flexible. In sum, the chapter sets the foundation for building faster, simpler, and more extensible game code by consistently solving for data first.

Screenshot from our imaginary survival game, with the player in the middle, and enemies moving around.
Our Enemy object holds both the data and the logic in a single place. The data is the position, direction, and velocity. The logic is the Move() method that moves this enemy around.
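A sketch of the object described in the figure, with data and logic in one place; the member names follow the figure, while the implementation details are assumed for illustration.

```cpp
// Traditional OOP enemy: data (position, direction, velocity) and logic
// (Move) live together in a single class.
struct Vector2 { float x, y; };

class Enemy {
public:
    // Moves this one enemy along its direction at its velocity.
    void Move(float dt) {
        m_position.x += m_direction.x * m_velocity * dt;
        m_position.y += m_direction.y * m_velocity * dt;
    }
    Vector2 Position() const { return m_position; }

private:
    Vector2 m_position{0.0f, 0.0f};
    Vector2 m_direction{1.0f, 0.0f};
    float   m_velocity{1.0f};
};
```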
On the motherboard, the memory sits apart from the CPU, whether the device is a console, desktop, or mobile. That physical distance, combined with the size of main memory, makes it relatively slow to retrieve data from memory.
The cache sits directly on the CPU die and is physically small. Retrieving data from the cache is significantly faster than retrieving data from main memory.
A single-core CPU with an L1 cache directly on the CPU die.
A 2-core CPU with shared L3 cache
Flowchart showing how the CPU accesses data in a system with three cache levels. If the data is not found in the L1 cache, we look for it in L2. If it is not in L2, we look in L3. If it is not in L3, we need to retrieve it from main memory. The further we have to go to find our data, the longer it takes.
Data is retrieved from main memory in chunks called cache lines. When we ask for data from main memory, the memory manager retrieves the data we need, plus the chunk of data that comes directly after it, and copies the entire chunk to the cache.
When retrieving a cache line from main memory, it is copied to all levels of the cache. In this example it is first copied to L3, then L2, and finally L1. The cache line is the same size at all levels, meaning the same amount of data is copied to every level. L3 can simply hold more cache lines than L2, and L2 can hold more cache lines than L1.
How the member variables of our Enemy object are placed in memory. The position data is placed first, then direction, then velocity. The same order they are defined in the Enemy class. They are packed together in memory without any space between them.
Our cache line will include m_position, m_direction, m_velocity, and whatever data comes right after them. Our cache line is 64 bytes. The variables m_position and m_direction are of type Vector2, which takes 8 bytes. The variable m_velocity is a float, which takes 4 bytes. That means we have 44 bytes leftover, which are automatically filled with whatever data comes after m_velocity.
When our CPU asks for m_position, the Memory Management Unit (MMU) will try to fill the cache line from the nearest address that is aligned with the size of our cache line. If our cache lines are 64-byte long, the cache line will be filled with data from the nearest 64-byte aligned address. In this case, m_position sits at 0x4C and the nearest 64-byte aligned address will be 0x40.
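The alignment arithmetic in this example can be written out directly. This is an illustrative sketch: 64 bytes is a common cache-line size, but not universal, and `LineStart` is a name invented here.

```cpp
#include <cstdint>

// Assumed cache-line size for the example; real hardware varies.
constexpr std::uintptr_t kCacheLine = 64;

// Round an address down to the nearest cache-line boundary, as the MMU
// does when deciding where a cache-line fill begins.
constexpr std::uintptr_t LineStart(std::uintptr_t addr) {
    return addr & ~(kCacheLine - 1);
}
// For the figure's example: LineStart(0x4C) is 0x40.
```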
If the data we need does not align with the cache line size, it will need to be split into two cache lines instead of one.
We can see that both Move() and TrackPlayer() require the same variables, the enemy position and direction, but each also needs its own data: the enemy velocity for Move() and the player position for TrackPlayer(). When data is shared between different logic functions, it is impossible to guarantee data locality for every function.
Arrays automatically place their data in contiguous memory. All the position array data will be in a single contiguous chunk of memory, as will direction and velocity’s data.
We can see how the position array sits in memory, and how the array elements 0 to 7 all fit in a single 64-byte cache line.
The two existing enemies in our game, the Angry Cactus, which is a static enemy, and the Zombie, which is a moving enemy.
Task to implement a new enemy, the Teleporting Robot.
Our game’s enemy inheritance tree, with EnemyTeleportOnHit inheriting from EnemyMove.
Every function in our game takes in some input data, then transforms it into output data.
The Move() function’s input is the enemy position, direction, and velocity. The transformation is our calculation of the new position. The output is the new position.
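The transform in this figure can be expressed as a stateless function. The signature below is an illustrative guess at the shape of the function, not the book's exact code.

```cpp
struct Vec2 { float x, y; };

// Pure transform: input (position, direction, velocity) in,
// new position out. No hidden object state is read or written.
Vec2 Move(Vec2 position, Vec2 direction, float velocity, float dt) {
    return { position.x + direction.x * velocity * dt,
             position.y + direction.y * velocity * dt };
}
```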
To make our enemy track the player, we just add a function that sets the enemy’s direction toward the player. Our input is the enemy position and the player position. The transformation is calculating a new direction for the enemy. The output is the new direction.
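Sketched as a function, the tracking transform might look like this; normalizing the result is an assumption made so the output is a pure direction.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Input: enemy position and player position.
// Output: a unit-length direction from the enemy toward the player.
Vec2 TrackPlayer(Vec2 enemyPos, Vec2 playerPos) {
    Vec2 d{ playerPos.x - enemyPos.x, playerPos.y - enemyPos.y };
    float len = std::sqrt(d.x * d.x + d.y * d.y);
    if (len > 0.0f) { d.x /= len; d.y /= len; }
    return d;
}
```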
To add our new Teleporting Robot, we just add a function that teleports the enemy to a new location when it is hit. Our input is the damage the enemy received, if any, and whether it should teleport if hit. The transformation is calculating a new position if the enemy is hit. The output is either the new position if hit, or the old position if the enemy is not hit.
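The teleport transform can be sketched the same way. `PickTeleportTarget` is a hypothetical helper for choosing the destination; the rest of the signature is an assumption for illustration.

```cpp
struct Vec2 { float x, y; };

// Input: current position, damage received this frame, and whether this
// enemy type teleports when hit. Output: the new position if the enemy
// was hit (and teleports), otherwise the old position unchanged.
Vec2 TeleportOnHit(Vec2 position, float damage, bool teleportsOnHit,
                   Vec2 (*pickTeleportTarget)()) {
    if (teleportsOnHit && damage > 0.0f) {
        return pickTeleportTarget();  // hypothetical destination chooser
    }
    return position;
}
```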
To show an enemy in the correct position, we pass in the enemy’s GameObject and its position. We transform our data by assigning the GameObject’s position to the enemy. The output is Unity rendering our GameObject in the correct position.
Task to implement a new enemy, the Zombie Duck.
To determine what velocity to set for our enemy, we take in four variables: the enemy position, the player position, the distance to check against, and the new enemy velocity. Our logic calculates the distance between the player and the enemy and checks it against the input distance. The output is the new velocity for the enemy, based on the result.
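One possible shape for this distance-check transform, with the four inputs described above. Returning zero when the player is out of range is an assumption; the chapter only specifies that the output velocity depends on the check.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Inputs: enemy position, player position, the trigger distance, and the
// velocity to apply when the player is within that distance.
// Output: the enemy's new velocity (assumed zero when out of range).
float VelocityIfInRange(Vec2 enemyPos, Vec2 playerPos,
                        float triggerDistance, float newVelocity) {
    float dx = playerPos.x - enemyPos.x;
    float dy = playerPos.y - enemyPos.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    return dist <= triggerDistance ? newVelocity : 0.0f;
}
```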
With OOP, in an ideal situation, we start the project by spending time setting up systems and inheritance hierarchies so future features will be quick and easy to implement.
With OOP, what usually happens is that the more features we already have, the longer it takes to add a new feature. For every new feature, we need to take into account the complicated relationship between existing features.
With DOD the time to add a new feature is linear because we don’t need to take into account the existing features. All we need is the data for the feature, and what logic we need to transform the data.

Summary

  • With Data-Oriented Design we get a performance boost by structuring our data to take advantage of the CPU cache.
  • Your target CPU may have multiple levels of cache, but the first level, called the L1 cache, is the fastest.
  • The L1 cache is the fastest because it is small and is placed directly on the CPU die.
  • Retrieving data from L1 cache is up to 50 times faster than accessing main memory.
  • To avoid having to retrieve data from main memory, our CPU uses cache prediction to guess which data we are going to need next and places it in the cache ahead of time.
  • Data is pulled from memory into the cache in chunks called cache lines.
  • Practicing data locality by keeping our data close together in memory helps the CPU cache prediction retrieve the data we’ll need in the future into the L1 cache.
  • Placing our data in arrays makes it easy to practice data locality.
  • With Data-Oriented Design we can reduce our code complexity by separating the data and the logic.
  • Every function in our game takes input and transforms it into the output needed. The output can be anything from how many coins we have to showing enemies on the screen.
  • Instead of thinking about objects and their relationships, we only think about what data our logic needs as input and what data it needs to output.
  • With Data-Oriented Design, we can also improve our game's extensibility by always solving problems through data. This makes it easy to add new features and modify existing ones.
  • Regardless of how complex our game has become, every new feature can be solved using data. This allows for near-constant development time and makes it easy to add complicated new features.
  • ECS is a design pattern sometimes used to implement DOD. Not all ECS implementations are DOD, and we don’t need ECS to implement DOD.

FAQ

What is Data-Oriented Design (DOD)?

DOD is a data-first way of writing game code. You separate data from logic, organize data in memory to suit how it will be processed, and write functions that transform input data into output. The result is better performance, simpler code, and easier extensibility than typical object-oriented approaches.

Why does DOD improve performance on modern CPUs?

Modern CPUs are extremely fast at computation but comparatively slow at fetching data from main memory. DOD groups the data that logic needs next so it can be served from the CPU’s cache instead of main memory, drastically reducing stalls and making code run much faster.

What are CPU cache hits and misses, and why do they matter?

A cache hit occurs when requested data is already in the CPU cache (fast); a cache miss occurs when data must be fetched from a lower, slower memory level (slow). DOD increases cache hits and reduces misses by arranging data contiguously and accessing it in predictable patterns.

What is a cache line and how does its size/alignment affect my data?

A cache line is the fixed-size chunk of memory moved into cache on each fetch (commonly 64 bytes, varies by CPU). If all data needed for a calculation fits within a cache line, you often get it “for free” with one fetch. Because fetches are aligned to cache-line boundaries, data that straddles two lines can cause extra memory traffic.

What is data locality and how can I achieve it?

Data locality means placing data that is used together close together in memory. In practice, favor arrays of plain data (contiguous memory) and process them in tight loops. For example, keep positions, directions, and velocities in separate parallel arrays and update all entities in a single pass to maximize sequential access.
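The array-of-structures versus structure-of-arrays contrast behind this advice can be sketched as follows. The scenario (a pass that only reads positions) and all type names are assumptions for the example.

```cpp
#include <vector>

struct Vec2 { float x, y; };

// AoS: for a position-only pass, every fetched cache line also drags in
// unused direction and health bytes.
struct EnemyAoS { Vec2 position; Vec2 direction; float health; };

// SoA: a cache line fetched for this pass holds nothing but positions.
struct EnemiesSoA {
    std::vector<Vec2>  position;
    std::vector<Vec2>  direction;
    std::vector<float> health;
};

// Position-only pass over the SoA layout: a tight, sequential loop.
float SumX(const EnemiesSoA& e) {
    float sum = 0.0f;
    for (const Vec2& p : e.position) sum += p.x;
    return sum;
}
```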

How does separating data and logic reduce code complexity?

Instead of modeling relationships between objects and spreading behavior across deep class hierarchies, DOD writes small, focused functions that take explicit inputs and produce outputs. You reason about “data in → transform → data out,” which is easier to understand, test, and change.

How does DOD improve extensibility as a project grows?

New features boil down to identifying the data they need and the transformations to apply. Because features don’t have to fit into a fragile inheritance tree, the cost to add the 100th feature is similar to the first—development scales more linearly instead of slowing down over time.

Isn’t DOD just premature optimization?

No. DOD is about writing code that is naturally cache-friendly and batchable from day one, not micro-optimizing hot spots later. It avoids costly refactors when performance problems appear late, especially on constrained hardware or in content-heavy games.

What is ECS and how does it relate to DOD?

Entity Component System (ECS) is a design pattern often used to implement DOD. Entities identify items, components hold data, and systems operate on sets of components. ECS makes contiguous data layouts and batch processing easier, but DOD does not require ECS—it's a way of thinking about data and access patterns.

What practical pitfalls should I watch for when organizing data?

- Avoid mixing lots of unrelated fields in one object; it dilutes locality and cache efficiency.
- Watch for data that straddles cache-line boundaries; small layout changes can reduce extra fetches.
- Be careful when the same data is used by multiple functions; prefer passing arrays to batch processes instead of per-object methods.
- Don’t over-focus on member order; if the working set fits in a line, order matters less than contiguity and predictable access.
