Overview

1 Understanding data-oriented design

Game development must deliver responsive, content-rich experiences across constrained hardware and tight schedules, which makes performance and iteration speed critical. Data-Oriented Design (DOD) tackles these demands with a data-first mindset: organize data for the way modern CPUs actually work, separate data from logic, and treat every feature as “inputs → transform → outputs.” This approach is not about premature optimization; it is about writing code that is naturally fast, easier to read, and straightforward to extend, so teams can ship features faster without accumulating fragile class hierarchies or tangled dependencies.

The chapter explains why CPUs run fast only when the data they need is nearby: accessing the L1 cache is far quicker than going to main memory. DOD leverages cache behavior—cache lines, hits and misses, and prediction—by maximizing data locality. Instead of bundling many unrelated fields inside objects, DOD groups like data contiguously (for example, keeping all positions, all directions, and all velocities in separate arrays) and processes them in tight loops. This arrangement improves cache efficiency, reduces memory stalls, and enables significant speedups when operating over large sets, such as moving thousands of enemies per frame.

Beyond raw speed, DOD reduces complexity by decoupling logic from data: functions take specific inputs, perform a clear transformation, and produce outputs. New features become a matter of identifying the data they need rather than navigating deep inheritance trees, which keeps development time closer to linear as games grow. The chapter also clarifies how Entity Component System (ECS) can implement DOD by mapping entities to indices, components to data, and systems to logic—but emphasizes that DOD does not require any particular pattern. Ultimately, combining data locality, data/logic separation, and data-first problem solving yields Unity games that are faster, easier to reason about, and more extensible over long lifecycles.

Screenshot from our imaginary survival game, with the player in the middle, and enemies moving around.
Our Enemy object holds both the data and the logic in a single place. The data is the position, direction, and velocity. The logic is the Move() method that moves this enemy around.
On the motherboard, the memory sits apart from the CPU, regardless of whether the device is a console, desktop, or mobile. That physical distance, combined with the size of main memory, makes it relatively slow to retrieve data from memory.
The cache sits directly on the CPU die and is physically small. Retrieving data from the cache is significantly faster than retrieving data from main memory.
A single-core CPU with an L1 cache directly on the CPU die.
A 2-core CPU with a shared L3 cache.
Flowchart showing how the CPU accesses data in a system with three cache levels. If the data is not found in the L1 cache, we look for it in L2. If it is not in L2, we look in L3. If it is not in L3, we need to retrieve it from main memory. The further we have to go to find our data, the longer it takes.
Data is retrieved from main memory in chunks called cache lines. When we ask for data from main memory, the memory manager retrieves the data we need, plus the chunk of data that comes directly after it, and copies the entire chunk to the cache.
When retrieving a cache line from main memory, it is copied to all levels of the cache. In this example it is first copied to L3, then L2, and finally L1. The cache line is the same size at all levels, meaning the same amount of data is copied to every level. L3 can simply hold more cache lines than L2, and L2 can hold more cache lines than L1.
How the member variables of our Enemy object are placed in memory. The position data is placed first, then direction, then velocity, in the same order they are defined in the Enemy class. They are packed together in memory without any space between them.
Our cache line will include m_position, m_direction, and m_velocity, plus whatever data comes right after them. Our cache line is 64 bytes. The variables m_position and m_direction are of type Vector2, which takes 8 bytes each. The variable m_velocity is a float, which takes 4 bytes. That leaves 44 bytes left over, which are automatically filled with whatever data comes after m_velocity.
When our CPU asks for m_position, the Memory Management Unit (MMU) will try to fill the cache line from the nearest address that is aligned with the size of our cache line. If our cache lines are 64 bytes long, the cache line will be filled with data from the nearest 64-byte-aligned address. In this case, m_position sits at 0x4C, and the nearest 64-byte-aligned address is 0x40.
If the data we need does not align with the cache line size, it will need to be split into two cache lines instead of one.
We can see that both Move() and TrackPlayer() require the same variables, enemy position and direction, but each also needs data the other doesn't: enemy velocity for Move() and player position for TrackPlayer(). When data is shared between different logic functions, it becomes impossible to guarantee data locality for every logic function.
Arrays automatically place their data in contiguous memory. All the position array's data will be in a single contiguous chunk of memory, as will the direction and velocity arrays' data.
We can see how the position array sits in memory, and how array elements 0 to 7 all fit in a single 64-byte cache line.
The two existing enemies in our game, the Angry Cactus, which is a static enemy, and the Zombie, which is a moving enemy.
Task to implement a new enemy, the Teleporting Robot.
Our game’s enemy inheritance tree, with EnemyTeleportOnHit inheriting from EnemyMove.
Every function in our game takes in some input data, then transforms it into output data.
The Move() function’s input is the enemy position, direction, and velocity. The transformation is our calculation of the new position. The output is the new position.
To make our enemy track the player, we just add a function that sets the enemy’s direction toward the player. Our input is the enemy position and the player position. The transformation is calculating a new direction for the enemy. The output is the new direction.
To add our new Teleporting Robot, we just add a function that teleports the enemy to a new location when it is hit. Our input is the damage the enemy received, if any, and whether it should teleport when hit. The transformation is calculating a new position if the enemy is hit. The output is either the new position if hit, or the old position if the enemy is not hit.
To show an enemy in the correct position, we pass in the enemy's GameObject and its position. We transform our data by assigning the enemy's position to the GameObject. The output is Unity rendering our GameObject in the correct position.
Task to implement a new enemy, the Zombie Duck.
To determine what velocity to set for our enemy, we take in four variables: the enemy position, the player position, the distance to check against, and the new enemy velocity. Our logic calculates the distance between the player and the enemy and checks it against the input distance. The output is the new velocity for the enemy, based on the result.
With OOP, in an ideal situation, we start the project by spending time setting up systems and inheritance hierarchies so future features will be quick and easy to implement.
With OOP, what usually happens is that the more features we already have, the longer it takes to add a new feature. For every new feature, we need to take into account the complicated relationship between existing features.
With DOD the time to add a new feature is linear because we don’t need to take into account the existing features. All we need is the data for the feature, and what logic we need to transform the data.

Summary

  • With Data-Oriented Design we get a performance boost by structuring our data to take advantage of the CPU cache.
  • Your target CPU may have multiple levels of cache, but the first level, called the L1 cache, is the fastest.
  • The L1 cache is the fastest because it is small and is placed directly on the CPU die.
  • Retrieving data from L1 cache is up to 50 times faster than accessing main memory.
  • To avoid having to retrieve data from main memory, our CPU uses cache prediction to guess which data we are going to need next and places it in the cache ahead of time.
  • Data is pulled from memory into the cache in chunks called cache lines.
  • Practicing data locality, keeping our data close together in memory, helps the CPU's cache prediction pull the data we'll need next into the L1 cache.
  • Placing our data in arrays makes it easy to practice data locality.
  • With Data-Oriented Design we can reduce our code complexity by separating the data and the logic.
  • Every function in our game takes input and transforms it into the output needed. The output can be anything from how many coins we have to showing enemies on the screen.
  • Instead of thinking about objects and their relationships, we only think about what data our logic needs for input and what data our logic needs to output.
  • With Data-Oriented Design, we can also improve our game's extensibility by always solving problems through data. This makes it easy to add new features and modify existing ones.
  • Regardless of how complex our game has become, every new feature can be solved using data. This allows for near-constant development time and makes it easy to add complicated new features.
  • ECS is a design pattern sometimes used to implement DOD. Not all ECS implementations are DOD, and we don’t need ECS to implement DOD.

FAQ

What is Data-Oriented Design (DOD)?
DOD is a coding style that solves problems using a data-first approach. It organizes data to leverage modern CPU architecture and separates data from logic, yielding better performance, reduced code complexity, and improved extensibility compared to traditional OOP.

How does DOD improve performance?
By grouping the data a function needs contiguously so the CPU can read it from the L1 cache instead of main memory. Cache hits (1–2 ns) are far faster than main-memory accesses (50–150 ns), so arranging data for cache-friendly access can deliver dramatic speedups.

What are cache hits and misses, and why do they matter in games?
A cache hit occurs when requested data is already in the CPU cache; a miss means it must be fetched from slower memory. Game logic runs every frame, so minimizing misses keeps the CPU busy doing work instead of waiting on memory.

What is a cache line and how big is it?
A cache line is the fixed-size chunk the CPU fetches from memory: the bytes it asked for plus the bytes that follow, up to the line size. Common sizes include 64 bytes (typical), 32 bytes (e.g., Pixel 4a), and 128 bytes (e.g., M2 MacBook Pro). The line size is the same across L1–L3. Because lines are address-aligned, the exact field order matters less if all needed fields fit within a single line.

What is data locality and how do I achieve it?
Data locality means placing data used together close together in memory. In DOD, use contiguous arrays (structure-of-arrays): Position[], Direction[], Velocity[], etc., and process them in tight, linear loops (e.g., MoveAllEnemies). Reuse these arrays across multiple functions to keep locality.

Why can an OOP Enemy.Move() be slower than a DOD approach?
In OOP, each object bundles many unrelated fields, so a per-enemy Move() often triggers multiple cache misses (position, direction, velocity) every frame. In DOD, arrays make access predictable and contiguous, so one cache line can serve multiple elements and iterations.

How does separating data and logic reduce code complexity?
Write small, focused, stateless functions that take inputs and produce outputs (transformations). There are no inheritance trees or cross-cutting class relationships; you just pass the needed data. Examples: Move (pos, dir, vel → pos), TrackPlayer (enemy pos, player pos → dir), TeleportOnHit (wasHit, flag → pos).

How does DOD improve extensibility over long projects?
Adding features scales roughly linearly: determine the data the feature needs, add that data (often a new array), and write the transform function. You avoid refactoring deep hierarchies, so adding a new enemy type years later costs about the same as early in development.

Is DOD just premature optimization?
No. DOD writes code that is naturally cache-friendly from the start, avoiding later performance crises and risky refactors caused by scope growth, SDK changes, or missed device testing.

What is ECS and how does it relate to DOD (especially in Unity)?
ECS is a design pattern often used to implement DOD. Entities index items, Components hold attributes (e.g., position, direction, velocity), and Systems run logic. ECS itself isn't about caches; it just organizes data for DOD. The book teaches DOD without mandating ECS, but later shows how to plug DOD code into Unity's ECS.
