Overview

1 Understanding data-oriented design

Game development juggles complex logic, real‑time responsiveness, and ever‑changing requirements across AAA, indie, and mobile platforms. This chapter introduces Data‑Oriented Design (DOD) as a practical way to meet those demands by focusing on data and how it flows through the program, rather than on object hierarchies. DOD delivers three core benefits: higher performance by aligning data with modern CPU behavior, reduced code complexity by separating data from logic, and improved extensibility by solving problems from a data‑first perspective. It also reframes the “premature optimization” concern: DOD isn’t late‑stage micro‑tuning; it’s writing code that is naturally efficient from the start. While Entity Component System (ECS) can be used to implement DOD, it’s a pattern—not a requirement.

The performance discussion centers on memory behavior: CPUs are fast, but waiting on main memory is slow, so keeping the right data in the CPU cache is critical. DOD maximizes cache hits and predictable access by arranging related inputs contiguously and processing them in tight, sequential loops. Concepts like cache lines, cache hits/misses, and cache prediction explain why locality matters. In practice, this means avoiding object layouts that scatter fields across memory and instead storing attributes for many entities in parallel arrays (positions, directions, velocities, etc.), then operating over them in bulk. This structure leverages contiguous memory, improves prefetching, reduces cache misses, and can yield large speedups in everyday gameplay systems.

To reduce complexity and improve extensibility, DOD separates data from logic and treats gameplay as a series of pure(ish) transformations: functions consume input data, compute, and write output data. Without inheritance webs or deep class hierarchies, teams add features by identifying the new data needed and writing the transformation that uses it, keeping the cost of change closer to linear as projects grow. ECS maps neatly onto this mindset—entities as indices, components as data, systems as logic—but the chapter emphasizes staying flexible and solving for the data first rather than committing to a single pattern or file structure. The overarching takeaway: organize for locality, decouple logic and data, and iterate by defining data requirements and writing clear, focused transformations.

Screenshot from our imaginary survival game, with the player in the middle, and enemies moving around.
Our Enemy object holds both the data and the logic in a single place. The data is the position, direction, and velocity. The logic is the Move() method that moves this enemy around.
On the motherboard, the memory sits apart from the CPU, whether it’s a console, desktop, or mobile device. That physical distance, combined with the size of the memory, makes it relatively slow to retrieve data from memory.
The cache sits directly on the CPU die and is physically small. Retrieving data from the cache is significantly faster than retrieving data from main memory.
A single-core CPU with an L1 cache directly on the CPU die.
A 2-core CPU with shared L3 cache
Flowchart showing how the CPU accesses data in a system with three cache levels. If the data is not found in the L1 cache, we look for it in L2. If it is not in L2, we look in L3. If it is not in L3, we need to retrieve it from main memory. The further we have to go to find our data, the longer it takes.
Data is retrieved from main memory in chunks called cache lines. When we ask for data from main memory, the memory manager retrieves the data we need, plus the chunk of data that comes directly after it, and copies the entire chunk to the cache.
When retrieving a cache line from main memory, it is copied to all levels of the cache. In this example it is first copied to L3, then L2, and finally L1. The cache line is the same size at all levels, meaning the same amount of data is copied to every level. L3 simply holds more cache lines than L2, and L2 holds more cache lines than L1.
How the member variables of our Enemy object are placed in memory. The position data is placed first, then direction, then velocity. The same order they are defined in the Enemy class. They are packed together in memory without any space between them.
Our cache line will include m_position, m_direction, m_velocity, and whatever data comes right after them. Our cache line is 64 bytes. The variables m_position and m_direction are of type Vector2, which takes 8 bytes. The variable m_velocity is a float, which takes 4 bytes. That means we have 44 bytes left over, which are automatically filled with whatever data comes after m_velocity.
When our CPU asks for m_position, the Memory Management Unit (MMU) will try to fill the cache line from the nearest address that is aligned with the size of our cache line. If our cache lines are 64-byte long, the cache line will be filled with data from the nearest 64-byte aligned address. In this case, m_position sits at 0x4C and the nearest 64-byte aligned address will be 0x40.
If the data we need does not align with the cache line size, it will need to be split into two cache lines instead of one.
We can see that both Move() and TrackPlayer() require the same variables, Enemy Position and Direction, but each also needs different data: Enemy Velocity for Move() and Player Position for TrackPlayer(). When data is shared between different logic functions, it becomes impossible to guarantee data locality for every logic function.
Arrays automatically place their data in contiguous memory. All the position array data will be in a single contiguous chunk of memory, as will direction and velocity’s data.
We can see how the position array sits in memory, and how array elements 0 to 7 all fit in a single 64-byte cache line.
The two existing enemies in our game, the Angry Cactus, which is a static enemy, and the Zombie, which is a moving enemy.
Task to implement a new enemy, the Teleporting Robot.
Our game’s enemy inheritance tree, with EnemyTeleportOnHit inheriting from EnemyMove.
Every function in our game takes in some input data, then transforms it into output data.
The Move() function’s input is the enemy position, direction, and velocity. The transformation is our calculation of the new position. The output is the new position.
To make our enemy track the player, we just add a function that sets the enemy’s direction toward the player. Our input is the enemy position and the player position. The transformation is calculating a new direction for the enemy. The output is the new direction.
To add our new Teleporting Robot, we just add a function that teleports the enemy to a new location when it is hit. Our input is the damage the enemy received, if any, and whether it should teleport if hit. The transformation is calculating a new position if the enemy is hit. The output is either the new position if hit, or the old position if the enemy is not hit.
To show an enemy in the correct position, we pass in the enemy’s GameObject and its position. We transform our data by assigning the enemy’s position to the GameObject. The output is Unity rendering our GameObject in the correct position.
Task to implement a new enemy, the Zombie Duck.
To determine what velocity we should set our enemy, we are going to take in four variables: the enemy position, the player position, the distance we need to check against, and the new enemy velocity. Our logic will calculate the distance between the player and the enemy and check it against the input distance. The output is the new velocity for the enemy based on the logic result.
With OOP, in an ideal situation, we start the project by spending time setting up systems and inheritance hierarchies so future features will be quick and easy to implement.
With OOP, what usually happens is that the more features we already have, the longer it takes to add a new feature. For every new feature, we need to take into account the complicated relationship between existing features.
With DOD the time to add a new feature is linear because we don’t need to take into account the existing features. All we need is the data for the feature, and what logic we need to transform the data.

Summary

  • With Data-Oriented Design we get a performance boost by structuring our data to take advantage of the CPU cache.
  • Your target CPU may have multiple levels of cache, but the first level, called the L1 cache, is the fastest.
  • The L1 cache is the fastest because it is small and is placed directly on the CPU die.
  • Retrieving data from L1 cache is up to 50 times faster than accessing main memory.
  • To avoid having to retrieve data from main memory, our CPU uses cache prediction to guess which data we are going to need next and places it in the cache ahead of time.
  • Data is pulled from memory into the cache in chunks called cache lines.
  • Practicing data locality by keeping our data close together in memory helps the CPU cache prediction retrieve the data we’ll need in the future into the L1 cache.
  • Placing our data in arrays makes it easy to practice data locality.
  • With Data-Oriented Design we can reduce our code complexity by separating the data and the logic.
  • Every function in our game takes input and transforms it into the output needed. The output can be anything from how many coins we have to showing enemies on the screen.
  • Instead of thinking about objects and their relationships, we only think about what data our logic needs for input and what data our logic needs to output.
  • With Data-Oriented Design, we can also improve our game's extensibility by always solving problems through data. This makes it easy to add new features and modify existing ones.
  • Regardless of how complex our game has become, every new feature can be solved using data. This allows for near-constant development time and makes it easy to add complicated new features.
  • ECS is a design pattern sometimes used to implement DOD. Not all ECS implementations are DOD, and we don’t need ECS to implement DOD.

FAQ

What is Data-Oriented Design (DOD) in the context of game development?
DOD is a data-first way of writing game code. You organize data to match how modern CPUs actually process it, keep data and logic separate, and design functions that transform input data into output data. The goals are faster performance (via better memory access), lower code complexity, and easier extensibility as features grow.

How does DOD improve performance on modern hardware?
By arranging data so the CPU can read it from fast cache instead of slow main memory. Grouping and processing data in contiguous blocks (for example, arrays) increases cache hits and reduces memory stalls. Batch-style functions that operate over arrays (like updating all enemies at once) let the CPU stream data efficiently, often yielding large speedups.

What are cache hits and cache misses, and why do they matter?
A cache hit occurs when needed data is already in the CPU’s cache and can be read in a few nanoseconds. A cache miss forces a fetch from slower memory, costing tens to hundreds of nanoseconds. Games do huge volumes of simple math each frame; maximizing hits and minimizing misses has an outsized impact on smoothness and frame time.

What is a cache line and how does alignment affect performance?
Memory is fetched into the cache in fixed-size chunks called cache lines (commonly 64 bytes, but platform-dependent). When you access one value, nearby values in the same line arrive “for free.” If related data fits within a single line, order matters less; if it spans multiple lines due to size or address alignment, you’ll incur extra fetches and slowdowns.

What does “data locality” mean and how can I achieve it?
Data locality means placing data that’s used together close together in memory so it’s loaded in the same cache line. You can improve it by:
  • Storing attributes in contiguous arrays (Structure of Arrays)
  • Processing data in tight loops over those arrays
  • Passing just the arrays a function needs (inputs) and writing back outputs
This aligns memory access with how the CPU prefetches and caches data.

Why does DOD favor arrays and batch functions over object methods?
Arrays guarantee contiguous memory, which the CPU can stream efficiently. Instead of calling Move() on each object (which scatters reads), a single function like MoveAllEnemies(position, direction, velocity) iterates linearly over arrays, yielding far fewer cache misses and better throughput.

How does separating data from logic reduce code complexity?
With DOD, data lives in simple structs/arrays, and logic is written as clear, focused functions that take inputs and produce outputs. You stop modeling inheritance trees and object relationships and instead think in terms of transformations. This makes code easier to reason about, test, and modify.

How does DOD improve extensibility as a project grows?
Adding features becomes a matter of defining the minimal data required and writing the transformation logic. Because you don’t depend on deep hierarchies or cross-cutting object relationships, new features don’t entangle with old ones. The time to add features stays closer to linear instead of growing exponentially.

Is DOD just premature optimization?
No. DOD isn’t micromanaging hot loops after the fact; it’s an architectural approach that naturally aligns data layout with CPU behavior from day one. This avoids late-stage refactors when performance problems surface and ensures features remain performant as scope and dependencies evolve.

What is ECS, and how does it relate to DOD?
ECS (Entity-Component-System) is a design pattern commonly used to implement DOD. Entities are identifiers (e.g., array indices), Components are the data (like position or velocity arrays), and Systems are the logic that transforms that data. ECS encourages contiguous data and batch processing, but DOD doesn’t require ECS—you can apply DOD principles without any specific pattern.
