Overview

Appendix B. DOD vs. OOP architecture performance example

This appendix builds a controlled game simulation to compare data-oriented design (DOD) with an object-oriented (OOP) approach by measuring how many enemies each can simulate at a steady 60 fps. The test removes player interaction and adds enemy–enemy collision work to keep rendering effects constant, then uses an adaptive spawning algorithm: average the frame rate over a fixed window of frames, double the spawn rate while the average stays above roughly 59 fps (a threshold chosen to sidestep floating-point precision pitfalls), and, when it dips below, roll back to the last good count and reset the spawn rate. Supporting data structures in GameData track the current enemy count, a history of “good” counts, and per-frame delta times so outliers can be ignored and averages computed reliably, while Balance holds tunables such as max enemies, velocity, and the averaging window length.
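As a rough illustration, the doubling-and-rollback search can be sketched in a few lines. The following is a minimal Python stand-in (the book's code is Unity C#); the `avg_fps` callback, the `find_max_enemies` name, and the exact convergence check are assumptions for this sketch, not the book's implementation:

```python
# Minimal sketch of the adaptive spawning search (a Python stand-in for the
# Unity C# version; `avg_fps` is a hypothetical callback reporting the
# windowed average frame rate at a given enemy count).

def find_max_enemies(avg_fps, max_enemies=1_000_000):
    enemy_count = 1   # one enemy already spawned
    spawn_rate = 2    # doubling yields running totals of 1, 3, 7, 15, ...
    good_counts = []  # stack of counts that held ~60 fps
    while True:
        if avg_fps(enemy_count) > 59.0 or enemy_count == 1:
            good_counts.append(enemy_count)
            if enemy_count >= max_enemies:
                return enemy_count
            enemy_count = min(enemy_count + spawn_rate, max_enemies)
            spawn_rate *= 2
        else:
            if spawn_rate == 2:  # the last step added a single enemy: converged
                return good_counts[-1]
            spawn_rate = 1       # retry one at a time from the last good count
            enemy_count = good_counts.pop()
```

Given a frame-rate model that holds 60 fps up to some capacity and dips below it afterward, this search converges on that capacity.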

The OOP version introduces an EnemyOOP MonoBehaviour that owns its state (position via transform, direction, velocity) and logic (Update for movement and wall bounces, a HandleCollision method for pairwise collision/response). A pooled set of EnemyOOP instances is toggled active/inactive as the target count changes, and a board TickOOP pass coordinates collision checks and count adjustments each frame. The DOD version keeps largely the same game flow but stores positions, directions, and other state in tightly packed arrays and applies the same collision and response math over those arrays. Both simulations share the TryChangeEnemyCount logic and pool-management patterns, and a small game harness switches between modes, sets a high target frame rate, runs the appropriate tick per frame, and updates simple UI text.

Across multiple devices, the DOD implementation consistently supports roughly an order of magnitude more enemies at 60 fps than the OOP one. The performance gap stems from data locality and cache-friendly iteration in DOD: contiguous arrays keep hot data in L1 cache and minimize stalls, while scattered per-object state in OOP increases cache misses and wait time. Practical lessons include preallocating pools to their maximum size, minimizing garbage creation (especially with UI strings), computing fps from per-frame deltas over a fixed window, using squared distances for collision tests, setting the engine’s target frame rate above 60 for headroom, and always validating results on the actual target hardware.

Figure B.1 Screenshots of our simulation running on an iPhone 16 Pro. The left screenshot shows the main menu where we can select to run either the DOD or OOP simulations. The middle screenshot shows the DOD simulation running after maximizing the number of enemies. The rightmost screenshot is the OOP simulation running after maximizing the number of enemies.
Figure B.2 Explanation of how collision detection and response work in our simulation. We use distance to determine if two enemies have collided, then calculate their midpoint and move them in opposite directions.
Figure B.3 The logic we use to maximize the number of enemies on screen while maintaining 60 fps.
Figure B.4 We start the game with an enemy count of 0 and spawn one enemy, for a total of one enemy on the screen. Then we spawn two enemies for a total of three, then four for a total of seven, and so on. We continue spawning double the number of enemies until our fps drops below 60. When that happens, we drop our spawn count to one and reduce our enemy count to the last amount above 60 fps. This way, our algorithm should find the maximum number of enemies it can simulate while maintaining 60 fps.
Figure B.5 All our enemies are the same size, so we only need the radius data to calculate whether two enemies are touching. The distance between the centers of two enemies will be twice their radius if they are touching.
Figure B.6 Instead of calculating a square root to get the actual distance, we can compare the squared distance against the squared diameter.
Figure B.7 Once the collision between two enemies is detected, we move them away from each other as if they never collided, so we don’t mistakenly calculate them as having collided again in the next frame.
Figure B.8 OOP vs DOD simulation result on four different devices. For each device, the left screen is the OOP simulation, and the right screen is the DOD simulation. The results show that we can simulate roughly 10x more enemies using data-oriented design.

B.6 Conclusion

The best way to see how much data-oriented design can improve our game is through real-world examples. Building this simulation from the game we wrote in Chapters 4 and 5, we found that DOD nets roughly a 10x performance improvement over OOP. All we did was structure our data in arrays to leverage data locality, just as we learned in Chapters 1, 2, and 3.

FAQ

What is the goal of the DOD vs. OOP simulation and how is performance measured?
The simulation maximizes the number of enemies actively interacting while maintaining 60 fps. It measures the average frame rate over FPSFrameCount frames (e.g., 60) using per-frame delta times. If the average is above the threshold, it increases the enemy count; if it falls below, it rolls back.
How does TryChangeEnemyCount decide when to add or remove enemies?
Every frame it stores dt and decrements FPSFrameCount. When FPSFrameCount hits zero, it computes avgFPS. If avgFPS > 59.0f (or EnemyCount == 1), it records the current EnemyCount in EnemyCountGood, doubles SpawnRate, and increases EnemyCount (clamped to MaxEnemies). Otherwise, it sets SpawnRate = 1 and reverts EnemyCount to the last good value from EnemyCountGood.
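The frame-level bookkeeping this answer describes can be sketched as follows. This is a Python stand-in for the Unity C# version; the class shape and field spellings mirror the names in the text but are illustrative, not the book's code:

```python
# Frame-level sketch of TryChangeEnemyCount-style bookkeeping (illustrative
# Python; field names follow the text's GameData/Balance fields).

WINDOW = 60  # frames per measurement (FPSFrameCount's starting value)

class GameData:
    def __init__(self, max_enemies=10_000):
        self.enemy_count = 1
        self.spawn_rate = 1
        self.max_enemies = max_enemies
        self.delta_times = []             # every frame's dt is kept
        self.frames_left = WINDOW         # counts down to the next decision
        self.enemy_count_good = [0] * 64  # stack of "good" counts
        self.enemy_count_good_count = 0

def try_change_enemy_count(gd, dt):
    gd.delta_times.append(dt)
    gd.frames_left -= 1
    if gd.frames_left > 0:
        return
    gd.frames_left = WINDOW
    window = gd.delta_times[-WINDOW:]
    avg_fps = len(window) / sum(window)
    if avg_fps > 59.0 or gd.enemy_count == 1:
        # push: EnemyCountGood[EnemyCountGoodCount++] = EnemyCount
        gd.enemy_count_good[gd.enemy_count_good_count] = gd.enemy_count
        gd.enemy_count_good_count += 1
        gd.enemy_count = min(gd.enemy_count + gd.spawn_rate, gd.max_enemies)
        gd.spawn_rate *= 2
    else:
        # pop: EnemyCount = EnemyCountGood[--EnemyCountGoodCount]
        gd.spawn_rate = 1
        gd.enemy_count_good_count -= 1
        gd.enemy_count = gd.enemy_count_good[gd.enemy_count_good_count]
```

Feeding it 60 frames at 1/60 s grows the count and doubles the spawn rate; feeding it 60 frames at 1/50 s (50 fps) triggers the rollback branch.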
Why use the EnemyCountGood array and how do the ++/-- operators work with it?
EnemyCountGood stores all “good” counts (at/above 60 fps) so the algorithm can revert when performance dips. Pushing uses post-increment: EnemyCountGood[EnemyCountGoodCount++] = EnemyCount. Popping uses pre-decrement: EnemyCount = EnemyCountGood[--EnemyCountGoodCount]. This treats the array like a stack while respecting zero-based indexing.
Why store all per-frame DeltaTime values instead of just an average?
Keeping an array allows analyzing fastest/slowest frames and excluding outliers. It also avoids floating-point error accumulation from incremental averaging and aligns storage with per-frame data. A generous capacity (e.g., 10,000) prevents accidental overflows in this simulation.
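The text says outliers can be excluded; one simple way to do that (an illustrative choice, not necessarily the book's) is to drop the fastest and slowest frames before averaging:

```python
# Illustrative outlier-excluding average over a window of per-frame deltas.

def average_fps(delta_times, trim=2):
    """Average frame rate over a window, ignoring the `trim` fastest and
    the `trim` slowest frames as outliers."""
    kept = sorted(delta_times)[trim:len(delta_times) - trim]
    return len(kept) / sum(kept)
```

With two spikes and two unusually fast frames in a 60-frame window, the trimmed average still reports the steady 60 fps of the remaining frames.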
Why check against 59.0f instead of exactly 60.0f fps?
A computed average that should be exactly 60 can come out slightly lower (such as 59.9) because of floating-point rounding. Comparing against > 59.0f avoids rejecting valid “60 fps” measurements due to representation error.
How are enemy-to-enemy collisions detected and resolved?
Enemies are circles of equal size. Detection compares the squared distance between centers to the squared diameter, which avoids a square root. On collision, the code computes the midpoint of the two centers, moves both enemies outward until their centers are just beyond one diameter apart (e.g., scaling by 1.01), and sets their directions away from the midpoint so they separate.
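A small Python sketch of that test and response follows (a stand-in for the Unity C# version; the function shape and 2D tuples are assumptions, while the squared-distance test, the midpoint, and the 1.01 push factor follow the text):

```python
import math

# Squared-distance collision test plus midpoint-based separation.

def collide(ax, ay, bx, by, radius):
    dx, dy = bx - ax, by - ay
    diameter = 2.0 * radius
    if dx * dx + dy * dy >= diameter * diameter:
        return None  # not touching: no sqrt was needed
    mx, my = (ax + bx) / 2.0, (ay + by) / 2.0       # midpoint of the centers
    dist = math.sqrt(dx * dx + dy * dy)             # sqrt only on actual hits
    nx, ny = (dx / dist, dy / dist) if dist > 0 else (1.0, 0.0)
    push = radius * 1.01  # each center ends just past one radius from the
                          # midpoint, so the pair is just past one diameter apart
    # New (x, y, dir_x, dir_y) for each enemy, directions away from midpoint.
    return ((mx - nx * push, my - ny * push, -nx, -ny),
            (mx + nx * push, my + ny * push, nx, ny))
```

Separating to just past one diameter keeps the same pair from being detected as colliding again on the next frame.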
What’s the core difference between the DOD and OOP versions, and why is DOD faster?
OOP uses scattered MonoBehaviour objects, leading to poor data locality and more cache misses. DOD stores positions/directions in contiguous arrays, leveraging CPU cache prefetching and L1 locality. In tests, DOD simulated roughly 10× more enemies at 60 fps.
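To make the layout difference concrete, here is a deliberately simplified contrast in Python (Python itself doesn't expose cache behavior; in a compiled engine, the packed arrays of the second version are what give contiguous, prefetch-friendly memory). Both names are illustrative:

```python
from array import array  # packed, typed storage, one field per array

class EnemyOOP:
    """OOP-style: each enemy is its own heap object with scattered state."""
    def __init__(self, x, vx):
        self.x, self.vx = x, vx

    def tick(self, dt):
        self.x += self.vx * dt

def tick_dod(xs, vxs, dt):
    """DOD-style: one tight pass over packed, parallel arrays of fields."""
    for i in range(len(xs)):
        xs[i] += vxs[i] * dt
```

Both produce identical positions; the difference that matters at scale is how the data is laid out in memory while the loop runs.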
How are object pools used in both simulations?
Both preallocate pools up to Balance.MaxEnemies to avoid runtime allocations. The OOP pool holds EnemyOOP instances and toggles GameObject active states. The DOD pool holds plain GameObjects plus a parallel bool array (m_enemyActiveDOD) to track which are visible/active, updating transforms from array data each frame.
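A toy sketch of this pooling pattern (illustrative Python; in the book, the pooled objects are Unity GameObjects and the flag array is m_enemyActiveDOD):

```python
class EnemyPool:
    """Preallocate every enemy slot once; changing the live count only flips
    per-slot active flags, so no allocation happens at runtime."""
    def __init__(self, max_enemies):
        self.objects = [object() for _ in range(max_enemies)]  # stand-ins for pooled GameObjects
        self.active = [False] * max_enemies                    # parallel visibility flags

    def set_count(self, n):
        # Activate exactly the first n slots and deactivate the rest.
        for i in range(len(self.objects)):
            self.active[i] = i < n
```

Growing or shrinking the live count never resizes the pool, which is what keeps the per-frame garbage at zero.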
Why set Application.targetFrameRate to 120 for this test?
To ensure the game can run above 60 fps while searching for the maximum sustainable enemy count. A lower target or VSync cap could mask the true headroom and distort results.
How is dynamic enemy count applied across the code?
Loops that process enemies use gameData.EnemyCount instead of a fixed balance value, ensuring movement, collisions, and visual activation all scale with the current count. When the count changes, the board shows/hides pooled objects and updates the UI accordingly.
