Appendix B. DOD vs. OOP architecture performance example
This appendix builds a controlled game simulation to compare data-oriented design (DOD) with an object-oriented (OOP) approach by measuring how many enemies each can simulate at a steady 60 fps. The test removes player interaction, adds enemy–enemy collision work so the CPU has real per-enemy work to do, and holds rendering effects constant, then uses an adaptive spawning algorithm: average the frame time over a fixed window, double the spawn rate while the average stays above roughly 59 fps (slightly below 60 to sidestep float-precision pitfalls), and when it dips below, roll back to the last good count and reset the spawn rate. Supporting data structures in GameData track the current enemy count, a history of “good” counts, and per-frame delta times so outliers can be ignored and averages computed reliably, while Balance holds tunables such as the maximum enemy count, velocity, and the averaging-window length.
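The adaptive spawning loop can be sketched as follows. This is a minimal, engine-agnostic C# sketch: the names Balance, GameData, and TryChangeEnemyCount follow the text, but the field names, window size, and thresholds shown here are assumptions, not the book's exact code.

```csharp
using System;
using System.Collections.Generic;

// Tunables (mirrors the Balance structure described in the text; values are assumptions).
public class Balance
{
    public int MaxEnemies = 100_000;
    public int AveragingWindow = 30;  // frames per averaging window
    public float MinFps = 59f;        // slightly below 60 to dodge float precision
}

// Per-run state (mirrors GameData from the text).
public class GameData
{
    public int EnemyCount;
    public int SpawnRate = 1;
    public int LastGoodCount;
    public readonly Queue<float> DeltaTimes = new Queue<float>();
}

public static class Spawner
{
    // Called once per frame with that frame's delta time.
    public static void TryChangeEnemyCount(GameData data, Balance balance, float deltaTime)
    {
        data.DeltaTimes.Enqueue(deltaTime);
        if (data.DeltaTimes.Count < balance.AveragingWindow) return; // window not full yet

        float sum = 0f;
        foreach (float dt in data.DeltaTimes) sum += dt;
        float avgFps = data.DeltaTimes.Count / sum; // frames divided by elapsed seconds
        data.DeltaTimes.Clear();                    // start a fresh window

        if (avgFps >= balance.MinFps)
        {
            // Still hitting the target: remember this count and double the spawn rate.
            data.LastGoodCount = data.EnemyCount;
            data.EnemyCount = Math.Min(data.EnemyCount + data.SpawnRate, balance.MaxEnemies);
            data.SpawnRate *= 2;
        }
        else
        {
            // Dropped below target: roll back to the last good count and reset.
            data.EnemyCount = data.LastGoodCount;
            data.SpawnRate = 1;
        }
    }
}
```

Starting from zero, this produces the 1, 3, 7, 15, … growth described in figure B.4, then settles on the last count that held the target frame rate.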
The OOP version introduces an EnemyOOP MonoBehaviour that owns its own state (position via its transform, direction, velocity) and logic (an Update method for movement and wall bounces, a HandleCollision method for pairwise collision response). A pooled set of EnemyOOP instances is toggled active or inactive as the target count changes, and a per-frame TickOOP pass coordinates collision checks and count adjustments. The DOD version keeps largely the same game flow but stores positions, directions, and other state in tightly packed arrays and applies the same collision and response math over those arrays. Both simulations share the TryChangeEnemyCount logic and pool-management patterns, and a small game harness switches between modes, sets a high target frame rate, runs the appropriate tick each frame, and updates simple UI text.
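The DOD movement pass over packed arrays can be sketched like this. It is a hedged, engine-agnostic sketch: the array names, the shared velocity, and the board bounds are illustrative assumptions, but the shape of the hot loop — sequential iteration over contiguous arrays — is the point the text makes about cache-friendly data.

```csharp
using System;

// DOD-style state: one tightly packed array per field, indexed by enemy id.
// Names and layout here are illustrative; the book's actual code may differ.
public static class EnemiesDod
{
    public static float[] PosX;
    public static float[] PosY;
    public static float[] DirX;
    public static float[] DirY;
    public static float Velocity = 2f;      // shared tunable, as held in Balance
    public const float BoardHalfSize = 10f; // assumed play-area bounds

    public static void Init(int capacity)
    {
        // Preallocate to maximum size up front, as the text recommends for pools.
        PosX = new float[capacity];
        PosY = new float[capacity];
        DirX = new float[capacity];
        DirY = new float[capacity];
        for (int i = 0; i < capacity; i++) { DirX[i] = 1f; DirY[i] = 0f; }
    }

    // Movement plus wall bounce over contiguous arrays: the loop touches
    // sequential memory, so hot data stays in cache and stalls are rare.
    public static void TickMovement(int activeCount, float deltaTime)
    {
        for (int i = 0; i < activeCount; i++)
        {
            PosX[i] += DirX[i] * Velocity * deltaTime;
            PosY[i] += DirY[i] * Velocity * deltaTime;
            if (Math.Abs(PosX[i]) > BoardHalfSize) DirX[i] = -DirX[i]; // bounce off x walls
            if (Math.Abs(PosY[i]) > BoardHalfSize) DirY[i] = -DirY[i]; // bounce off y walls
        }
    }
}
```

Contrast this with the OOP version, where the same math runs inside each EnemyOOP's Update and the per-object state is scattered across the heap.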
Across multiple devices, the DOD implementation consistently supports roughly an order of magnitude more enemies at 60 fps than the OOP one. The performance gap stems from data locality and cache-friendly iteration in DOD: contiguous arrays keep hot data in L1 cache and minimize stalls, while scattered per-object state in OOP increases cache misses and wait time. Practical lessons include preallocating pools to their maximum size, minimizing garbage creation (especially with UI strings), computing fps from per-frame deltas over a fixed window, using squared distances for collision tests, setting the engine’s target frame rate above 60 for headroom, and always validating results on the actual target hardware.
Figure B.1 Screenshots of our simulation running on an iPhone 16 Pro. The left screenshot shows the main menu where we can select to run either the DOD or OOP simulations. The middle screenshot shows the DOD simulation running after maximizing the number of enemies. The rightmost screenshot is the OOP simulation running after maximizing the number of enemies.
Figure B.2 Explanation of how collision detection and response work in our simulation. We use distance to determine if two enemies have collided, then calculate their midpoint and move them in opposite directions.
Figure B.3 The logic we use to maximize the number of enemies on screen while maintaining 60 fps.
Figure B.4 We start the game with an enemy count of 0, and spawn one enemy, for a total of one enemy on the screen. Then we spawn two enemies for a total of three, then four for a total of seven, etc. We continue spawning double the number of enemies until our fps drop below 60. When that happens, we drop our spawn count to one and reduce our enemy count to the last amount above 60 fps. This way, our algorithm should find the maximum number of enemies it can simulate while maintaining 60 fps.
Figure B.5 All our enemies are the same size, so we only need the radius data to calculate whether two enemies are touching. The distance between the centers of two enemies will be twice their radius if they are touching.
Figure B.6 Instead of calculating a square root to get the actual distance, we compare the squared distance against the squared collision threshold.
Figure B.7 Once a collision between two enemies is detected, we push them apart so they no longer overlap, so we don’t mistakenly detect the same pair as colliding again in the next frame.
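Figures B.5–B.7 together describe one complete pairwise check and response, which can be sketched as follows. The radius value, the separation strategy, and the method names here are assumptions for illustration; the squared-distance comparison and the push-apart response follow the captions.

```csharp
using System;

public static class Collision
{
    // All enemies share one radius (figure B.5), so two enemies touch when the
    // distance between their centers is at most twice the radius.
    public const float Radius = 0.5f;
    public const float TouchDistance = 2f * Radius;

    // Squared-distance test (figure B.6): same result as comparing distances,
    // but with no square root in the hot loop.
    public static bool Overlaps(float ax, float ay, float bx, float by)
    {
        float dx = bx - ax, dy = by - ay;
        return dx * dx + dy * dy <= TouchDistance * TouchDistance;
    }

    // Response (figure B.7): push both enemies away from their midpoint until
    // they are exactly TouchDistance apart, so the next frame's check does not
    // re-report the same collision.
    public static void Separate(ref float ax, ref float ay, ref float bx, ref float by)
    {
        float midX = (ax + bx) * 0.5f, midY = (ay + by) * 0.5f;
        float dx = bx - ax, dy = by - ay;
        float dist = MathF.Sqrt(dx * dx + dy * dy);
        if (dist < 1e-6f) { dx = 1f; dy = 0f; dist = 1f; } // coincident centers: pick an axis
        float half = TouchDistance * 0.5f;
        ax = midX - dx / dist * half; ay = midY - dy / dist * half;
        bx = midX + dx / dist * half; by = midY + dy / dist * half;
    }
}
```

Note the square root only appears in the response path, which runs once per actual collision; the test that runs for every pair stays root-free.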
Figure B.8 OOP vs DOD simulation result on four different devices. For each device, the left screen is the OOP simulation, and the right screen is the DOD simulation. The results show that we can simulate roughly 10x more enemies using data-oriented design.
B.6 Conclusion
The best way to see how much data-oriented design can improve our game is through real-world examples. In a simulation built from the game we wrote in Chapters 4 and 5, DOD nets us roughly a 10x performance improvement over OOP. All we did was structure our data in arrays to leverage data locality, just as we learned in Chapters 1, 2, and 3.