Book Summary: Game Programming Patterns – Optimisation Patterns

This is the last post of the Game Programming Patterns book summary series. The first and second posts are about some selected patterns from the Gang of Four book. The third one covers the Sequencing Patterns. The fourth post is about Behavioural Patterns. The fifth condenses the Decoupling Patterns section. This post summarises the Optimisation Patterns section.


Book Summary: Game Programming Patterns

  1. Patterns Revisited, Part 1
  2. Patterns Revisited, Part 2
  3. Sequencing Patterns
  4. Behavioural Patterns
  5. Decoupling Patterns
  6. Optimisation Patterns

Optimisation Patterns

These are techniques to make our game use less memory and get its work done in fewer operations and less time.

Data Locality

This is an attempt to arrange objects in memory in the order they will be used so they are pulled into the CPU cache together, minimising cache misses. This optimisation can be overkill, so profile first and make sure cache misses are actually what makes our game slow before committing to it. The approach sacrifices some abstractions, such as inheritance and interfaces, because our classes now have to define how their data is laid out in memory.

Implementations:

  • Store the actual objects in an array instead of references to them. Also make sure we use an array that is laid out in contiguous memory, like the C# Array and not the List<>.
  • Move active objects that need processing to the early array indices so we don't have to iterate over the whole array (see the sketch after this list). Keep the index of the first inactive (equivalently, one past the last active) object. An object activates? Swap it with the first inactive object and increment the index. Deactivates? Swap it with the last active object and decrement the index.
  • When implementing a class, separate frequently accessed data (hot data) from occasionally accessed data (cold data). Keep the hot data directly in the object, and store only a reference to a separate class that holds the cold data.
  • To keep each array tightly packed, don't use polymorphism. Instead, give each concrete class its own array.
  • There are several approaches to defining entities and their components. Components are stored in contiguous arrays according to their types. Then, entities:
    • Entities store pointers to their components.
    • Entities store their components' hash IDs.
    • Entities are not classes at all, just hash IDs. Each component stores the ID of the entity it belongs to, and the functionality of an entity is replaced by an "Entity" component.
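
To make the swapping idea above concrete, here is a minimal C# sketch, assuming a hypothetical Particle struct and ParticleSystem class (the names are illustrative, not code from the book):

```csharp
// Hypothetical particle data kept by value in one contiguous array.
public struct Particle
{
    public float X, Y;
    public float LifeRemaining;
}

public class ParticleSystem
{
    private readonly Particle[] particles; // actual structs, not references
    private int numActive;                 // index of the first inactive particle

    public ParticleSystem(int capacity)
    {
        particles = new Particle[capacity];
    }

    public void Activate(int index)
    {
        if (index < numActive) return; // already active
        // Swap with the first inactive particle so active ones stay packed at the front.
        (particles[numActive], particles[index]) = (particles[index], particles[numActive]);
        numActive++;
    }

    public void Deactivate(int index)
    {
        if (index >= numActive) return; // already inactive
        // Swap with the last active particle and shrink the active range.
        numActive--;
        (particles[index], particles[numActive]) = (particles[numActive], particles[index]);
    }

    public void Update(float deltaTime)
    {
        // Only the packed active range is visited, in memory order.
        for (int i = 0; i < numActive; i++)
        {
            particles[i].LifeRemaining -= deltaTime;
        }
    }
}
```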

Dirty Flag

Avoid recalculating a derived value from a frequently modified primary value when the derived value is not always needed. Just mark the modified primary data as dirty. Recalculate only when the derived data is needed and the primary data is dirty; otherwise, use the cached derived data.

The Dirty Flag pattern is worth using when the primary data changes constantly and recalculating the derived data is expensive. The longer we defer the calculation, the more expensive it will probably be when it finally runs. We must also set the dirty flag correctly every time the primary data changes, because it is almost impossible to tell whether a cache is still valid from the data itself. Memory usage also rises, because the cached derived data has to be stored somewhere.
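
A minimal sketch of the idea, assuming a hypothetical 2D Transform whose matrix is the cached derived data (the class and its fields are made up for illustration):

```csharp
using System;

// Position/rotation are the primary data; the 3x3 matrix is the derived data we cache.
public class Transform
{
    private float x, y, rotation;   // primary data, modified frequently
    private float[] matrix;         // derived data, (pretend it is) expensive to rebuild
    private bool dirty = true;      // does the cached matrix need recomputing?

    public void Move(float dx, float dy)
    {
        x += dx;
        y += dy;
        dirty = true;               // just mark it; don't rebuild the matrix yet
    }

    public void Rotate(float radians)
    {
        rotation += radians;
        dirty = true;
    }

    public float[] GetMatrix()
    {
        if (dirty)
        {
            matrix = BuildMatrix(); // recalculate only when the result is needed
            dirty = false;
        }
        return matrix;              // otherwise reuse the cached result
    }

    private float[] BuildMatrix()
    {
        float cos = MathF.Cos(rotation), sin = MathF.Sin(rotation);
        // Row-major 3x3: rotation followed by translation.
        return new[] { cos, -sin, x, sin, cos, y, 0f, 0f, 1f };
    }
}
```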

Design decisions:

  • When is the dirty flag cleared?
    • When the result is needed. This avoids recalculation entirely if the derived data is never needed, but if the recalculation is too time-consuming it can trigger a noticeable pause.
    • At well-defined checkpoints. We can hide the work behind a loading screen. Just make sure the checkpoints are well distributed.
    • In a background thread that does the cleanup. We need to make the data safe for concurrent access.
  • How fine-grained is the dirty flag?
    • Fine-grained. We only recalculate the data that actually changed.
    • Coarse-grained, treating chunks of data as one unit. If there is fixed overhead per recalculation, we pay it less often.

The Dirty Flag pattern is common in physics engines. If a body hasn't moved for a fixed amount of time, mark it as asleep and skip its physics calculations.

Spatial Partition

This pattern efficiently locates objects by storing them in a data structure organised by their positions. Spatial partitioning is commonly used for collision detection, for locating objects in a Scene Graph, and for storing navigation data for AI agents.
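
As a rough sketch of the simplest variant, a flat grid, here is a hypothetical GridPartition storing Unit objects per cell so that nearby-object queries only scan a few cells (all names here are illustrative):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical unit with a position.
public class Unit
{
    public float X, Y;
}

// A flat grid of cells; each cell lists the units currently inside it.
public class GridPartition
{
    private readonly List<Unit>[,] cells;
    private readonly float cellSize;

    public GridPartition(int width, int height, float cellSize)
    {
        this.cellSize = cellSize;
        cells = new List<Unit>[width, height];
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                cells[x, y] = new List<Unit>();
    }

    public void Add(Unit unit) => cells[CellX(unit.X), CellY(unit.Y)].Add(unit);

    // When a unit moves, pull it out of its old cell and drop it into the new one.
    public void Move(Unit unit, float newX, float newY)
    {
        cells[CellX(unit.X), CellY(unit.Y)].Remove(unit);
        unit.X = newX;
        unit.Y = newY;
        cells[CellX(newX), CellY(newY)].Add(unit);
    }

    // Candidates within `range` of (x, y): only the overlapping cells are scanned,
    // not every unit in the world.
    public IEnumerable<Unit> Nearby(float x, float y, float range)
    {
        for (int cx = CellX(x - range); cx <= CellX(x + range); cx++)
            for (int cy = CellY(y - range); cy <= CellY(y + range); cy++)
                foreach (Unit unit in cells[cx, cy])
                    yield return unit;
    }

    private int CellX(float x) => Math.Clamp((int)(x / cellSize), 0, cells.GetLength(0) - 1);
    private int CellY(float y) => Math.Clamp((int)(y / cellSize), 0, cells.GetLength(1) - 1);
}
```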

Design decisions:

  • Is it flat or hierarchical?
    • Flat. It’s simple, memory usage is constant, and updating a moving object is fast.
    • Hierarchical. Handles varying object sizes and levels of detail better.
  • Do the objects inside affect the partitioning?
    • No, e.g. a flat grid of partitions. Moving objects is easy, but the partitioning might become imbalanced: too many objects crammed into one partition (or partitions left almost empty) defeats the purpose of partitioning.
    • Both the partitioning and the hierarchy are affected by the objects inside, e.g. a kD-tree or binary space partitioning (BSP). The partitioning stays balanced by design and handles varying object sizes better, but recalculating it for moving objects is expensive, so it’s best for static objects.
    • The partitioning is independent of the objects, but the hierarchy isn’t, e.g. an octree. This is a good compromise between the grid-based and binary-search-tree-based approaches.
  • Are the partitions the only place an object lives?
    • Yes. This avoids the memory and bookkeeping overhead of keeping a second collection in sync.
    • No, the objects also live in another collection. Iterating over every object regardless of position is faster that way.

The Spatial Partition patterns and their analogous generic data structures:

  • A flat grid is an array.
  • BSP, kD-tree and bounding volume hierarchy are binary search trees.
  • Quadtree and Octree are tries.

Object Pool

Improve performance and memory usage by reusing objects from a fixed pool instead of allocating memory every time a new object is needed; a minimal sketch follows the list below. An Object Pool is nice to have when:

  • We need to create and destroy objects frequently.
  • Objects are similar in size, and they don’t hold many attributes that need their own dynamic allocation.
  • We need to avoid memory fragmentation, for instance because the program has to run for a long time without stopping.
  • The object encapsulates something that is expensive to reacquire, e.g. a database or network connection.
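
The sketch referenced above, assuming a hypothetical ParticlePool that allocates every particle up front and recycles slots by flagging them in or out of use:

```csharp
// Hypothetical pooled particle: a plain flag marks whether the slot is in use.
public class PooledParticle
{
    public bool InUse;
    public float X, Y, LifeRemaining;
}

public class ParticlePool
{
    private readonly PooledParticle[] pool;

    public ParticlePool(int capacity)
    {
        pool = new PooledParticle[capacity];
        for (int i = 0; i < capacity; i++)
            pool[i] = new PooledParticle(); // the only allocations the pool ever makes
    }

    // Reuse a free slot; returns null when the pool is exhausted
    // (see the mitigations listed further below).
    public PooledParticle Create(float x, float y, float life)
    {
        foreach (PooledParticle particle in pool)
        {
            if (particle.InUse) continue;
            particle.InUse = true;
            particle.X = x;
            particle.Y = y;
            particle.LifeRemaining = life;
            return particle;
        }
        return null;
    }

    public void Update(float deltaTime)
    {
        foreach (PooledParticle particle in pool)
        {
            if (!particle.InUse) continue;
            particle.LifeRemaining -= deltaTime;
            if (particle.LifeRemaining <= 0f)
                particle.InUse = false; // hand the slot back; nothing is deallocated
        }
    }
}
```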

Keep in mind:

  • Every unused object is wasted memory.
  • The number of objects is limited. Mitigations when the pool runs out:
    • Prevent it from happening by design, e.g. limit the number of items the player can hold in the inventory.
    • Simply don’t create the new object, or force-destroy an existing one. This works in particle systems, where a few missing particles don’t affect the experience.
    • Create an overflow pool dynamically.

We can reuse the memory inside unused objects to store a reference to the next available object, forming a free list. The C# StructLayoutAttribute can be used to overlap fields in memory, much like a C++ union. This way, an active object and an inactive object can hold different kinds of data in the same place.
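
A sketch of that union-like trick, assuming hypothetical UnionParticle and FreeListPool types: the free-list index shares its bytes with one of the live fields, so acquiring and releasing a slot become O(1) with no extra memory.

```csharp
using System.Runtime.InteropServices;

// While a slot is unused, the bytes that normally hold LifeRemaining are reused
// to store the index of the next free slot, much like a C++ union.
[StructLayout(LayoutKind.Explicit)]
public struct UnionParticle
{
    // Live data, valid while the slot is in use.
    [FieldOffset(0)] public float LifeRemaining;
    [FieldOffset(4)] public float X;
    [FieldOffset(8)] public float Y;

    // Free-list link, valid only while the slot is unused; overlaps LifeRemaining.
    [FieldOffset(0)] public int NextFree;
}

public class FreeListPool
{
    private readonly UnionParticle[] particles;
    private int firstFree; // head of the free list, -1 when no slot is free

    public FreeListPool(int capacity)
    {
        particles = new UnionParticle[capacity];
        // Chain every slot into the free list up front.
        for (int i = 0; i < capacity; i++)
            particles[i].NextFree = i < capacity - 1 ? i + 1 : -1;
        firstFree = 0;
    }

    // O(1) acquire: pop the head of the free list instead of scanning the array.
    public int Create(float x, float y, float life)
    {
        if (firstFree == -1) return -1;        // pool exhausted
        int index = firstFree;
        firstFree = particles[index].NextFree; // advance the head before overwriting it
        particles[index].LifeRemaining = life;
        particles[index].X = x;
        particles[index].Y = y;
        return index;
    }

    // O(1) release: push the slot back onto the free list.
    public void Destroy(int index)
    {
        particles[index].NextFree = firstFree;
        firstFree = index;
    }
}
```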


That’s all for the Optimisation Patterns section. This post concludes my series of posts summarising the Game Programming Patterns book.
