OOP is dead, long live OOP

posted in 22 Racing Series for project 22 Racing Series
Published October 19, 2018

edit:

Seeing this has been linked outside of game-development circles: "ECS" (this wikipedia page is garbage, btw -- it conflates EC-frameworks and ECS-frameworks, which aren't the same...) is a faux-pattern circulated within game-dev communities. It's basically a version of the relational model, where "entities" are just IDs that represent a formless object, "components" are rows in specific tables that reference an ID, and "systems" are procedural code that can modify the components. This "pattern" is always posed as a solution to over-use of inheritance, without mentioning that over-use of inheritance is actually against OOP guidelines in the first place. Hence the rant. This isn't me claiming the "one true way" to write software; it's about getting people to actually look at existing design guidelines.

Inspiration

This blog post is inspired by Aras Pranckevičius' recent publication of a talk aimed at junior programmers, designed to get them to come to terms with new "ECS" architectures. Aras follows the typical pattern (explained below), where he shows some terrible OOP code and then shows that the relational model is a great alternative solution (but calls it "ECS" instead of relational). This is not a swipe at Aras at all - I'm a fan of his work and commend him on the great presentation! The reason I'm picking on his presentation in particular instead of the hundred other ECS posts that have been made on the interwebs, is because he's gone through the effort of actually publishing a git repository to go along with his presentation, which contains a simple little "game" as a playground for demonstrating different architecture choices. This tiny project makes it easy for me to actually, concretely demonstrate my points, so, thanks Aras!

You can find Aras' slides at http://aras-p.info/texts/files/2018Academy - ECS-DoD.pdf and the code at https://github.com/aras-p/dod-playground.

I'm not going to analyse the final ECS architecture from that talk (yet?), but I'm going to focus on the straw-man "bad OOP" code from the start. I'll show what it would look like if we actually fix all of the OOD rule violations.
Spoiler: fixing the OOD violations actually results in a similar performance improvement to Aras' ECS conversion, plus it actually uses less RAM and requires fewer lines of code than the ECS version!
TL;DR: Before you decide that OOP is shit and ECS is great, stop and learn OOD (to know how to use OOP properly) and learn relational (to know how to use ECS properly too).

I've been a long-time ranter in many "ECS" threads on the forum, partly because I don't think it deserves to exist as a term (spoiler: it's just an ad-hoc version of the relational model), but mostly because almost every single blog, presentation, or article that promotes the "ECS" pattern follows the same structure:

  1. Show some terrible OOP code, which has a terribly flawed design based on an over-use of inheritance (and incidentally, a design that breaks many OOD rules).
  2. Show that composition is a better solution than inheritance (and don't mention that OOD actually teaches this same lesson).
  3. Show that the relational model is a great fit for games (but call it "ECS").

This structure grinds my gears because:
(A) it's a straw-man argument -- it's apples to oranges (bad code vs good code) -- which just feels dishonest, even if it's unintentional and not actually required to show that your new architecture is good,
but more importantly:
(B) it has the side effect of suppressing knowledge and unintentionally discouraging readers from interacting with half a century of existing research. The relational model was first written about in the 1960's. Through the 70's and 80's this model was refined extensively. There are common beginner questions like "which class should I put this data in?", which are often answered in vague terms like "you just need to gain experience and you'll know by feel"... but in the 70's this question was extensively pondered and solved in the general case in formal terms; it's called database normalization. By ignoring existing research and presenting ECS as a completely new and novel solution, you're hiding this knowledge from new programmers.

Object oriented programming dates back just as far, if not further (work in the 1950's began to explore the style)! However, it was in the 1990's that OO became a fad - hyped, viral and very quickly, the dominant programming paradigm. A slew of new OO languages exploded in popularity, including Java and (the standardized version of) C++. However, because it was a hype-train, everyone needed to know this new buzzword to put on their resume, yet no one really grokked it. These new languages had added a lot of OO features as keywords -- class, virtual, extends, implements -- and I would argue that it's at this point that OO split into two distinct entities with a life of their own.
I will refer to the use of these OO-inspired language features as "OOP", and the use of OO-inspired design/architecture techniques as "OOD". Everyone picked up OOP very quickly. Schools taught OO classes that were efficient at churning out new OOP programmers.... yet knowledge of OOD lagged behind.

I argue that code that uses OOP language features, but does not follow OOD design rules is not OO code. Most anti-OOP rants are eviscerating code that is not actually OO code.
OOP code has a very bad reputation -- in part, I assert, because most OOP code does not follow OOD rules and thus isn't actually "true" OO code.

Background

As mentioned above, the 1990's was the peak of the "OO fad", and it's during this time that "bad OOP" was probably at its worst. If you studied OOP during this time, you probably learned "The 4 pillars of OOP":

  • Abstraction
  • Encapsulation
  • Polymorphism
  • Inheritance

I'd prefer to call these "4 tools of OOP" rather than 4 pillars. These are tools that you can use to solve problems. Simply learning how a tool works is not enough though, you need to know when you should be using them... It's irresponsible for educators to teach people a new tool without also teaching them when it's appropriate to use each of them. In the early 2000's, there was a push-back against the rampant misuse of these tools, a kind of second-wave of OOD thought. Out of this came the SOLID mnemonic to use as a quick way to evaluate a design's strength. Note that most of these bits of advice were actually widely circulated in the 90's, but didn't yet have the cool acronym to cement them as the five core rules...

  • Single responsibility principle. Every class should have one reason to change. If class "A" has two responsibilities, create new classes "B" and "C" to handle each of them in isolation, and then compose "A" out of "B" and "C".
  • Open/closed principle. Software changes over time (i.e. maintenance is important). Try to put the parts that are likely to change into implementations (i.e. concrete classes) and build interfaces around the parts that are unlikely to change (e.g. abstract base classes).
  • Liskov substitution principle. Every implementation of an interface needs to 100% comply with the requirements of that interface. i.e. any algorithm that works on the interface should continue to work for every implementation.
  • Interface segregation principle. Keep interfaces as small as possible, in order to ensure that each part of the code "knows about" as little of the code-base as possible. i.e. avoid unnecessary dependencies. This is also just good advice in C++, where compile times suck if you don't follow it.
  • Dependency inversion principle. Instead of having two concrete implementations communicate directly (and depend on each other), they can usually be decoupled by formalizing their communication interface as a third class that acts as an interface between them. This could be an abstract base class that defines the method calls used between them, or even just a POD struct that defines the data passed between them.
  • Not included in the SOLID acronym, but I would argue just as important, is the:
    Composite reuse principle. Composition is the right default™. Inheritance should be reserved for use when it's absolutely required.

This gives us SOLID-C(++)?

From now on, I'll refer to these by their three letter acronyms -- SRP, OCP, LSP, ISP, DIP, CRP...

A few other notes:

  • In OOD, interfaces and implementations are ideas that don't map to any specific OOP keywords. In C++, we often create interfaces with abstract base classes and virtual functions, and then implementations inherit from those base classes... but that is just one specific way to achieve the idea of an interface. In C++, we can also use PIMPL, opaque pointers, duck typing, typedefs, etc... You can create an OOD design and then implement it in C, where there aren't any OOP language keywords! So when I'm talking about interfaces here, I'm not necessarily talking about virtual functions -- I'm talking about the idea of implementation hiding. Interfaces can be polymorphic, but most often they are not! A good use for polymorphism is rare, but interfaces are fundamental to all software.
    • As hinted above, if you create a POD structure that simply stores some data to be passed from one class to another, then that struct is acting as an interface - it is a formal data definition.
    • Even if you just make a single class in isolation with a public and a private section, everything in the public section is the interface and everything in the private section is the implementation.
  • Inheritance actually has (at least) two types -- interface inheritance, and implementation inheritance.
    • In C++, interface inheritance includes abstract-base-classes with pure-virtual functions, PIMPL, conditional typedefs. In Java, interface inheritance is expressed with the implements keyword.
    • In C++, implementation inheritance occurs any time a base class contains anything besides pure-virtual functions. In Java, implementation inheritance is expressed with the extends keyword.
    • OOD has a lot to say about interface-inheritance, but implementation-inheritance should usually be treated as a bit of a code smell!

And lastly I should probably give a few examples of terrible OOP education and how it results in bad code in the wild (and OOP's bad reputation).

  1. When you were learning about hierarchies / inheritance, you probably had a task something like:
    Let's say you have a university app that contains a directory of Students and Staff. We can make a Person base class, and then a Student class and a Staff class that inherit from Person!
    Nope, nope nope. Let me stop you there. The unspoken sub-text beneath the LSP is that class-hierarchies and the algorithms that operate on them are symbiotic. They're two halves of a whole program. OOP is an extension of procedural programming, and it's still mainly about those procedures. If we don't know what kinds of algorithms are going to be operating on Students and Staff (and which algorithms would be simplified by polymorphism) then it's downright irresponsible to dive in and start designing class hierarchies. You have to know the algorithms and the data first.
  2. When you were learning about hierarchies / inheritance, you probably had a task something like:
    Let's say you have a shape class. We could also have squares and rectangles as sub-classes. Should we have square is-a rectangle, or rectangle is-a square?
    This is actually a good one to demonstrate the difference between implementation-inheritance and interface-inheritance.
    • If you're using the implementation-inheritance mindset, then the LSP isn't on your mind at all and you're only thinking practically about trying to reuse code using inheritance as a tool.
      From this perspective, the following makes perfect sense:
      struct Square { int width; }; struct Rectangle : Square { int height; };
      A square just has width, while rectangle has a width + height, so extending the square with a height member gives us a rectangle!
      • As you might have guessed, OOD says that doing this is (probably) wrong. I say probably because you can argue over the implied specifications of the interface here... but whatever.
        A square always has the same height as its width, so from the square's interface, it's completely valid to assume that its area is "width * width".
        By inheriting from square, the rectangle class (according to the LSP) must obey the rules of square's interface. Any algorithm that works correctly with a square, must also work correctly with a rectangle.
      • Take the following algorithm: std::vector<Square*> shapes; int area = 0; for(auto s : shapes) area += s->width * s->width;
        This will work correctly for squares (producing the sum of their areas), but will not work for rectangles.
        Therefore, Rectangle violates the LSP rule.
    • If you're using the interface-inheritance mindset, then neither Square or Rectangle will inherit from each other. The interface for a square and rectangle are actually different, and one is not a super-set of the other.
    • So OOD actually discourages the use of implementation-inheritance. As mentioned before, if you want to re-use code, OOD says that composition is the right way to go!
      • For what it's worth though, the correct version of the above (bad) implementation-inheritance hierarchy code in C++ is:
        struct Shape { virtual int area() const = 0; };
        struct Square : public virtual Shape { virtual int area() const { return width * width; }; int width; };
        struct Rectangle : private Square, public virtual Shape { virtual int area() const { return width * height; }; int height; };
        • "public virtual" means "implements" in Java. For use when implementing an interface.
        • "private" allows you to extend a base class without also inheriting its interface -- in this case, Rectangle is-not-a Square, even though it's inherited from it.
      • I don't recommend writing this kind of code, but if you do like to use implementation-inheritance, this is the way that you're supposed to be doing it!

TL;DR - your OOP class told you what inheritance was. Your missing OOD class should have told you not to use it 99% of the time!

Entity / Component frameworks

With all that background out of the way, let's jump into Aras' starting point -- the so called "typical OOP" starting point.
Actually, one last gripe -- Aras calls this code "traditional OOP", which I object to. This code may be typical of OOP in the wild, but as above, it breaks all sorts of core OO rules, so it should not at all be considered traditional.

I'm going to start from the earliest commit before he starts fixing the design towards "ECS": "Make it work on Windows again" 3529f232510c95f53112bbfff87df6bbc6aa1fae

// ------------------------------------------------------------------------------------------------- 
// super simple "component system" 
class GameObject; 
class Component; 
typedef std::vector<Component*> ComponentVector; 
typedef std::vector<GameObject*> GameObjectVector; 

// Component base class. Knows about the parent game object, and has some virtual methods. 
class Component 
{ 
public:    
    Component() : m_GameObject(nullptr) {}
    virtual ~Component() {}        
    virtual void Start() {}    
    virtual void Update(double time, float deltaTime) {}

    const GameObject& GetGameObject() const { return *m_GameObject; }   
    GameObject& GetGameObject() { return *m_GameObject; }    
    void SetGameObject(GameObject& go) { m_GameObject = &go; }   
    bool HasGameObject() const { return m_GameObject != nullptr; } 
    
private:    
    GameObject* m_GameObject; 
}; 

// Game object class. Has an array of components.  
class GameObject 
{ 
public:    
    GameObject(const std::string&& name) : m_Name(name) { }    
    ~GameObject()    
    {        
        // game object owns the components; destroy them when deleting the game object        
        for (auto c : m_Components) 
            delete c;    
    }    
    
    // get a component of type T, or null if it does not exist on this game object
    template<typename T>
    T* GetComponent()
    {        
        for (auto i : m_Components)        
        {            
            T* c = dynamic_cast<T*>(i);            
            if (c != nullptr)                
                return c;        
        }        
        
        return nullptr;    
    }    
    
    // add a new component to this game object    
    void AddComponent(Component* c)    
    {        
        assert(!c->HasGameObject());
        c->SetGameObject(*this);        
        m_Components.emplace_back(c);    
    }        
    
    void Start() 
    { 
        for (auto c : m_Components) 
            c->Start(); 
    }    
    
    void Update(double time, float deltaTime) 
    { 
        for (auto c : m_Components)
            c->Update(time, deltaTime); 
    }     
    
private:    
    std::string m_Name;    
    ComponentVector m_Components; 
}; 

// The "scene": array of game objects. 
static GameObjectVector s_Objects; 

// Finds all components of given type in the whole scene 
template<typename T> static ComponentVector FindAllComponentsOfType() 
{
    ComponentVector res;    
    for (auto go : s_Objects)    
    {        
        T* c = go->GetComponent<T>();        
        if (c != nullptr)            
            res.emplace_back(c);    
    }    
    
    return res; 
} 

// Find one component of given type in the scene (returns first found one) 
template<typename T> static T* FindOfType() 
{    
    for (auto go : s_Objects)    
    {        
        T* c = go->GetComponent<T>();        
        if (c != nullptr)            
            return c;    
    }    
    
    return nullptr; 
}

Ok, 100 lines of code is a lot to dump at once, so let's work through what this is... Another bit of background is required -- it was popular for games in the 90's to use inheritance to solve all their code re-use problems. You'd have an Entity, extended by Character, extended by Player and Monster, etc... This is implementation-inheritance, as described earlier (a code smell), and it seems like a good idea to begin with, but eventually results in a very inflexible code-base. Hence OOD's "composition over inheritance" rule, mentioned above. So, in the 2000's, composition over inheritance became popular, and gamedevs started writing this kind of code instead.

What does this code do? Well, nothing good :D

To put it in simple terms, this code is re-implementing the existing language feature of composition as a runtime library instead of a language feature. You can think of it as if this code is actually constructing a new meta-language on top of C++, and a VM to run that meta-language on. In Aras' demo game, this code is not required (we'll soon delete all of it!) and only serves to reduce the game's performance by about 10x.

What does it actually do though? This is an "Entity/Component" framework (sometimes confusingly called an "Entity/Component system") -- but completely different to an "Entity Component System" framework (which are never called "Entity Component System systems" for obvious reasons). It formalizes several "EC" rules:

  • the game will be built out of featureless "Entities" (called GameObjects in this example), which themselves are composed out of "Components".
  • GameObjects fulfill the service locator pattern - they can be queried for a child component by type.
  • Components know which GameObject they belong to - they can locate sibling components by querying their parent GameObject.
  • Composition may only be one level deep (Components may not own child components, GameObjects may not own child GameObjects).
  • A GameObject may only have one component of each type (some frameworks enforced this, others did not).
  • Every component (probably) changes over time in some unspecified way - so the interface includes "virtual void Update".
  • GameObjects belong to a scene, which can perform queries over all GameObjects (and thus also over all Components).

This kind of framework was very popular in the 2000's, and though restrictive, proved flexible enough to power countless games from that time and still today.

However, it's not required. Your programming language already contains support for composition as a language feature - you don't need a bloated framework to access it... Why do these frameworks exist then? Well to be fair, they enable dynamic, runtime composition. Instead of GameObject types being hard-coded, they can be loaded from data files. This is great to allow game/level designers to create their own kinds of objects... However, in most game projects, you have a very small number of designers on a project and a literal army of programmers, so I would argue it's not a key feature. Worse than that though, it's not even the only way that you could implement runtime composition! For example, Unity is based on C# as a "scripting language", and many other games use alternatives such as Lua -- your designer-friendly tool can generate C#/Lua code to define new game-objects, without the need for this kind of bloated framework! We'll re-add this "feature" in a later follow-up post, in a way that doesn't cost us a 10x performance overhead...

Let's evaluate this code according to OOD:

  • GameObject::GetComponent uses dynamic_cast. Most people will tell you that dynamic_cast is a code smell - a strong hint that something is wrong. I would say that it indicates that you have an LSP violation on your hands -- you have some algorithm that's operating on the base interface, but it demands to know about different implementation details. That's the specific reason that it smells.
  • GameObject is kind of ok if you imagine that it's fulfilling the service locator pattern.... but going beyond OOD critique for a moment, this pattern creates implicit links between parts of the project, and I feel (without a wikipedia link to back me up with comp-sci knowledge) that implicit communication channels are an anti-pattern and explicit communication channels should be preferred. This same argument applies to bloated "event frameworks" that sometimes appear in games...
  • I would argue that Component is a SRP violation because its interface (virtual void Update(time)) is too broad. The use of "virtual void Update" is pervasive within game development, but I'd also say that it is an anti-pattern. Good software should allow you to easily reason about the flow of control, and the flow of data. Putting every single bit of gameplay code behind a "virtual void Update" call completely and utterly obfuscates both the flow of control and the flow of data. IMHO, invisible side effects, a.k.a. action at a distance, is the most common source of bugs, and "virtual void Update" ensures that almost everything is an invisible side-effect.
  • Even though the goal of the Component class is to enable composition, it's doing so via inheritance, which is a CRP violation.
  • The one good part is that the example game code is bending over backwards to fulfill the SRP and ISP rules -- it's split into a large number of simple components with very small responsibilities, which is great for code re-use.
    However, it's not great at DIP -- many of the components do have direct knowledge of each other.

So, all of the code that I've posted above can actually just be deleted. That whole framework. Delete GameObject (aka Entity in other frameworks), delete Component, delete FindOfType. It's all part of a useless VM that's breaking OOD rules and making our game terribly slow.

Frameworkless composition (AKA using the features of the #*@!ing programming language)

If we delete our composition framework and don't have a Component base class, how will our GameObjects manage to use composition and be built out of Components? As hinted in the heading, instead of writing that bloated VM and then writing our GameObjects on top of it in our weird meta-language, let's just write them in C++, because we're #*@!ing game programmers and that's literally our job.

Here's the commit where the Entity/Component framework is deleted: https://github.com/hodgman/dod-playground/commit/f42290d0217d700dea2ed002f2f3b1dc45e8c27c
Here's the original version of the source code: https://github.com/hodgman/dod-playground/blob/3529f232510c95f53112bbfff87df6bbc6aa1fae/source/game.cpp
Here's the modified version of the source code: https://github.com/hodgman/dod-playground/blob/f42290d0217d700dea2ed002f2f3b1dc45e8c27c/source/game.cpp

The gist of the changes is:

  • Removing ": public Component" from each component type.
  • I add a constructor to each component type.
    • OOD is about encapsulating the state of a class, but since these classes are so small/simple, there's not much to hide -- the interface is a data description. However, one of the main reasons that encapsulation is a core pillar is that it allows us to ensure that class invariants are always true... or in the event that an invariant is violated, you hopefully only need to inspect the encapsulated implementation code in order to find your bug. In this example code, it's worth us adding the constructors to enforce a simple invariant -- all values must be initialized.
  • I rename the overly generic "Update" methods to reflect what they actually do -- UpdatePosition for MoveComponent and ResolveCollisions for AvoidComponent.
  • I remove the three hard-coded blocks of code that resemble a template/prefab -- code that creates a GameObject containing specific Component types, and replace it with three C++ classes.
  • Fix the "virtual void Update" anti-pattern.
  • Instead of components finding each other via the service locator pattern, the game objects explicitly link them together during construction.

The objects

So, instead of this "VM" code:

    // create regular objects that move    
    for (auto i = 0; i < kObjectCount; ++i)    
    {        
        GameObject* go = new GameObject("object");        
        
        // position it within world bounds        
        PositionComponent* pos = new PositionComponent();        
        pos->x = RandomFloat(bounds->xMin, bounds->xMax);        
        pos->y = RandomFloat(bounds->yMin, bounds->yMax);        
        go->AddComponent(pos);        
        
        // setup a sprite for it (random sprite index from first 5), and initial white color
        SpriteComponent* sprite = new SpriteComponent();        
        sprite->colorR = 1.0f;        
        sprite->colorG = 1.0f;        
        sprite->colorB = 1.0f;        
        sprite->spriteIndex = rand() % 5;        
        sprite->scale = 1.0f;        
        go->AddComponent(sprite);        
        
        // make it move        
        MoveComponent* move = new MoveComponent(0.5f, 0.7f);        
        go->AddComponent(move);        
        
        // make it avoid the bubble things        
        AvoidComponent* avoid = new AvoidComponent();        
        go->AddComponent(avoid);        
        s_Objects.emplace_back(go);    
    }

We now have this normal C++ code:

struct RegularObject 
{ 
    PositionComponent pos; 
    SpriteComponent sprite; 
    MoveComponent move; 
    AvoidComponent avoid;    
    
    RegularObject(const WorldBoundsComponent& bounds)
        // position it within world bounds
        : pos(RandomFloat(bounds.xMin, bounds.xMax),
              RandomFloat(bounds.yMin, bounds.yMax))
        // setup a sprite for it (random sprite index from first 5), and initial white color
        , sprite(1.0f, 1.0f, 1.0f, rand() % 5, 1.0f)
        , move(0.5f, 0.7f)
    { }
};

...
    
// create regular objects that move 
regularObject.reserve(kObjectCount); 
for (auto i = 0; i < kObjectCount; ++i) 
    regularObject.emplace_back(bounds);

The algorithms

Now the other big change is in the algorithms. Remember at the start when I said that interfaces and algorithms were symbiotic, and both should impact the design of the other? Well, the "virtual void Update" anti-pattern is also an enemy here. The original code has a main loop algorithm that consists of just:

    // go through all objects    
    for (auto go : s_Objects)    
    {        
    	// Update all their components        
        go->Update(time, deltaTime);
    }

You might argue that this is nice and simple, but IMHO it's so, so bad. It's completely obfuscating both the flow of control and the flow of data within the game. If we want to be able to understand our software, if we want to be able to maintain it, if we want to be able to bring on new staff, if we want to be able to optimise it, or if we want to be able to make it run efficiently on multiple CPU cores, we need to be able to understand both the flow of control and the flow of data. So "virtual void Update" can die in a fire.

Instead, we end up with a more explicit main loop that makes the flow of control much easier to reason about (the flow of data is still obfuscated here; we'll get around to fixing that in later commits):

// Update all positions 
for (auto& go : s_game->regularObject) 
{ 
	UpdatePosition(deltaTime, go, s_game->bounds.wb); 
} 

for (auto& go : s_game->avoidThis) 
{ 
	UpdatePosition(deltaTime, go, s_game->bounds.wb); 
} 

// Resolve all collisions 
for (auto& go : s_game->regularObject) 
{ 
	ResolveCollisions(deltaTime, go, s_game->avoidThis); 
}

The downside of this style is that for every single new object type that we add to the game, we have to add a few lines to our main loop. I'll address / solve this in a future blog in this series.

Performance

There are still a lot of outstanding OOD violations, some bad design choices, and lots of optimization opportunities remaining, but I'll get to them with the next blog in this series. As it stands at this point though, the "fixed OOD" version either almost matches or beats the final "ECS" code from the end of the presentation... And all we did was take the bad faux-OOP code and make it actually obey the rules of OOP (and delete 100 lines of code)!

[Chart: performance comparison of the different versions]

Next steps

There's much more ground that I'd like to cover here, including solving the remaining OOD issues, immutable objects (functional style programming) and the benefits it can bring to reasoning about data flows, message passing, applying some DOD reasoning to our OOD code, applying some relational wisdom to our OOD code, deleting those "entity" classes that we ended up with and having purely components-only, different styles of linking components together (pointers vs handles), real world component containers, catching up to the ECS version with more optimization, and then further optimization that wasn't also present in Aras' talk (such as threading / SIMD). No promises on the order that I'll get to these, or if, or when... :D


Comments

agleed

Maybe you're going to address this in the next part you mentioned, but languages could really use more neat tools to avoid all the boilerplate you have to write and read.

With methods like update and generic things like 'GameObject' you're tempted because you never have to touch that code again, and the codebase 'magically' accounts for everything when you add a new component type. And the 'main loop' that just calls the updates is never a real source of errors, as opposed to the specific style where you can (and this happens to me a ton) actually just forget to add the actual function call which may cost you a precious sanity on a bad day.

I guess a good metaprogramming language and a way to 'tag' functions/methods and variables/attributes (templates certainly aren't this) so you can look them up in the metaprogramming language later is all you'd need there, but C++ doesn't have it...

As you pointed out, the thing about 'static so a programmer has to change it' vs 'dynamic so designers can change it with a tool' is not really an issue, considering the tool can theoretically spit out generated code in whatever language you need. But again the ecosystem bites you, because actually doing that is a lot more complicated (if you want to be able to change stuff while the game/engine is running) than writing something data-driven and loading/reloading a bunch of text files instead.

October 08, 2018 02:28 PM
JTippetts

I'd upvote this even more if I could. Thanks, Hodgman, you're a beautiful person for going above and beyond like this.

October 08, 2018 02:53 PM
Oberon_Command

The trouble is that pretty much everyone I've talked to who learned OO in university learned the "bad" OO - and given that I've spoken to some fairly recent graduates, this is still ongoing. What do we need to do to remedy this? Do we need to get actual programmers applying to be "intro to OOD" guest lecturers?

October 08, 2018 02:56 PM
swiftcoder

Every "OO" codebase I've had the pleasure to work on in the last 6 years in the software industry has been a disaster of every form of "bad OO". And it hasn't been uncommon for me to be the only person on the project who has any recognition of that fact. Outside of GDNet I know... maybe 8-10 total software engineers who could name the issues with "java-style OO" that Hodgman lays out.

I'm not sure it's even feasible to tackle the university pipeline when so much of the software industry doesn't take this stuff as a given.

October 08, 2018 03:33 PM
Luhan M.

As a beginner, the only thing I can say is thank you for bringing this kind of content.

October 08, 2018 04:00 PM
Oberon_Command
45 minutes ago, swiftcoder said:

Every "OO" codebase I've had the pleasure to work on in the last 6 years in the software industry has been a disaster of every form of "bad OO". And it hasn't been uncommon for me to be the only person on the project who has any recognition of that fact. Outside of GDNet I know... maybe 8-10 total software engineers who could name the issues with "java-style OO" that Hodgman lays out.

I'm not sure it's even feasible to tackle the university pipeline when so much of the software industry doesn't take this stuff as a given.

Maybe we just need a snappy new name. Natural languages being what they are, the meanings of words tend to evolve; if enough people agree that OO refers to what we're all calling "bad OO" in this thread, then that's what "OO" means, regardless of what we think of the matter. :(

October 08, 2018 04:21 PM
Krohm

Woah! So, is ECS still trending? That's just sad.

October 08, 2018 04:45 PM
ozirus

Thank you for this very interesting article which points at a lot of valid concerns about poor OOP/OOD usage.

I understand it is still a work in progress, but I am a bit concerned about cache utilization in the currently proposed solution versus the ECS / relational model one.
I might have missed some details, but from a quick glance at the code, it looks like game objects such as AvoidThis are laid out sequentially in memory.


 std::vector<AvoidThis> avoidThis;

and iterated to update positions like this:


for (auto& go : s_game->avoidThis)
{
    UpdatePosition(deltaTime, go, s_game->bounds.wb);
}

(and once again later to update the sprite data.)

With a definition like this:


struct AvoidThis
{
    AvoidThisComponent avoid; // 1 float = 4 bytes
    PositionComponent pos; // 2 floats = 8 bytes
    MoveComponent move; // 2 floats = 8 bytes
    SpriteComponent sprite; // 3 floats + 1 int + 1 float = 20 bytes
};

and with 64-byte cache lines, we can fit fewer than 2 game objects per cache line.
To update the positions, we would only need the PositionComponent and the MoveComponent (16 bytes), allowing 4 game objects per cache line.

Am I missing something? Is this something you intend to work on?
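For concreteness, a hot/cold split along those lines (sketch only; the names here are mine, not from the actual repo) could look like:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Keep only the data touched by the position update in contiguous
// arrays, so a 64-byte cache line holds 4 (pos, move) pairs instead
// of fewer than 2 whole 40-byte game objects.
struct PositionComponent { float x, y; };  // 8 bytes
struct MoveComponent     { float vx, vy; }; // 8 bytes

static_assert(sizeof(PositionComponent) + sizeof(MoveComponent) == 16,
              "4 hot pairs fit in one 64-byte cache line");

struct AvoidThisHotData
{
    std::vector<PositionComponent> pos;  // hot: read + written every frame
    std::vector<MoveComponent>     move; // hot: read every frame
    // cold data (sprite, avoidance radius) would live elsewhere
};

void UpdatePositions(AvoidThisHotData& d, float deltaTime)
{
    for (std::size_t i = 0; i < d.pos.size(); ++i)
    {
        d.pos[i].x += d.move[i].vx * deltaTime;
        d.pos[i].y += d.move[i].vy * deltaTime;
    }
}
```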

Another concern I have is about the disappearance of the "Systems" (compared to ECS). By bringing the behavior closer to the data (i.e. by moving the update functions back into the components), you get further away from the "Where there is one, there are many" principle, making it more difficult to optimize with knowledge at a global level (e.g. using spatial partitioning for collision avoidance).

I am also a bit curious about the difference in memory size compared to ECS. Part of it lies in the extra entity sets (std::vector<EntityID>) maintained per system (which I think would also be needed in the OOD version if disjoint sets were not sufficient to solve the problem), but the OOD solution doesn't store a name per entity while the ECS one does. Doesn't this explain a good part of the difference?

Considering the amount of improvements you suggest at the end of the article, I am quite confident you are already aware of these issues and am eager to read the rest of the series.

Thanks again for a great article.

October 08, 2018 04:53 PM
Iltis

A problem with your take on doing ECS-like composition in OOP:

Refering to this:

Quote

The implementation of an - let's call it - *entity*:


struct RegularObject
{
	PositionComponent pos;
	SpriteComponent sprite;
	MoveComponent move;
	AvoidComponent avoid;
	
    RegularObject(const WorldBoundsComponent& bounds)
		: move(0.5f, 0.7f)
		// position it within world bounds
		, pos(RandomFloat(bounds.xMin, bounds.xMax),
		      RandomFloat(bounds.yMin, bounds.yMax))
		// setup a sprite for it (random sprite index from first 5), and initial white color
		, sprite(1.0f,
		         1.0f,
		         1.0f,
		         rand() % 5,
		         1.0f)
	{
	}
};

The composition of the entity is fixed at compile time. Using pointers or booleans to invalidate a component, it is possible to enable and disable them at runtime.

However it is impossible to add or remove components other than those specified -> Runtime Composability is gone.

  • You have to add a member for every component you want your entity to ever have over the course of the game at compile-time - or you fall back on inheritance and add a Component class that every component needs to inherit, leaving you with run-time casts.
  • Some of these components might even only be used at one point under a certain circumstance, though, so that's wasted memory, even if you only use pointers.
  • Furthermore, if you end up implementing a new component later on, you have to revisit every Entity/Object/whatever that should have this component, and add it manually.

Some implementations of relational data/"ECS" rely on runtime composability: they configure systems not by taking entities as input, or - worse - having entities work on themselves, but by having e.g. a Storage-Vector (not necessarily a Vector, ...). This Storage-Vector is coupled with a BitSet holding all entity IDs that have this component. When the system runs, it skims through the BitSets of the components it requests, finds those that belong to the same entity, and ONLY picks them up if every component is present on that entity.
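A minimal sketch of that scheme (all names hypothetical, heavily simplified):

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>

constexpr std::size_t kMaxEntities = 64;

// Each component type has a dense array plus a bitset marking which
// entity IDs own an instance of it.
struct World
{
    std::bitset<kMaxEntities> hasPos, hasMove;
    float posX[kMaxEntities] = {};
    float velX[kMaxEntities] = {};
};

// A "system" iterates the AND of the bitsets it requests, so it only
// ever touches entities that have ALL the required components.
void MoveSystem(World& w, float dt)
{
    const std::bitset<kMaxEntities> both = w.hasPos & w.hasMove;
    for (std::size_t id = 0; id < kMaxEntities; ++id)
        if (both.test(id))
            w.posX[id] += w.velX[id] * dt;
}
```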

October 08, 2018 06:06 PM
EarthBanana
51 minutes ago, Iltis said:
However it is impossible to add or remove components other than those specified -> Runtime Composability is gone.

I think this was specifically addressed in the article as a sort of downside - but then he made the point that C++ is not really meant to be used like an interpreted language, and that's what scripting languages are for.

It would be cool to see (maybe in the next blog entry?) a way to do this where run time dynamic components are possible - maybe using lua or something

It certainly makes the code less confusing and it's more common sense to write it this way. Easier to debug and profile, too. Lots of benefits - I guess you would have to decide whether you want to give up the dynamic game object or not. It seems like it wouldn't be too difficult to implement that portion (for some kind of editor or tool) in a scripting language, as he mentioned.

October 08, 2018 07:03 PM
Oberon_Command
1 hour ago, Iltis said:

The composition of the entity is fixed at compile time.

You're presupposing that it shouldn't be.

I would say that having runtime composability as a default is overengineering as long as you're writing a game (not an engine). If you need runtime composability for something, you can introduce it in small, controlled amounts where it is appropriate to solve the problems at hand, and you can do it in better ways than "entities with pointers to components". An approach I've found works fairly well is to divide entities up into types based on which components I know they'll have at compile time, then use separate "services" (in ECS terminology, "component arrays" coupled with "systems") to decorate instances of those specific entity types with extra state they need at runtime. If taken to extremes, this can result in something that looks like ECS, but was designed with a different approach than "throw all the components for all the entities ever in their own arrays."

October 08, 2018 07:27 PM
Iltis
41 minutes ago, Oberon_Command said:

I would say that having runtime composability as a default is overengineering as long as you're writing a game (not an engine).

I was under the assumption that the code was meant to be reusable.

If you only want to write one game with it, then I agree, it is a bit much.

 

1 hour ago, EarthBanana said:

It would be cool to see (maybe in the next blog entry?) a way to do this where run time dynamic components are possible

Just to clarify: I meant adding and removing existing components at runtime, not creating completely new ones (although that might be interesting in some cases)

October 08, 2018 08:05 PM
Oberon_Command
2 hours ago, Iltis said:

I was under the assumption that the code was meant to be reusable.

If you only want to write one game with it, then I agree, it is a bit much.

Depends on your definition of "reusable." As a gameplay programmer, I find the important kind of reusability is being reusable in multiple situations in the same program, not reusable across projects. Concerns like rendering, audio, and animation (i.e. "engine code") are a different story. The reusable parts in the examples we're talking about would be the "components," in any case, not the entities. This would be true regardless of whether the composition was compile-time or run-time, or whether we were reusing them across parts of a game or across different games.

I would not expect games that aren't sequels, reskins, or mods to share a whole lot of gameplay code. Certainly not enough that reusing entity archetypes across projects would be of much interest to me, never mind a driving requirement of my gameplay architecture. Different gameplay begets different code.

October 08, 2018 09:54 PM
Hodgman
7 hours ago, ozirus said:

Thank you for this very interesting article which points at a lot of valid concerns about poor OOP/OOD usage.

I understand it is still a work in progress, but I am a bit concerned about cache utilization in the currently proposed solution versus the ECS / relational model one.
I might have missed some details, but from a quick glance at the code, it looks like game objects such as AvoidThis are laid out sequentially in memory.

I've already got more content in the Github waiting for blog text to accompany it. One of the first tasks is fixing the memory layouts, yes :)

Sourcing the components from a pool allocator results in pretty much the same memory layout as the ECS version. I'll also switch from using pointers to link components to integer handles at some point, as they're smaller and make the flow of control easier to reason about (important when we get to threading). 
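A rough sketch of the handle idea (names made up for illustration, not the repo's actual code):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct PositionComponent { float x, y; };

// An index into the pool: 4 bytes vs 8 for a pointer on x64, and it
// stays valid even if the pool's backing vector reallocates.
using PosHandle = std::uint32_t;

struct PositionPool
{
    std::vector<PositionComponent> items;

    PosHandle create(float x, float y)
    {
        items.push_back({x, y});
        return static_cast<PosHandle>(items.size() - 1);
    }
    PositionComponent& get(PosHandle h) { return items[h]; }
};

// A component links to another component via a handle, not a pointer.
struct MoveComponent
{
    PosHandle pos;
    float vx, vy;
};

void UpdateMove(PositionPool& pool, MoveComponent& m, float dt)
{
    PositionComponent& p = pool.get(m.pos); // resolve the handle
    p.x += m.vx * dt;
    p.y += m.vy * dt;
}
```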

7 hours ago, ozirus said:

Another concern I have is about the disparition of the "Systems" (compared to ECS). By bringing the behavior closer to the data (i.e. by moving the update functions back into the components), you get further away from the "Where there is one, there are many." principle, making it more difficult to optimize with knowledge at a global level (e.g. use spatial partitioning for collision avoidance).

Class static functions can be used to process many objects while accessing private implementation details and preserving the public interface. This does have the downside that you can only peek into one class's internals at a time, though... It's also good C++ advice to liberally use free functions (in the style of systems!!) when possible. If it's possible to implement a procedure as a free function instead of a member function (i.e. the algorithm only depends on the public interface), then you should use a free function. Java and C# got this terribly wrong when they excluded free functions from their design and forced everything to be a member.
We'll get into this more as I try to catch up to Aras' "Update" perf numbers. 
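As a rough illustration of the two styles (hypothetical class, not from the repo):

```cpp
#include <cassert>
#include <vector>

class MoveComponent
{
    float m_x = 0.f, m_vx = 1.f; // private: the invariants live here
public:
    float x() const { return m_x; }

    // Static member function: a batch "system" over many objects
    // ("where there is one, there are many") that is still allowed
    // to touch private state.
    static void UpdateAll(std::vector<MoveComponent>& all, float dt)
    {
        for (auto& m : all)
            m.m_x += m.m_vx * dt;
    }
};

// Free function: depends only on the public interface, so per the
// guideline above it should NOT be a member.
float MaxX(const std::vector<MoveComponent>& all)
{
    float best = 0.f;
    for (const auto& m : all)
        if (m.x() > best) best = m.x();
    return best;
}
```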

7 hours ago, ozirus said:

the OOD solution doesn't store a name per entity while the ECS one does. Doesn't this explain a good part of the difference?

That's an honest mistake on my part. Because the game doesn't use names at all, I didn't notice I'd culled that feature when deleting the EC framework... I'll recompile Aras' ECS code without support for names to make it an apples-to-apples comparison. Thanks!

5 hours ago, Iltis said:

The composition of the entity is fixed at compile time.  Runtime Composability is gone.

Yeah, I mentioned this in the article but maybe didn't make a big enough deal about it. This game (and most games) doesn't need that kind of VM. I'll address this further in another blog.

Note that even without this component VM, you can still get runtime composition if your language supports it. e.g. You could write your components in C and your entities in Lua! 

5 hours ago, Iltis said:

Furthermore, if you end up implementing a new component later on, you have to revisit every Entity/Object/whatever that should have this component, and add it manually.

That's the same no matter the language / VM that you're using. With a composition VM, you have to edit the JSON files or whatever you serialise your game object templates as... If you add a new feature, you have to plug it in.

4 hours ago, Oberon_Command said:

I would say that having runtime composability as a default is overengineering

^THIS^

If you're writing a magic buff system where you need to add N poison over time components, resistance to ice components, reflect damage components, bonus to fire damage components, skill boost aura components, etc, etc... Then fair enough - build a framework specifically for that situation. Don't use such a complex framework *by default* though -- that's just as bad as using inheritance by default. Write the specific code that you need. 

3 hours ago, Iltis said:

I was under the assumption that the code was meant to be reusable.

The fact that it's designed as many small components (i.e. it follows the SRP) means that the bulk of it is reusable. Whether you define game objects in C, Lua or JSON has no impact on whether it's reusable or not (assuming you have gameplay staff who are competent in C/Lua/JSON).

The SOLID-C rules are designed to encourage you to create code that's easily reusable.

October 08, 2018 11:30 PM
Yixin Xie

From my experience, what data-oriented design addresses is the optimization of high-frequency code paths. In game programming, that's mainly tasks that execute every frame or at a very high frequency, like unit movement, AI, and animation. It can be very useful to lay out the data in favor of those operations because they run 90% of the time; increasing their cache coherency can improve the overall FPS greatly. Lower-frequency operations, like firing a gun or pressing the jump button, which account for 90% of the codebase, are easily written in the OOP manner without a huge performance impact. Can their data be grouped in a cache-coherent manner? I don't know. Some ECS/DOD gurus might have the answer.

In short, DOD is at least about grouping high frequency data together.

October 09, 2018 03:47 AM
Oberon_Command
12 minutes ago, Fulcrum.013 said:

Generally it's impossible to increase cache coherency because the size of most instances exceeds the size of a cache line. For some components it's possible to increase code locality using pools of same-kind objects. That works well for passive objects like bullets and so on that cannot change trajectory by their own decision. But again, it's a much better architecture when it doesn't break the common architecture with a virtual Update: the pool just has a virtual Update method that is called once per pool and runs a loop to process each of its tiny instances using static dispatch.

In my mind, if you're going to have a bunch of pools of distinctly-behaving objects, anyway, there's not a whole lot of point to making that Update method virtual, nor much point in calling it "Update." Just name it according to what it does - "MoveBullets" - and put a call to pool.MoveBullets at the appropriate place in the main loop. That way you can see in the code what each pool is doing when. 

Here's a few lines illustrating the principle from a side project I'm working on:


combat.resolve_attacks(collision, status);
hazards.apply_status_effects(status);
characters.update_sprite_data(sprites);
sprites.update_animations(dt);
particles.update_all_particles(dt);

projectiles.update_all_projectiles(dt);
projectiles.resolve_collision(status, collision);

Look how simple this is! I can control the order in which each of the steps happens. No PreUpdate or PostUpdate or deferred update queues. All I need to do to work out when something happens is look at my main loop code to see where it happens. If I had a "virtual void Update" thing going on, this code snippet would be a bit shorter (and this is not the complete function!), but tracing execution flow and debugging would be a lot harder. The compiler can probably inline some of these shorter updates, too. Sometimes simple declarative-looking logic is nicer than a six-line loop.

October 09, 2018 04:06 AM
Hodgman
7 hours ago, Fulcrum.013 said:

If profile data shows that virtual calls are actually a problem, it will also show that the hardware is not ready for games or other realtime simulations at all.

The (straw-man) data in the article shows a 10x performance improvement from removing virtual calls (i.e. 90% of the execution time was being wasted). This is running on a typical gaming PC from a few years ago (i7 CPU)... Definitely ready for games.

Your other comments seem to imply that if you don't use virtual/dynamic dispatch, then your alternative is static dispatch with O(log N) complexity (e.g. a switch)... That's just swapping the mechanism without fixing the underlying problem. For example, in the code in the article we've removed virtual without adding a switch -- we've gone from O(1) to O(0) by approaching the problem from a different direction.
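A tiny sketch of what that "O(0) dispatch" looks like in practice (hypothetical types): instead of asking each object what it is (virtual call) or switching on a type tag, each concrete type lives in its own container, so no per-object dispatch happens at all.

```cpp
#include <cassert>
#include <vector>

struct Bullet  { float x, vx; };
struct Missile { float x, vx, fuel; };

struct Game
{
    std::vector<Bullet>  bullets;
    std::vector<Missile> missiles;
};

void Update(Game& g, float dt)
{
    // Homogeneous loops: every call is static and inlinable; the
    // "what type is this?" question was answered at compile time.
    for (auto& b : g.bullets)
        b.x += b.vx * dt;
    for (auto& m : g.missiles)
    {
        m.x += m.vx * dt;
        m.fuel -= dt;
    }
}
```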

6 hours ago, Yixin Xie said:

From my experience, what data-oriented design addresses is the optimization of high-frequency code paths. ... Lower-frequency operations, like firing a gun or pressing the jump button, which account for 90% of the codebase, are easily written in the OOP manner without a huge performance impact.

OOP != slow. You can (and should!) use OOD and DOD at the same time. They're orthogonal concerns -- one is about writing reusable and manageable software (fits well with human authors), and one is about writing software that does the minimal amount of work (fits well with machine hardware).

6 hours ago, Fulcrum.013 said:

(1) Generally it's impossible to increase cache coherency because the size of most instances exceeds the size of a cache line. (2) For some components it's possible to increase code locality using pools of same-kind objects.

(1) No, because (2) Yes.

Generally, if you've split up your objects properly, you can (and should) create memory layouts based on the actual patterns of your flow of execution / flow of data.

Also, one cache line isn't the limit for locality. The prefetcher pulls in a lot more data than one line at a time, so it's important to have long sequential runs of hot data... 

October 09, 2018 04:08 AM
Hodgman
10 minutes ago, Oberon_Command said:

If I had given a "virtual void Draw()" method to each of these, the main loop would have less code, but the flow of the rendering code would be a lot harder to follow and debug.

Yep. "virtual void Update" (or Draw) on a pool is 100x times better than doing it per object, but it's still obfuscating the flow of control, and still obfuscating the flow of data. If your main loop is just a loop that calls Update on a list of things, it's impossible to reason about which things have data dependencies on which other things.

By the time I get around to adding threading to this example code, I want to have it in a state where data dependencies are trivial to reason about.

October 09, 2018 04:18 AM
Oberon_Command
6 minutes ago, Hodgman said:

 By the time I get around to adding threading to this example code, I want to have it in a state where data dependencies are trivial to reason about.

That's the other thing - notice that in my sample I'm passing the services/pools that each method needs directly to each method. The dependencies are injected to the method that actually needs them; the individual services and pools don't store them! This makes dependency reasoning really straightforward at the expense of having bigger function signatures (which, oddly enough, has been encouraging me to write updaters that have fewer dependencies).

Sometimes I wish the mainstream engines gave me control over their main loops so I could do stuff like this.

October 09, 2018 04:22 AM
Fulcrum.013
3 minutes ago, Oberon_Command said:

Just name it according to what it does - "MoveBullets" - and put a call to pool.MoveBullets at the appropriate place in the main loop. That way you can see in the code what each pool is doing when. 

Really, I have for example up to 50 spaceships that can each fire up to 3000 bullets per minute. Each physical/logical simulation object has its own set of Update functions for actions like recalculating autopilots, recalculating engines, colliding, and so on. Each object can simply unsubscribe from any action that is at rest for now, by its own decision. The bullets pool is the same kind of object that contains, for example, bullets and unguided missiles that have run out of fuel. Another pool contains unguided missiles that still have working engines, from which missiles are shifted to the bullets pool after running out of fuel. An empty pool is unsubscribed from all actions, but when it receives content it is subscribed to the required actions. Obviously subscription/unsubscription happens much more rarely than processing, and the active missiles and bullets pools have to be subscribed to different sets of actions. As a result, all of it is processed as abstract objects without any performance problems, while keeping a put-in-model-and-forget object management scheme. Really, virtual dispatch affects performance only in cases where the called function is tiny; for heavy functions it's just overhead that isn't worth accounting for.

October 09, 2018 04:23 AM
Oberon_Command
20 minutes ago, Fulcrum.013 said:

Really, I have for example up to 50 spaceships that can each fire up to 3000 bullets per minute. Each physical/logical simulation object has its own set of Update functions for actions like recalculating autopilots, recalculating engines, colliding, and so on. Each object can simply unsubscribe from any action that is at rest for now, by its own decision. The bullets pool is the same kind of object that contains, for example, bullets and unguided missiles that have run out of fuel. Another pool contains unguided missiles that still have working engines, from which missiles are shifted to the bullets pool after running out of fuel. An empty pool is unsubscribed from all actions, but when it receives content it is subscribed to the required actions. Obviously subscription/unsubscription happens much more rarely than processing, and the active missiles and bullets pools have to be subscribed to different sets of actions. As a result, all of it is processed as abstract objects without any performance problems, while keeping a put-in-model-and-forget object management scheme. Really, virtual dispatch affects performance only in cases where the called function is tiny; for heavy functions it's just overhead that isn't worth accounting for.

It sounds to me like we're largely in agreement here. Sort your data according to its behaviour! :)

You can even use separate pools for the same "object" (as the user would see it) that's in a different state. Eg. from the same project as the snippet I posted earlier, instead of having a variable on my weapon class that indicates whether it is lying on the ground, being held by a character, or has just been dropped, and instead of representing that with a set of "weapon state" classes with virtual update functions, my "weapon pool" stores three vectors of weapons:


std::vector<WeaponDatum> FreeWeapons;
std::vector<WeaponDatum> BoundWeapons;
std::vector<WeaponDatum> DroppedWeapons;

Then it applies different logic to each one: free weapons just idle and wait for something to pick them up, bound weapons mimic their owner's animations, and dropped weapons start playing the idle animation, then enter the free state. Switching states means moving a weapon from one "sub-pool" to another. I haven't yet, but there are some further optimization opportunities here - the "free" (i.e. not being held or dropped) weapons don't need to store a reference to the character that owns them, for instance.
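A minimal sketch of such a state switch (names hypothetical, simplified from the description above):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct WeaponDatum { int id; };

struct WeaponPool
{
    std::vector<WeaponDatum> FreeWeapons;
    std::vector<WeaponDatum> BoundWeapons;
    std::vector<WeaponDatum> DroppedWeapons;

    // Switching state = moving the datum between vectors.
    // Swap-and-pop keeps the source vector dense (order is not
    // preserved, which is fine here).
    void Bind(std::size_t i)
    {
        BoundWeapons.push_back(FreeWeapons[i]);
        FreeWeapons[i] = FreeWeapons.back();
        FreeWeapons.pop_back();
    }
};
```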

October 09, 2018 04:42 AM
snake5

After reading most of the article, I felt I had two points to make:

1. I agree that object-oriented design doesn't have to be as bad as it usually turns out, and DOD overlaps heavily with relational database theory. However, I think the most important distinction here is that DOD asks that data (not code) be considered first, whereas OOP assumes data is joined with and secondary to its code and asks you to compose them in some subjectively pleasing way.

Ultimately, code is the tool that transforms data, and not the other way around. That said, it is also a tool for communicating processes to various audiences, so I'm waiting for the day that we'll stop thinking in extreme principles and find a way to reconcile both (and possibly other) purposes so that they work together to the extent possible.

2. Can we finally drop SOLID(C), please? It should be obvious by now that some of the rules are vague and infeasible at best and self-serving at worst.

SRP - what is "single"? Is "game" single? Is manager single? Entity, vector, float, byte? Tree, node, part of node? Is multiply+add not single, and what about SIMD? Where do we vote about all of these? Principles that mean nothing specific are only used as a tool to justify screaming about how poor someone else's code is, regardless of whether that is actually the case.

O/CP - in my experience, when the data changes, the interfaces change as well (for optimal access). And similarly, I find implementations to be extremely difficult to substitute without side effects, since they often have very different I/O constraints, performance characteristics and side effects. So I'm not sure in what fairy-tale world interface lockdown, or trying to accommodate unforeseen implementations, is productive.

LSP - in general, there's nothing to object to and people generally try to do this. That said, interfaces are frequently used to bridge inherently incompatible constructs (such as different rendering backends) where support for certain features is limited and it would be pointless to go to extremes to provide perfect feature parity.

ISP - no objections here. Note that this is the only principle for which you have yourself provided a practical argument to support it.

DIP - interfaces exist to bridge multiple implementations; otherwise implementations could be used directly, which is also beneficial for performance due to inlining and lack of virtual calls, and greatly simplifies allocation of data. You say that a POD struct for communication is enough, but how is a POD struct different from a regular function parameter list? And I can almost guarantee that a plain function won't be considered DIP-compliant by many people. I would argue that it's merely a somewhat obvious tool used to accommodate refactoring, definitely not a principle.

CRP - while seemingly nice in theory, there is the practical consideration of C++ multiple inheritance casting headaches as well as the issue of composing components that need to know about their parent objects or neighbors. Inheritance, while not without its numerous downsides, solves the parent object issue without extra pointers.

Since you only mention SOLID(C) in your code criticism, and not design, I would argue that the worth of most of these principles is limited to justifying being annoyed about someone's code - unlike a concrete code example, which also shows an alternative implementation, open to concrete, fact-based comparisons, reviews and future improvements.

October 09, 2018 10:34 AM
Hodgman
1 hour ago, snake5 said:

DOD asks that data (not code) be considered first, whereas OOP assumes data is joined with and secondary to its code and asks you to compose them in some subjectively pleasing way

OOP's joining of code + data isn't some subjective voodoo. The separation of implementation details from simpler interface declarations is meant to reduce coupling through thin interfaces (ISP), but also to enforce class invariants. e.g. in C++, vector::resize is a member function because it alters the internal pointers. There are a lot of invariants involving those pointers, so they're hidden within the implementation so you can easily reason that any bugs with them are caused by that implementation and not user code. On the other hand, std::find is not a member function, because it's an algorithm that does not depend on any internal details of a vector.

If the data needs to be controlled to enforce class invariants, then it's internal. If code needs to interact with that fragile data, then it needs to be a member. If code doesn't need to interact with that fragile data, then it should be a free function (not a member).

The Java school of OOP shits all over this idea though and decides that everything should be a member...
In C++ we're taught to try and make as much of our logic as free-functions (not members) as possible.
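A tiny, hypothetical illustration of that rule (invented class, not from the article):

```cpp
#include <cassert>
#include <cstddef>

class FixedBuffer
{
public:
    static constexpr std::size_t kCap = 8;

    std::size_t size() const { return m_size; }

    // Member function: it must preserve the m_size <= kCap invariant,
    // so it needs access to the fragile internal state.
    bool push(std::size_t n)
    {
        if (m_size + n > kCap) return false; // refuse to break the invariant
        m_size += n;
        return true;
    }
private:
    std::size_t m_size = 0; // invariant: m_size <= kCap
};

// Free function: an "algorithm" over the public interface only,
// so per the rule above it should NOT be a member.
bool IsFull(const FixedBuffer& b) { return b.size() == FixedBuffer::kCap; }
```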

1 hour ago, snake5 said:

Can we finally drop SOLID(C) please? Should be obvious by now that some of the rules are vague and infeasible at best and self-serving at worst.

Uh, no.

1 hour ago, snake5 said:

SRP - what is "single"? Is "game" single? Is manager single? Entity, vector, float, byte? Tree, node, part of node? Is multiply+add not single, and what about SIMD? Where do we vote about all of these? Principles that mean nothing specific are only used as a tool to justify screaming about how poor someone else's code is, regardless of whether that is actually the case.

I really don't get what you're trying to say here... If a class has multiple different invariants that it's enforcing, can you draw a line through the class that splits them cleanly, so that some members + invariants are on one side of the line, and other members + invariants are on the other side? If yes, then you can split that class into two. If not, then you can't. If you don't know what invariants your class exists to enforce, then, sure, you'll get confused.

1 hour ago, snake5 said:

O/CP - in my experience, when the data changes, the interfaces change as well (for optimal access).

The entire std library would be one example. The interface is agreed upon, but there are several different implementations of it that work quite differently.
Or in a game project one time, we ported an Xb360/PS3 game to XbOne/PS4 by isolating the interface between the game and the engine and building a completely new engine underneath that interface.
Or in general, any bit of software that has a public specification (GL, Vulkan, C++, ECMAScript, etc...) follows this guideline.

It's a pretty vanilla guideline really -- "specs are good if they don't change, try your best not to change the spec". Very bland and acceptable...

1 hour ago, snake5 said:

DIP - interfaces exist to bridge multiple implementations, otherwise implementations could be used directly, which is also beneficial for performance due to inlining and lack of virtual calls, and greatly simplifies allocation of data. You say that a POD struct for communication is enough but how is a POD struct different from a regular function parameter list? 

The problem that DIP is solving is layered software.

Typically when designing a large scale project, you want to build it up in layers. Everything in Layer 0 only uses other code in Layer 0 ("the core"). Then you build more specific modules that exist on Layer 1 -- code in those modules only accesses code within their own module plus code in Layer 0 (e.g. graphics depends on core, input depends on core). Then you build even more specific modules that exist on Layer 2 and use all the code below (e.g. the game depends on graphics, input and core).

Instead of building things in vertical Layers, DIP offers a way to build horizontally and shift code down one layer. e.g. if you make an "input data" module that contains just a simple interface (e.g. data descriptions), then the input module and the game module can both depend on the input-data module, and now game no longer has to depend on the input module -- it can be moved down a layer to be horizontal to input.

This isn't just subjective, vague, hand-waving so that people can feel smug in code reviews and tell you that you've been a bad boy. I mean, that's what it sounds like you're complaining about? Some design nazi has used SOLID as a rolled up newspaper to whack you, unproductively, in a code review? If so, yeah that sucks, but don't let that put you off actually learning the useful parts of the theory....

This kind of planning is important on very large scale projects to keep the number of dependencies between different parts of the code-base low, so that you don't drown in technical debt later. Reducing the surface area of modules like this, and the connections between modules, is a real, concrete outcome.

1 hour ago, snake5 said:

CRP - while seemingly nice in theory, there is the practical consideration of C++ multiple inheritance casting headaches as well as the issue of composing components that need to know about their parent objects or neighbors. Inheritance, while not without its numerous downsides, solves the parent object issue without extra pointers.

Honestly, having two-way dependencies between components and their parents/neighbors is a bit of code smell, and often indicates that there's a bit of a mess of dependencies and an unclear structure. Usually if this kind of data is needed, it can be passed as arguments to the functions that actually do work, instead of being stored permanently in members, which solves the issue without extra pointers too.

1 hour ago, snake5 said:

I would argue that the worth of most of these is limited to justifying being annoyed about someone's code, unlike a concrete code example, which also shows an alternative implementation, open to concrete and fact-based comparisons, reviews and future improvements.

Didn't I post a github link and numeric comparisons of several metrics at the bottom? The original "faux OOP" code is subjectively and objectively terrible.

October 09, 2018 11:23 AM
Gnollrunner

I'm a big proponent of bad/faux OOP in C++...... I use multiple inheritance everywhere, class hierarchies 15 levels deep, templates with 20 parameters where the base class is often a template parameter, and I pretty much never use the standard library.  On top of that I tend to make everything public unless I know for sure it should be private upfront, and then I go back and privatize stuff and clean things up after it works (assuming I don't forget).

I prioritize in this order:

1) Elegant  algorithms & data structures

2) Robust memory management

3) Speed

4) The structure of the code and how it looks to others.

Mind you I'm not trying to sell anyone on my approach :D

October 09, 2018 11:39 AM
snake5
14 minutes ago, Hodgman said:

OOP's joining of code + data isn't some subjective voodoo.

I totally agree, though what I consider to be subjective voodoo are the relationships and structure of objects themselves. This seems to appear the most in classes that are effectively layers of managers, proxies and factories.

20 minutes ago, Hodgman said:

If a class has multiple different invariants that it's enforcing, can you draw a line through the class that splits them cleanly, so that some members + invariants are on one side of the line, and other members + invariants are on the other side of the line? If yes, then you can split that class into two.

The question I ask is, should you split that class? I don't question the ability, as you have defined it, but the productivity of always doing so. Say you have a "human" component with "health" member. Does it make sense to move "health" to a separate component? Even if there is no other component that would need it? Likewise, multiply-add are clearly two separate operations, but are joined for performance reasons.

28 minutes ago, Hodgman said:

It's a pretty vanilla guideline really -- "specs are good if they don't change, try your best not to change the spec".

And a lot of good that did for OpenGL... :) as a particular counterexample, I see AZDO which is almost a self-sufficient subset of the broader, bloated API, which makes it pretty hard to argue that it was merely an extension. STL is for the most part a reasonable example in favor of the principle, though I find that it likely wasn't aimed at STL developers. And it's not like the spec hasn't changed at all over the years. IIRC, string::c_str() was at some point allowed to add the terminating zero at runtime and range-for was changed to allow for a separate end type.

43 minutes ago, Hodgman said:

The problem that DIP is solving is layered software.

Sure, I'm not against separation of software layers, but again, how does that make it a principle?

I'm not sure if there's a term for it, but there is a kind of dependency that is not explicitly written, where when a part is substituted for another with a fully compliant interface, something subtly breaks down regardless. It seems to be particularly common with computational geometry and image processing. I've also observed it with LLVM when manually specifying which optimization passes to run. My point here is that no interface would help in this case, and making one would only cause performance issues (whether compile-time from templates or runtime from virtual interfaces), so how can something that doesn't always work be a principle?

52 minutes ago, Hodgman said:

Honestly, components needing to know about their parents or neighbors is a bit of code smell, and indicates that there's a bit of a mess of dependencies and an unclear structure.

I agree, but like I said, the issue here stems from practical considerations. When writing a game from scratch, all in the same code base and language, without data-driving anything, all such issues can probably be avoided quite easily, otherwise I'm not too sure.

 

All in all, my main point regarding SOLID(C) is that if some theory does not work always, without exceptions and provably so, it does not deserve to be a principle, and should not be applied unquestioningly. And this of course applies to the omnipresent horrible kind of OOP as well.

1 hour ago, Hodgman said:

Didn't I post a github link and numeric comparisons of several metrics at the bottom?

Please don't get me wrong, I did notice it and appreciate that a lot. I'm just saying that in my opinion, it has a great deal more value than all the criticism of the original code, and so there should be even more code changes, metrics and reasoning behind the code changes.

October 09, 2018 12:45 PM
Oberon_Command
2 hours ago, snake5 said:

I'm not sure if there's a term for it, but there is a kind of dependency that is not explicitly written, but when a part is substituted for another with a fully compliant interface, something subtly breaks down regardless.

I think the term you're looking for is "leaky abstraction." And I would point out that even if abstractions are ultimately leaky in some way, that doesn't mean we shouldn't have them. They're useful.

2 hours ago, snake5 said:

All in all, my main point regarding SOLID(C) is that if some theory does not work always, without exceptions and provably so, it does not deserve to be a principle, and should not be applied unquestioningly.

I am of the opinion that the search for advice to apply unquestioningly is fundamentally misguided; we shouldn't do anything in software unquestioningly, so I struggle to follow your point here.

There's a reason we call them principles and not laws or dogma. Design principles are not cudgels to bash over the heads of the unfaithful, they're guidelines that suggest what a good course of action might be when trying to solve a problem. Frankly, I struggle to imagine any kind of design advice that works always, without exceptions, and provably so, because there's always an outlier case where the best course of action is something really weird. Even a singleton, as icky as singletons are, is sometimes the right choice to solve a problem where things are un-fixably un-ideal. That doesn't mean we shouldn't have principles for the common case.

October 09, 2018 02:36 PM
snake5
11 minutes ago, Oberon_Command said:

I think the term you're looking for is "leaky abstraction."

Close, but not quite. That's from the perspective of the user of an abstract interface, whereas what I'm thinking about is more like this (hypothetical) example: there are two polygon-polygon intersection tests; one is faster but "rounds the corners" a bit, and the other is exact. The dependency on the former would come from a game situation where a projectile was moving near a corner, where the algorithm would be expected to miss the intersection, so it could not be replaced without destabilizing the balance of the game. It is something of a butterfly effect, and somewhat related to bugs turning into features after shipping.

21 minutes ago, Oberon_Command said:

I am of the opinion that the search for advice that we apply unquestioningly is fundamentally misguided.

I guess I've managed to create the wrong impression there. :) I am not looking for such advice, it tends to find me itself all the time, which does not seem like how things should be. That said, I don't want to make this about me, it just seemed like once again here comes someone with the one true way to do things.

23 minutes ago, Oberon_Command said:

There's a reason we call them principles and not laws or dogma.

Dogma is merely an authoritative principle. Which means it just takes someone (a famous computer scientist / programmer) or something (Wikipedia) to "authorize" it, or make it trusted, and by that definition all CS "principles" I've encountered are really dogma (although the difference isn't that huge to begin with). And principle is defined as fundamental truth or assumption, which again I would argue these examples are not.

October 09, 2018 03:16 PM
Oberon_Command
1 hour ago, snake5 said:

That said, I don't want to make this about me, it just seemed like once again here comes someone with the one true way to do things.

Ah, I can empathize there. :) I don't think that's Hodgman's intent, at all - it seems to me that he's more claiming that the "OOP" Aras was referring to is a strawman, not saying that SOLID-C is the one true way to code.

1 hour ago, snake5 said:

Dogma is merely an authoritative principle. Which means it just takes someone (a famous computer scientist / programmer) or something (Wikipedia) to "authorize" it, or make it trusted, and by that definition all CS "principles" I've encountered are really dogma (although the difference isn't that huge to begin with). And principle is defined as fundamental truth or assumption, which again I would argue these examples are not.

Hmmm, I've been using both of those words differently. To me a principle is a kind of "normative axiom", not something that has a truth value. The SRP isn't a thing that's "true" so much as it's an idea that we choose to follow because we believe it will yield better software; more generally, principles are norms that we follow in order to promote our values. I will happily ignore a principle if I find I'm in a case where the principle doesn't apply or doesn't advance my values. If I find that a principle ceases to advance my values in most cases - or never did so in the first place - then I tend to discard it. :)

To me what makes a principle "dogma" is how it is applied, not an inherent characteristic of the principle. "Dogma" refers to principles that are applied unquestioningly and universally, oftentimes without really understanding why one applies them in a particular circumstance. I think that covers the case where an "authority" imposes a principle on someone who doesn't understand it, too.

SOLID-C can certainly be applied dogmatically - because any principle can be.

October 09, 2018 04:27 PM
a light breeze

I tend to think of the runtime-configurable design in the "bad OOP" example as a compromise between the flexibility of a true scripting language and the performance of the hardcoded "good OOD" approach.  Like many compromises, it feels unsatisfactory because it cannot fully deliver on either of its promises.  That doesn't necessarily make it a bad design - it is actually quite successful as a pattern for those cases where a true scripting language is too expensive and the hardcoded approach is too rigid.

(Which is not to say that a better compromise is not possible.  I just find the argument that "rigid approach A is better than flexible approach B because it is faster" unconvincing.  Flexibility is a value in and of itself that is often more important than performance.)

October 09, 2018 05:14 PM
chairbender

Glad to see someone writing a thorough and detailed defense of OO with a concrete example. I definitely agree that much of OO criticism stems from what we would call "bad OO" - either a mistaken understanding of how to effectively use OO or a bad experience with others' OO code.

At the same time, I can't help but wonder if the reality of the situation is that, for the average software dev, OO increases the likelihood of writing bad code. Sure, there are plenty of good OO devs who write good OO code, but maybe they are more the exception than the rule. Maybe there is some alternate methodology which tends to work out better for the people who don't currently write good OO code. Maybe the way OO was taught poisoned people's minds such that they cannot easily get out of their conception of it. Should we focus on helping those people to write better OO, or should we give up trying to teach them OO and let them use some other methodology? In any case, I have to think that, due to the long-standing history of practice of OO, we are better served trying to teach people to do it better and dispel common misconceptions rather than throwing it out entirely (of course, only if the methodology makes sense for the language / framework).

October 09, 2018 06:08 PM
Hodgman
7 hours ago, Oberon_Command said:
9 hours ago, snake5 said:

 it just seemed like once again here comes someone with the one true way to do things.

Ah, I can empathize there. :) I don't think that's Hodgman's intent, at all - it seems to me that he's more claiming that the "OOP" Aras was referring to is a strawman, not saying that SOLID-C is the one true way to code.

Yeah I don't mean to come across that way, though I will admit to being a bit of a snarky dick.
On my game we use a blend of DOD and OOD and the relational model, procedural and pure functional, message passing and shared state, immutable objects and mutable objects, stateless APIs and state machines, etc... It's good to have a lot of tools, and it's good to have a lot of theory and guidelines on how to use each of those tools, too.

The point was taking some code that's already been shown to be bad and is being presented as a patsy for OOP being harmful, and showing that applying a few guidelines from OO theory can actually remove the badness. So people shouldn't throw the baby out with the bathwater, and instead should practice their OO skills.

If you take SOLID(C) as a set of guidelines to use along with your own critical thought, then they shouldn't be controversial. 

11 hours ago, snake5 said:

The question I ask is, should you split that class? I don't question the ability, as you have defined it, but the productivity of always doing so. Say you have a "human" component with "health" member. Does it make sense to move "health" to a separate component? Even if there is no other component that would need it? Likewise, multiply-add are clearly two separate operations, but are joined for performance reasons.

MAD doesn't count because it's not a class and shouldn't be a class :)

Health is actually a good example. An OOP beginner might make a private health number field, and then have SetHealth and GetHealth accessors, in case they want to change the logic later... That's bad - it's encapsulation without abstraction. To apply a more general principle of KISS, you shouldn't have that class at all and just represent health as a raw number... 

However, if there's actually a useful abstraction that can be applied, then a class can be worthwhile (more so if it's used in many places - DRY, but less so if it's only required in one place - KISS). For example, instead of Set/Get, maybe you have ApplyDamage(amount, type) and ApplyHealing(amount, type), which internally do some calculations including armour, spells, buffs, etc... Encapsulating the health field in this case makes it easier to reason about bugs with that field - you know which algorithms are responsible for mutations. 

I think we agree, though, that you should balance multiple contradictory principles with your own critical thought. IMHO, the KISS principle should often override the other ones :)

6 hours ago, chairbender said:

At the same time, I can't help but wonder if the reality of the situation is that, for the average software dev, OO increases the likelihood of writing bad code. Sure, there are plenty of good OO devs who write good OO code, but maybe they are more the exception than the rule. Maybe there is some alternate methodology which tends to work out better for the people who don't currently write good OO code.

Well that's what game engines are doing :)

EC and ECS frameworks add a whole bunch of unnecessary restrictions to your designs that stop you from falling into pitfalls, but also hamstring "normal" designs... EC is basically a restricted form of OO, and ECS is a restricted form of the relational model.

October 10, 2018 12:10 AM
SuperVGA
On 10/9/2018 at 1:30 AM, Hodgman said:

It's also good C++ advice to liberally use free-functions (in the style of systems!!) when possible. If it's possible to implement a procedure as a free-function instead of a member function (i.e. the algorithm only depends on the public interface), then you should use a free-function. Java and C# got this terribly wrong when they excluded free functions from their design and forced everything to be a member.

Isn't it still good C++ practice though to keep free functions in a namespace or even within a class, as static functions?
I have gotten into the habit of starting to implement every member function as static, and if I discover that using the interface is sufficient, I reevaluate its placement - otherwise I turn it into an ordinary instance member function.

October 10, 2018 09:40 AM
Oberon_Command
5 hours ago, SuperVGA said:

Isn't it still good C++ practice though to keep free functions in a namespace or even within a class, as static functions?
I have gotten into the habit of starting to implement every member function as static, and if I discover that using the interface is sufficient, I reevaluate its placement - otherwise I turn it into an ordinary instance member function.

Well, in C++ in particular you'd want to avoid static functions in classes, as well, unless those functions needed access to the private fields of the class. There are a couple of reasons for that:

  1. Minimizing compile times. Having a free function doesn't inherently lead to better compile times, but it does give you the option of moving that function out into a different header from the class with a forward declaration of that class, which allows code that uses the functions to avoid including the class header. That means that when you change the class definition, fewer translation units will need to be recompiled.
  2. Promoting encapsulation. The more functionality is in free functions that use the public interface of a class, the less code you need to change when the class's private implementation does, at least in theory. In a lot of cases you can get it so that code depends on the free functions instead of the class itself. This also encourages one to focus a class's public interface on its state invariants, rather than putting significant amounts of behaviour in the class itself.
October 10, 2018 02:54 PM
SuperVGA
15 hours ago, Oberon_Command said:

Well, in C++ in particular you'd want to avoid static functions in classes, as well, unless those functions needed access to the private fields of the class. There are a couple of reasons for that:

  1. Minimizing compile times. Having a free function doesn't inherently lead to better compile times, but it does give you the option of moving that function out into a different header from the class with a forward declaration of that class, which allows code that uses the functions to avoid including the class header. That means that when you change the class definition, fewer translation units will need to be recompiled.
  2. Promoting encapsulation. The more functionality is in free functions that use the public interface of a class, the less code you need to change when the class's private implementation does, at least in theory. In a lot of cases you can get it so that code depends on the free functions instead of the class itself. This also encourages one to focus a class's public interface on its state invariants, rather than putting significant amounts of behaviour in the class itself.

Thanks! Aside from the organizational advantages, I never considered the build-time benefits of this before.

October 11, 2018 06:00 AM
Gnollrunner
On 10/9/2018 at 9:08 PM, chairbender said:

 In any case, I have to think that, due to the long-standing history of practice of OO, we are better served trying to teach people to do it better and dispell common misconceptions rather than throwing it out entirely (of course, only if the methodology makes sense for the language / framework).

IMO the paradigm doesn't matter that much.  The programmer matters. Half the stuff in any paradigm is trying to protect the programmer from themselves.  For 25+ years I worked in the semiconductor industry writing internal tools.  Initially almost everyone in our department had a PhD in physics, but since we were a CAD department they also had to know how to program. I started out as a software tech but worked my way up; I was a rare exception. Some of the PhDs were good programmers, some not so much.  Later, when we started hiring CS grads, the situation really didn't change: some were a lot better than others.

What I mean by good is they could debug their own code and find their mistakes, and their code was pretty reliable.  There wasn't a lot of focus on programming patterns at first, mostly algorithms and the data structures to support them. Some guys on the other hand were just lazy and not meticulous at all, and those people I would have to assist on a daily basis.  This was all procedural programming at first, mostly in FORTRAN 77 and C, and in my case a lot of ASM.

They made basic errors like leaving hundreds of lines of warning messages in their code when it compiled. It's true that most of these were innocuous, but the problem is that when you have all these warnings fly by during a compile, you miss the few that mean something. The first thing I always did when helping someone was have them fix every warning in their code. Sometimes this actually fixed their bug.

The other thing is they were often lazy with array bounds checking and stuff like that. One of the strangest bugs I ever saw was when someone wrote off the end of a stack-allocated array and the data they wrote just happened to also be a valid address in the code. When the function returned, it just followed the new address and actually called other functions and ran for a while from there, before eventually crashing because the state was wrong. The program was huge, the bug only occurred after several minutes of running, and we didn't have all the fancy debugging tools at that time.  It messed with me for a couple of days, until, on a hunch, I just started searching for local arrays and put bounds checks in.

Another common thing was for people to leave unexplained behavior in their code.  They would call me over to help fix something, and then when I pointed out something else seemed off, they would say "yeah, it does that. I'll fix that later, it's not important. This other bug is what I need to fix right now."  I don't think I need to explain why this is bad.

In any case, when we moved on to C++, Scheme, Java, OOP, what have you, it was still these same people that had the same kinds of problems. It didn't matter what they were doing, what language they were using, or what paradigm. Nothing really changed for them.  One of these guys was a huge Lisp proponent, loved Emacs (and would bash me for using vi), and talked about design patterns and paradigms ad nauseam.  He was one of the top two offenders.

I guess this has kind of poisoned me to programming holy wars.  I'm not saying there isn't value in comparing paradigms, but I just think at the end of the day it's not the major factor.

October 11, 2018 07:20 AM
Hodgman
23 hours ago, SuperVGA said:

Isn't it still good C++ practice though to keep free functions in a namespace or even within a class, as static functions?
I have gotten into the habit of starting to implement every member function as static, and if I discover that using the interface is sufficient, I reevaluate its placement - otherwise I turn it into an ordinary instance member function.

Yeah, I don't do it regularly, but I've found that making most methods static to start with helps me keep track of which member variables are read/written at which time, what the data flows are, and to think about the structure... As mentioned in the other reply you got, free functions are preferable to class-statics, as they keep encapsulation intact.

For complex classes, I do like writing the implementation mostly as a small collection of stateless free functions in the style of traditional "Inputs -> Process -> Outputs" nodes, and then write the actual class implementation as a very thin wrapper around those pure functions - plugging in the right members to the input/output arguments. In some situations these pure functions might be reusable by other classes too, in which case I'd declare them in a header - otherwise I'll make them file-static / hidden. 

October 11, 2018 09:03 AM
Oberon_Command
On 10/11/2018 at 12:20 AM, Gnollrunner said:

Another common thing was for people to leave unexplained behavior in they code.  They would call me over to help fix something and then when I pointed out something else seemed off, they would say "yeah it does that. I'll fix that later, it's not important. This other bug is what I need to fix right now.".  I don't think I need to explain why this is bad.

I can think of at least a couple of arguments in favour of that attitude.

Typically we want individual commits to source control to represent one bugfix or feature. The more stuff you put in a commit or changelist, the higher the likelihood that your commit will break something and need to be reverted, and, if that happens, the more work will be potentially unavailable to the client when your change is reverted. There's also the fact that if you find two problems in the code while you're looking at a bug, and you fix both at the same moment, it becomes harder, looking back through the code's version history, to see what the actual problem was that caused the bug. When you're fixing a bug, I find it really helps to have a stable (even if flawed) codebase to test your fixes against. If you change one thing at a time, you get a better sense of what your change did than if you change a whole bunch of things all at once.

Fixing stuff as you see it is all well and good, but if it isn't pathologically difficult to test and submit individual bugfixes, then bugfixes should be as separate from one another as possible. Easy branching in source control systems like git make this a lot nicer, but not everyone is using git. Structuring your code well can certainly help in all cases. :)

October 12, 2018 04:36 PM
AnarchoYeasty

Ugh this is great! Thank you! I've recently been "promoted" to Software Architect at work (web dev) and as part of that I've been diving into proper coding techniques so I can clean up the mess that we've been building for a few years. I stumbled upon this article despite not being an actual game dev (but have always wanted to, just never figured out the proper way to do things).  Between this blog post and reading gameprogrammingpatterns I finally have had the "A-ha!" moment where things start to make sense!

Any idea when the next parts of the series are going to be happening? I keep checking back every day hoping (unrealistically) that you have posted more knowledge to share :D

October 14, 2018 05:38 PM
Hodgman
11 hours ago, AnarchoYeasty said:

Any idea when the next parts of the series are going to be happening? I keep checking back every day hoping (unrealistically) that you have posted more

I'm getting my indie game ready to exhibit at PAX at the end of October, so any free would-be-blog-writing-time is probably going to get eaten up by shader-code-polishing instead until then :|

October 15, 2018 04:45 AM
Gnollrunner
On 10/12/2018 at 7:36 PM, Oberon_Command said:

I can think of at least a couple of arguments in favour of that attitude.

Typically we want individual commits to source control to represent one bugfix or feature. The more stuff you put in a commit or changelist, the higher the likelihood that your commit will break something and need to be reverted, and, if that happens, the more work will be potentially unavailable to the client when your change is reverted. There's also the fact that if you find two problems in the code while you're looking at a bug, and you fix both at the same moment, it becomes harder, looking back through the code's version history, to see what the actual problem was that caused the bug. When you're fixing a bug, I find it really helps to have a stable (even if flawed) codebase to test your fixes against. If you change one thing at a time, you get a better sense of what your change did than if you change a whole bunch of things all at once.

Fixing stuff as you see it is all well and good, but if it isn't pathologically difficult to test and submit individual bugfixes, then bugfixes should be as separate from one another as possible. Easy branching in source control systems like git make this a lot nicer, but not everyone is using git. Structuring your code well can certainly help in all cases. :)

I think there is a difference between a bug, one for which the cause is understood, and "unexplained behavior". I used that combination of words intentionally. IMHO leaving the latter in your code is asking for trouble; I've seen people bitten by this many times. If I don't understand why something is happening, I go find out. I have even had occasions where I seemingly fixed something but didn't understand why my change had fixed the problem. In that case I will go put the bug back in and trace it until I understand why it occurred, and whether my change really did fix it or simply masked some manifestation of the bug.

I'm a firm believer in understanding your code as much as possible.  I don't like to tell someone how to do their job, but on the other hand if they are asking for my help, I refuse to waste my time chasing a possible ghost. If someone can't fix their own code and need me to help them, then I'm in command. If they don't like it, they can find someone else to help them, however in reality I've never had any push-back on that as people tend to appreciate when you are spending your time helping them.

October 15, 2018 05:26 AM
Stragen
6 hours ago, Hodgman said:

I'm getting my indie game ready to exhibit at PAX at the end of October, so any free would-be-blog-writing-time is probably going to get eaten up by shader-code-polishing instead until then :|

I'll be heading to PAX, will have to give the game a run... will see how polished your shaders look in person.

October 15, 2018 11:06 AM
Dim0thy

OOP evolution is analogous to that of Visual Arts - in the early days, it was done in one way and there was a majority agreement on what Art is. Then the modernists came along, scrapped all of that, and announced the new - right way - and in the process denying pretty much the foundation of Art, to the point where it's not so clear what Art is anymore.

I am talking about the original 4 pillars (Abstraction, Encapsulation, Polymorphism, Inheritance) and the additional new 5 'pillars' (SOLID).

OOP has an identity disorder, in my opinion: are the original 4 the pillars, or are they not? If 'yes', then the original definition was incomplete, because 5 more pillars have been discovered since (so everyone doing OOP in those days was fooling themselves); if 'no', then that's tantamount to a denial of the very essence of OOP. At first, inheritance was touted as the defining feature. Later on, we see the proponents distancing themselves from it and preaching composition instead. Nowadays - not sure which one is in the lead.

This apparent denial of earlier self, points to a weakness in the system - inconsistency. If I am promised benefits by using the system, then later am told that there were none but if I use a modified version then I get some, then at that point - the credibility has been lost.

October 19, 2018 12:41 AM
AnarchoYeasty
19 minutes ago, Dim0thy said:

OOP evolution is analogous to that of Visual Arts - in the early days, it was done in one way and there was a majority agreement on what Art is. Then the modernists came along, scrapped all of that, and announced the new - right way - and in the process denying pretty much the foundation of Art, to the point where it's not so clear what Art is anymore.

I am talking about the original 4 pillars (Abstraction, Encapsulation, Polymorphism, Inheritance) and the additional new 5 'pillars' (SOLID).

OOP has an identity disorder, in my opinion: are the original 4 the pillars, or are they not? If 'yes', then the original definition was incomplete, because 5 more pillars have been discovered since (so everyone doing OOP in those days was fooling themselves); if 'no', then that's tantamount to a denial of the very essence of OOP. At first, inheritance was touted as the defining feature. Later on, we see the proponents distancing themselves from it and preaching composition instead. Nowadays - not sure which one is in the lead.

This apparent denial of earlier self, points to a weakness in the system - inconsistency. If I am promised benefits by using the system, then later am told that there were none but if I use a modified version then I get some, then at that point - the credibility has been lost.

Mate, I don't mean to be rude, but this entire comment reeks of ignorance. OOP was the result of people realizing the potential downfalls of procedural programming, and was the result of continued evolution of ideas during the 60's and 70's. The very idea at the time was huge and changed the game; there is a reason most applications aren't being written in C anymore. Is OOP perfect? No, but for its uses it is much better than procedural programming. What you call the '5 new pillars' is actually the result of people using OOP and finding some additional insights into better software development. SOLID literally could not have been conceived without first putting the practice of OOP to work. And no, people were not just in denial before SOLID; SOLID is just the fun name created by Robert Martin in 2000. But it was actually the result of a bunch of very intelligent and influential engineers from the 70's-90's who decided that it was time to put the best practices they had all independently discovered over the last 30 years in writing, to pass them on to the next generation.

Absolutely no one is suggesting that you shouldn't use Inheritance. It is still one of the defining features of object oriented programming, and solves a whole host of issues people hit during procedural programming. The advice to favor Composition over Inheritance is not to say "Do not use Inheritance" but instead "Use Inheritance correctly". See, Barbara Liskov created the Liskov Substitution Principle, which says that any subclass should be able to be substituted for its parent class in an algorithm, without the algorithm needing to be aware that it is not in fact talking to the parent class. This is really important for writing code that works correctly. However, when you have massive inheritance trees, you cannot reliably make that assertion. Thus, people realized that instead of having massive chains of inheritance you should instead compose objects of smaller parts, and those smaller parts may be subclasses of other parent types, or they could be their own thing. Thus, favor composition over inheritance. But inheritance is still important. If you can follow the Liskov Substitution Principle, then it is ok to use inheritance. But if you can't, as in the vast majority of cases, then you should instead use composition. SOLID does not in any way invalidate OOP; it is a set of additional insights. No one ever claimed OOP was perfect and would solve everything for you.
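
The Liskov point above can be made concrete with the classic Rectangle/Square illustration (hypothetical types, not from this thread): a Square modeled as a subclass of Rectangle silently breaks code written against the Rectangle interface.

```cpp
#include <cassert>

// Classic LSP illustration (hypothetical example, a sketch only).
struct Rectangle {
    virtual ~Rectangle() = default;
    virtual void setWidth(int w)  { width = w; }
    virtual void setHeight(int h) { height = h; }
    int area() const { return width * height; }
    int width = 0, height = 0;
};

// Keeping the "all sides equal" invariant forces Square to break
// Rectangle's contract: setting the height also changes the width.
struct Square : Rectangle {
    void setWidth(int w) override  { width = height = w; }
    void setHeight(int h) override { width = height = h; }
};

// An algorithm written against the parent interface. Per LSP, any
// substituted subclass should leave this reasoning valid; Square doesn't.
int areaAfterResize(Rectangle& r) {
    r.setWidth(5);
    r.setHeight(4);
    return r.area(); // a caller reasonably expects 5 * 4 = 20
}
```

Passing a Square yields 16, not 20: the algorithm can no longer reason purely in terms of the parent interface, which is exactly why deep hierarchies make the substitution guarantee hard to maintain.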

Also, there is no debate. If you are writing Object Oriented Code, use both the 4 pillars, and SOLID. If you are writing procedural or functional code, then don't use OOP and SOLID. Your argument is essentially that the existence of screws invalidates the need for nails, which is simply not true.

October 19, 2018 01:17 AM
Hodgman
38 minutes ago, Dim0thy said:

I am talking about the original 4 pillars (Abstraction, Encapsulation, Polymorphism, Inheritance) and the additional new 5 'pillars' (SOLID).

The first 4 are tools. The next 5 are architectural advice on use of the tools.

October 19, 2018 01:20 AM
vivichrist

Shouldn't - "unintentionally encouraging readers from interacting with half a century of existing research." be "unintentionally discouraging readers from interacting with half a century of existing research."?

October 19, 2018 02:01 AM
fleabay
On 10/9/2018 at 10:36 AM, Oberon_Command said:

There's a reason we call them principles and not laws or dogma. 

It's the first thing I thought of when I read that. :)

October 19, 2018 02:13 AM
Hodgman
43 minutes ago, vivichrist said:

Shouldn't - "unintentionally encouraging readers from interacting with half a century of existing research." be "unintentionally discouraging readers from interacting with half a century of existing research."?

Thanks. Fixed :) 

October 19, 2018 02:44 AM
Dim0thy

@AnarchoYeasty

Quote

Mate, I don't mean to  be rude

It's cool, my friend - just don't insult me.

Quote

but this entire comment reeks of ignorance

I must admit that that's true to some extent.

Quote

OOP was the result of people realizing the potential downfalls of procedural programming

Note the 'potential' qualifier - it implies that people felt that there *might* be problems with procedural programming in the future; not that there were problems at the present.

Quote

there is a reason most applications aren't being written in C anymore

I respectfully and strongly disagree - C is still being used to write programs, and the language has consistently ranked in the top 5 for the last few years (if not decades). The most prominent and often cited case of a non-trivial program written in C is the Linux kernel - and you might say - yeah, but the decision to use C was made in the nineties; if it were decided now, surely they'd go OOP (i.e. C++). Not so, according to Linus Torvalds - it would still be C! (incidentally, 'git', Linus' other project, is C too). 

At the current job, we write software for network switches, and guess what - it's all C without a shred of C++ (thank god).

There is a perception out there that C is obsolete and nobody uses it anymore, which seems to me like a form of delusion induced by the OOP marketing machine.

Quote

Robert Martin

Oh well - whatever this man says, writes or advocates does not affect my position in any ways; if anything, it makes me more suspicious.

Quote

The advice to favor Composition over Inheritance is not to say "Do not use Inheritance" but instead "Use Inheritance correctly". (...) Thus, people realized that instead of having massive chains of inheritance you should instead compose objects of smaller parts, and those smaller parts may be sub classes of other parent types, or they could be their own thing. (...) But inheritance is still important. If you can follow the Liskov Substitution Principal, then it is ok to use inheritance.

It is this tendency of OOP being revised (refined?) that I take issue with. Ok, so there is this 'inheritance' concept that was developed/discovered. Let's use it! Here's how you use it: <how-to-use-v1>. 10 years later - um, no, actually, that wasn't right - we got unforeseen problems when applied in practice, so let's use it this way: <how-to-use-v2>. 10 years later... See the pattern??

Quote

Is OOP perfect? No, but for it's uses it is much better than procedural programming.

You did notice that the body of an OOP 'method' is procedural code. So OOP can't be 'better', because it's not a separate thing that sits at the same level and could be compared. Underlying it is procedural code, so it's more like a layer on top.

So maybe you meant: better than procedural programming *without* OOP added to it? No. Allow me to elaborate:

Suppose we have problem P, and two programs: program O, written in OOP style, and program S, written in procedural style. Both programs solve the problem P completely and correctly. Compare the programmatic complexity of the two: program O is composed of procedural code + OOP overhead; program S is procedural only. So program S is *simpler* in structure and complexity. The conclusion follows by the 'simple is better' principle.

Quote

Your argument is essentially that the existence of screws invalidates the need for nails, which is simply not true.

I like your analogy, but my argument is more like : if the nails do the job then there's no need for screws.

I would suggest you read 'Notes on Structured Programming' by E. Dijkstra, but don't read it like prose - instead read it like a math book: do the exercises, try to get an understanding of the concepts and the proofs. If you are like me, it will give you an appreciation for the beauty, simplicity and power of the ideas presented; it's an eye opener. Then, compare the insights and the clarity of thought you gained with the fuzziness and flaky concepts that you'd get from the OOP camp. In my opinion - there's no contest.

It's baffling to me that the structured programming paradigm has not gained a foothold in the industry as OOP did. I blame it on the marketing machine of OOP (which SP does not have).

 

October 19, 2018 11:18 PM
Oberon_Command
1 hour ago, Dim0thy said:

The most prominent and often cited case of a non-trivial program written in C is the Linux kernel - and you might say - yeah, but the decision to use C was made in the nineties; if it were decided now, surely they'd go OOP (i.e. C++). Not so, according to Linus Torvalds - it would still be C! (incidentally, 'git', Linus' other project, is C too). 

Note that the poster you're quoting said applications. Kernels and embedded software for network switches aren't really what I would call "applications." :) 

Applications are things like Word, Maya, Call of Duty, Steam... Consumer software that runs on top of an operating system.

1 hour ago, Dim0thy said:

It is this tendency of OOP being revised (refined?) that I take issue with. Ok, so there is this 'inheritance' concept that was developed/discovered. Let's use it! Here's how you use it: <how-to-use-v1>. 10 years later - um, no, actually, that wasn't right - we got unforeseen problems when applied in practice, so let's use it this way: <how-to-use-v2>. 10 years later... See the pattern??

The software industry is a young field. It's continually evolving. We're constantly learning with everything we do; in fact, I would say that the death of a programmer's career can often be pinpointed by looking for the point at which he stopped learning new things. A programmer who stops learning is the walking dead, doomed to stagnation and to eventually be superseded by the folks who kept growing. I suspect this "how-to-use-v2" will eventually be superseded, too.

You make this out like it's a bad thing. Why is that?

October 19, 2018 11:30 PM
swiftcoder
3 hours ago, Oberon_Command said:

Kernels and embedded software for network switches aren't really what I would call "applications." :)

I've always assumed that Linus' objection is to the rest of the baggage that goes along with "not C", rather than the OO part. There's quite a bit of OO in the Linux kernel.

Besides, Linus clinging to C is a bit of an outlier at this point: operating systems written in high-level languages abound... Symbian was written in C++, and prior to the emergence of iOS/Android, it was the dominant mobile OS. Microsoft Research's Singularity uses a C# kernel and device drivers written over a C++ HAL. Redox is shaping up to be a fairly complete OS written entirely in Rust. And of course, if you go back a bit in computing history, LISP and Smalltalk were each operating systems in addition to programming languages for the machines they were originally developed on.

October 20, 2018 03:23 AM
Denis163

Thanks for a very interesting article. But I guess the treatment of the SOLID principles (or better to say, hints) can vary. For example, what if removing the "virtual Update() anti-pattern" is an OCP violation? Because for every added game feature you need to edit your main Update() function. It's not cool. But on the other hand, the "virtual Update version" is not compatible with DOD. But as you can see, this "anti-pattern" has not been removed even in the Unity ECS framework led by Mike Acton

November 13, 2018 09:13 AM
Hodgman
1 hour ago, Denis163 said:

Thanks for a very interesting article. But I guess the treatment of the SOLID principles (or better to say, hints) can vary. For example, what if removing the "virtual Update() anti-pattern" is an OCP violation? Because for every added game feature you need to edit your main Update() function. It's not cool. But on the other hand, the "virtual Update version" is not compatible with DOD. But as you can see, this "anti-pattern" has not been removed even in the Unity ECS framework led by Mike Acton

I don't see a link between virtual-void-Update and OCP. OCP says that it should be possible to change the implementation of a class without affecting any of the existing users of its interface. 

With virtual-void-Update, you have a single user that looks like: for each thing, call Update. That's such a non-algorithm... It's so vague as to almost not be an algorithm at all! The dumbness of this algorithm means that the interface shouldn't exist, and if the interface doesn't exist, there's no OCP to worry about :)

However, there are slightly less bad versions of the virtual-void-Update pattern. e.g. you might have lists of objects that need to be "updated" (still dumb) per frame -- instead of having a virtual update per item, you can have one per list.


#include <functional> // (added) needed for std::function
#include <vector>     // (added) needed for std::vector

typedef std::vector<std::function<void(float)>> OnTickVector;

class GameLoop
{
public:
  OnTickVector m_onTickCallbacks; // things that need to be "updated" per frame

  //our main update loop function
  void Tick(float dt)
  {
    for( auto& update : m_onTickCallbacks )
      update(dt);
  }
};

...
//let's add a new entity type
class Monster
{
public:
  void Update(float deltaTime) { foo++; } // note - not virtual
  int foo;
};

template<class T>
class EntityPool
{
public:
  EntityPool(OnTickVector& onTick) { onTick.push_back( [this](float deltaTime) // basically adds some code to the main Tick function!
    {
      for( auto& m : m_pool )
        m.Update(deltaTime);
    }); }
private:
  std::vector<T> m_pool;
};

//these lines need to be added to the GameLoop class to add monsters to the update cycle
EntityPool<Monster> m_monsters;
GameLoop()
  : ...
  , m_monsters(m_onTickCallbacks)

This is still just as bad a design as the typical virtual-void-Update architecture, but performance will be way better because you've got a nice non-virtual loop per entity type, and entities are contiguously allocated per type.
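
For anyone who wants to play with the idea, here is a self-contained, compilable variant of the sketch above (the elided constructor glue replaced with hypothetical member initializers; names are illustrative only):

```cpp
#include <cassert>
#include <functional>
#include <vector>

using OnTickVector = std::vector<std::function<void(float)>>;

// Non-virtual per-type update, registered once per pool rather than per object.
struct Monster {
    void Update(float) { foo++; }
    int foo = 0;
};

template<class T>
struct EntityPool {
    explicit EntityPool(OnTickVector& onTick) {
        onTick.push_back([this](float dt) {     // one callback per pool...
            for (auto& m : pool) m.Update(dt);  // ...one tight loop per type
        });
    }
    // note: the pool must not be moved after registering (captured by pointer)
    std::vector<T> pool;
};

struct GameLoop {
    OnTickVector onTickCallbacks;
    EntityPool<Monster> monsters{onTickCallbacks}; // registers itself on construction

    void Tick(float dt) {
        for (auto& update : onTickCallbacks) update(dt);
    }
};
```

Construction order matters here: onTickCallbacks is declared before monsters, so the pool can safely register its callback in its constructor.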

November 13, 2018 10:31 AM
trapazza
On 10/9/2018 at 1:39 PM, Gnollrunner said:

I'm a big proponent of bad/faux OOP in C++...... I use multiple inheritance everywhere, class hierarchies 15 levels deep, templates with 20 parameters where the base class is often a template parameter, and I pretty much never use the standard library. On top of that I tend to make everything public unless I know for sure it should be private upfront, and then I go back and privatize stuff and clean things up after it works (assuming I don't forget).

I prioritize in this order:

1) Elegant  algorithms & data structures

2) Robust memory management

3) Speed

4) The structure of the code and how it looks to others.

Mind you I'm not trying to sell anyone on my approach :D

Just out of curiosity, I'd love to see how having "class hierarchies 15 levels deep" is actually useful.

November 13, 2018 01:21 PM
Gnollrunner
54 minutes ago, trapazza said:

Just out of curiosity, I'd love to see how having "class hierarchies 15 levels deep" is actually useful.

Well, what generally happens is I have a base class with reference counting, so that's one.

Then I have a general class which holds a heap address. Because I have objects with lots of pointers, I make my pointers 32 bits but address every eight bytes, which gives me a 16 gig heap taking relative addressing into account. This saves a lot of memory and lets me do specialized heaps for different kinds of objects. For instance I have thread-safe heaps and non-thread-safe heaps and so forth, and I also use a lot of slab-allocation-type stuff. So that's two.
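
A minimal sketch of how such a compressed pointer might work (my reading of the scheme described above, not the poster's actual code): store a 32-bit count of 8-byte slots relative to a heap base, so a 4-byte handle can span a multi-gigabyte heap (2^32 slots at 8 bytes each, or half that if the offset is signed).

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sketch of a 32-bit "pointer" into a base-relative heap, assuming all
// allocations are 8-byte aligned (hypothetical code, names are mine).
struct RelativeHeap {
    alignas(8) unsigned char storage[1024]; // stand-in for the real heap

    template<class T>
    uint32_t toOffset(const T* p) const {
        std::ptrdiff_t bytes = reinterpret_cast<const unsigned char*>(p) - storage;
        return static_cast<uint32_t>(bytes >> 3); // count 8-byte slots, not bytes
    }

    template<class T>
    T* fromOffset(uint32_t off) {
        return reinterpret_cast<T*>(storage + (static_cast<std::size_t>(off) << 3));
    }
};
```

The memory saving comes from every intrusive pointer field shrinking from 8 bytes to 4, at the cost of one add-and-shift on each dereference.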

Then I'm doing voxel stuff so I have a basic cell which holds geometry, then I have one with bounds, then one that branches, then a voxel itself, then a prism voxel and cube voxel, then a special prism voxel for a sphere, then all those for the physics model and the view model. So that's a few more.

In general I often use the vtable pointer in lieu of a data member. For instance my prisms have different orientations, and a subroutine might have to do something slightly different because of that. So I subclass it, maybe into a few child classes. This way, instead of putting the logic in the routine, I often see if I can make a template function with lots of parameters that customize the behavior. That way a virtual function call just makes the decision before it goes into the routine, and I get by with less logic when it actually runs. To do this I generally have a template with a parameter as a base class.

Anyway, the class hierarchy levels kind of all add up. I could probably use multiple inheritance in places, or composition, but those have their own issues, like pointer shifting for MI and extra message passing for composition.

I use MI sometimes though. For instance I don't really use containers per se. I inherit off of one or more base classes that I call hangers, which implement various kinds of lists. The hangers themselves are templates with an ID that enforces what objects can go into what kind of list.

I'm not really promoting my coding style or anything. It's just how I do things.

 

November 13, 2018 03:09 PM
Mussi

Great article Hodgman, but shouldn't the title read: OOP is dead, long live SOLID-C? In all seriousness though, OOP is a vague term; I wish people would stop using it.

December 10, 2018 12:39 PM
Tudor Nita

Aras's example has been used in many a production-ready (read: shipped) game and engine. It's far from a straw-man example IMHO. It seems like an (anti?)pattern that works well with quick turn-around times, ever-changing requirements, and the realities of big-business game dev.


Out of curiosity has anyone seen the "proper OOP" example actually used in a non-trivial, modern, title? I am asking from honest curiosity.

January 07, 2019 04:50 PM
Hodgman
3 hours ago, Tudor Nita said:

Aras's example has been used in many a production-ready (read: shipped) game and engine. It's far from a straw-man example IMHO. It seems like an (anti?)pattern that works well with quick turn-around times, ever-changing requirements, and the realities of big-business game dev.


Out of curiosity has anyone seen the "proper OOP" example actually used in a non-trivial, modern, title? I am asking from honest curiosity.

I'm not speaking from an ivory tower :) I've worked on a dozen console games (PS2 to PS4 era), with a few of the older ones being "bad 90's OOP" (Unreal Engine 2 was very Java inspired...) and most of the newer ones being the actual decent composition-based OOP that I'm encouraging here. There's a reason why there's a lot of literature around that theory -- the whole point is to make massive real world projects maintainable and shippable. 

I've done game jams and hobby stuff with unity and do appreciate their entity/component framework for hacking stuff together... But I still think it's an anti-pattern. IMHO, it encourages hidden / non-explicit communication / dependence between components, which in the long term, obfuscates your code and makes it harder to fix, maintain and optimise. They're moving away from that model towards explicit data dependencies now, too :D

Professionally, I've done a few games with Gamebryo in the early 00's, which developed a similar entity/component model to Unity and enjoyed it initially but grew to dislike the model over time, too. 

January 07, 2019 08:01 PM
Tudor Nita

Composition-based OOP (where it makes sense) is great, and I can see it being used locally (sub-component level), or in any other piece of hot code where you really need to optimize after the system is designed and approved. Losing the abstraction layer of components, for example, seems a bit heavy-handed to me. I would be hard pressed to see our hundreds of components done this way. It could be done, I'm sure, but at what cost to development/maintenance time? The example as far as I understand it (based on this tiny snippet) has a similar problem to Unity's ECS framework: it requires an upfront, close-to-complete system design and is rather inflexible in nature. Any more than a trivial gameplay change requires a programmer and at least one recompile.

Does it not create a host of other complications like designer-specified data, toolset integration, regular inheritance use-cases like automatic script bindings, etc.? One of our major risks is iteration speed, and it's only very rarely programming iteration speed (at least in the mobile world). Granted, on more constrained platforms with fiercer competition (read: the graphics race), I can see this as being an affordable compromise.

January 07, 2019 09:48 PM
Hodgman
On 1/8/2019 at 8:48 AM, Tudor Nita said:

Does it not create a host of other complications like designer-specified data, toolset integration, regular inheritance use-cases like automatic script bindings, etc ? One of our major risks is iteration speed... Any more than a trivial gameplay change requires a programmer and at least one recompile.

I kind of need to write part 2 of this blog to cover that, but, no. You can get all the things that these kinds of "component frameworks" give you (such as data-driven entities/composition) without having to go down the "everything is a polymorphic component" road. 

On 1/8/2019 at 8:48 AM, Tudor Nita said:

Losing the abstraction layer of components, for example, seems a bit heavy-handed to me.

Depends what you mean by the component abstraction. If you just mean you don't want to lose particular abilities, like script bindings or data-driven entities, then as above, you don't have to. 

If you mean you don't want to lose the service locator pattern (e.g. Parent->GetComponent<T>()), then we're philosophically opposed. IMHO that's an anti-pattern that obfuscates your code and makes it less maintainable in the long run. Again, the point of this stuff is to make software that's easily reusable, fixable, editable, reconfigurable. And there's a lot of theory on how to do that without having to make a "Component" base class. 
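
The contrast being drawn here can be sketched in a few lines (hypothetical Transform/Mover types, not from any real framework): with a service locator the dependency is discoverable only by reading the method body, whereas an explicit parameter surfaces it in the signature.

```cpp
#include <cassert>

// Hypothetical component, sketching the contrast described above.
struct Transform { float x = 0.0f; };

// Service-locator style (the anti-pattern under discussion): the dependency
// on Transform is hidden inside the method body, e.g.
//   void Mover::update() { GetComponent<Transform>()->x += speed; }
// Readers can't see the coupling without opening the implementation.

// Explicit-dependency style: the coupling is visible at every call site,
// and the caller controls exactly which data the code touches.
struct Mover {
    float speed = 1.0f;
    void update(Transform& t) const { t.x += speed; }
};
```

Callers (and readers) now see exactly which data Mover touches, which is the maintainability argument being made.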

January 09, 2019 02:16 AM
nhold

I really liked this article, great job.

Ever since I implemented my own ECS I have realised it was a formalised term for a specific idea of relational data->algorithm setup. I have been trying to discuss this with peeps on reddit for a long time (3 or more years!) and how OOP doesn't mean large inheritance tree. It's refreshing to see someone who can explain in a much better way than me write an article.

Thanks heaps, I'll be linking to this article a bunch.

January 15, 2019 06:46 AM
A4L

Just a comment / question from a noob that all of this is over my head.. but in the article you talk a lot about "in the wild" and trends "everyone does" and all that. I just wanted to ask... If something has a correct way of being done and it works well... but no one ever dose it correctly... dose it matter if it works well considering only a tiny fraction of people do it correctly to experience its befits anyway?

Like Relativity is more "correct" than the Newtonian descriptions of gravity, but the Newtonian one is easier to calculate and still produces very good results (to a point)... so is it wrong that we use Newtonian equations ... or should we all try and use the Einstein ones that only a tiny fraction of people truly understand properly and are more complex so people make more errors .(this is just a metaphor , it may not work under strict scrutiny, but I think it shows my question)

January 15, 2019 09:02 AM
Alexandros Liarokapis

This is my take on the matter. 

What ECS requires is basically a logical grouping of components and some way to retrieve Views of component tuples from the entities that have them. 

One interesting approach in modern C++ is creating a trait that detects if a component can be retrieved from an instance of a class. Then we can have a tuple of containers of different game objects of the form that you propose and use template metaprogramming to construct Views from the tuple that only loop through the containers that actually have the specified components.

This is more or less what you are proposing in the article, although you do this manually: you don't do any template metaprogramming, you immediately act on the containers via their member functions. 

But let's take a closer look at the classes.


struct entityA
{
  ComponentA component_a;
  ComponentB component_b;
  ComponentC component_c;
};

struct entityB
{
  ComponentA component_a;
  ComponentC component_c;
  ComponentD component_d;
};

...

These do not actually provide much abstraction, they are complete aggregates of their components. In fact we can immediately see that most subsets of the available components could be valid entities. 

In our "classic" approach (both yours and the TMP one), what is needed is to know all the entity types to be used at compile time. No matter if we do this using TMP or manually, the result is the same. This is a good solution if you don't plan to have some form of world editor (you could use some form of DSL instead), but it breaks down when you do need one. In that case it is still acceptable to have a compile-time dependency on the components, but the specific entities, the specific subsets of the component set, need to go. This means that the mapping of entities to components needs to be dynamic, and hence the usual entity-as-id approach.
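
A sketch of the detection-trait idea described above, in C++17 (hypothetical names, not a full framework): uniform accessor overloads per entity type, a SFINAE trait over them, and a "system" that statically skips pools whose entity type lacks the component.

```cpp
#include <cassert>
#include <type_traits>
#include <utility>
#include <vector>

// Hypothetical component/entity types, echoing the structs above.
struct ComponentA { int a = 1; };
struct ComponentB { int b = 2; };
struct EntityA { ComponentA component_a; ComponentB component_b; };
struct EntityB { ComponentA component_a; };

// Uniform accessors: one overload per (entity, component) pair that exists.
ComponentA& get_component(EntityA& e, ComponentA*) { return e.component_a; }
ComponentB& get_component(EntityA& e, ComponentB*) { return e.component_b; }
ComponentA& get_component(EntityB& e, ComponentA*) { return e.component_a; }

// Trait: can component C be retrieved from entity E?
template<class E, class C, class = void>
struct has_component : std::false_type {};
template<class E, class C>
struct has_component<E, C,
    std::void_t<decltype(get_component(std::declval<E&>(),
                                       static_cast<C*>(nullptr)))>>
    : std::true_type {};

// A "system" that only touches pools whose entities actually have ComponentA;
// pools without it compile to a no-op instead of failing to compile.
template<class Container>
int sum_a(Container& pool) {
    using E = typename Container::value_type;
    if constexpr (has_component<E, ComponentA>::value) {
        int sum = 0;
        for (auto& e : pool)
            sum += get_component(e, static_cast<ComponentA*>(nullptr)).a;
        return sum;
    } else {
        return 0;
    }
}
```

A View over several heterogeneous containers would then just apply such a system to each element of a tuple of pools, skipping the ones the trait rejects.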

January 29, 2019 03:44 PM
Simplex

"The downside of this style is that for every single new object type that we add to the game, we have to add a few lines to our main loop. I'll address / solve this in a future blog in this series."

Still waiting for this :)

You might also address problems like:
-Unit testing / TDD
-Cache
-SIMD
-Parallelism
-Notifying other components / systems about stuff like transform changes (eg: updating culling data only for changed objects)
-Naming and using a single "Update" for components that are involved in multiple behaviours
-Updating components created from data, script, etc
-any other problems addressed in the first part of this video:

https://www.youtube.com/watch?v=W3aieHjyNvw

July 11, 2019 02:32 PM
joseph4

@hodgman
Hey, thanks for providing the 10% argument for OOD.
Another post I read is the Factorio game devlog; they also use OOD instead of ECS: https://forums.factorio.com/viewtopic.php?t=24569
I'm trying to start doing gamedev as a hobby (I've been programming in another field for a while, though).
I keep getting confused because 90% of online resources are about the Unity game engine and its “new” ECS way.
However, I'm personally more comfortable with the uncool but proven way of programming.
Is there any plan to continue the blog series?
I saw you already pushed some commits to the GitHub repo on immutability & allocators; is there any way to get a blog post on those changes,
especially on how to destroy existing objects?
Thanks again for the help on gamedev OOD.
(Sorry for the broken English, btw.)

March 04, 2020 08:59 PM
aganm

Still hoping for a follow up on this! This is very valuable info.

June 15, 2020 07:25 AM
AnarchoYeasty

Part 2 still incoming?

January 25, 2021 10:35 PM
ThatJenkins01

Are there any resources on good OOD design, or does anyone have links to good “OOP” codebases in C++?

July 14, 2022 11:25 AM