guide

what is this again?

This is a guide on how to use cpp-toolbox from a more holistic point of view, along with how you should combine the subprojects together to make things. First read this:

The engine doesn't come in one big piece: you grab the parts you need to make your game (like a toolbox), so in that sense it is a modular engine. This engine is also not for people who don't know how to program; it is made to speed up regular game development for programmers who use c++.

The goal is to be able to create 3d games with high performance and compatibility with older hardware. To do this it provides utilities that wrap around existing technology such as opengl for graphics, openal for sound, enet for multiplayer and jolt for physics. The wrappers are developed to the point where you can focus on creating games rather than on the implementation.

cpp-toolbox also follows the unix philosophy: each component of the engine has been separated out, tested in isolation and does its job well. These pieces can then be plugged into each other to safely create a complex system (this is facilitated using git submodules).

getting started

In order to make games you need to be able to create images on a screen, play sounds and handle keyboard and mouse input. To make the images we use opengl, to make the sounds we use openal and for the input we use glfw. The subprojects are usually based around these three concepts and either relieve pain points of existing tools or implement functionality that doesn't exist yet.

input

glfw's callbacks for input handling are nice, but you can only access game-related information through the user pointer. Over time your callbacks tend to need more and more state, and sneaking all of that data through the user pointer starts to get ugly. This problem is solved by the glfw-lambda-callback-manager, which lets you specify your callbacks as lambda functions, so you can do whatever you need to do in them (capturing whatever state you like) without having to stuff the user-data pointer full of things.
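
Under the hood the idea is roughly this: one object owns std::function callbacks, gets stored in the user pointer exactly once, and non-capturing trampolines forward glfw's raw callbacks to your lambdas. The sketch below is only illustrative (the InputCallbacks type is made up for this example, it is not the actual glfw-lambda-callback-manager api):

```cpp
// Minimal sketch: one object owns std::function callbacks, lives in the GLFW
// user pointer, and non-capturing trampolines forward the raw callbacks to it.
#include <GLFW/glfw3.h>
#include <functional>

struct InputCallbacks {
    std::function<void(int key, int scancode, int action, int mods)> on_key;
    std::function<void(double x, double y)> on_cursor_move;

    void attach(GLFWwindow *window) {
        glfwSetWindowUserPointer(window, this);
        glfwSetKeyCallback(window, [](GLFWwindow *w, int key, int sc, int action, int mods) {
            auto *self = static_cast<InputCallbacks *>(glfwGetWindowUserPointer(w));
            if (self && self->on_key) self->on_key(key, sc, action, mods);
        });
        glfwSetCursorPosCallback(window, [](GLFWwindow *w, double x, double y) {
            auto *self = static_cast<InputCallbacks *>(glfwGetWindowUserPointer(w));
            if (self && self->on_cursor_move) self->on_cursor_move(x, y);
        });
    }
};

// usage: the lambdas can capture any game state they need, no user-pointer juggling.
// InputCallbacks input;
// input.on_key = [&player](int key, int, int action, int) { /* move player */ };
// input.attach(window);
```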

graphics

Graphics wise I would recommend learning from LearnOpenGL, as it will give you a good starting point for understanding cpp-toolbox. After going through the basic tutorials you'll start to see that working with shaders is painful: uniform variables are looked up by string name (so a typo only shows up as a silent failure at runtime), and vertex attribute variables have to be set up by hand for every shader.

To account for these shortcomings we wanted a system that would make sure the user never has to type the name of a uniform variable when activating and binding uniforms, and that would automatically set up vertex attribute variables. Thus we created the shader-standard. This subproject ensures that each variable you use is one of the pre-defined variables in the standard, which lets us build a system that interacts with shaders via enums, making us far less prone to errors when interacting with shaders.
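
As a rough illustration of the enum idea (the enum, names and helper below are made up for this example, they are not the shader-standard's actual definitions):

```cpp
// Illustrative sketch: uniforms are identified by enum values, and the string
// names live in exactly one lookup table, so a typo becomes a compile error
// instead of a silent runtime failure.
#include <glad/glad.h>  // or whichever GL loader you use
#include <array>
#include <cstddef>

enum class StandardUniform { local_to_world, world_to_camera, camera_to_clip, count };

constexpr std::array<const char *, static_cast<std::size_t>(StandardUniform::count)>
    uniform_names = {"local_to_world", "world_to_camera", "camera_to_clip"};

inline void set_mat4_uniform(GLuint program, StandardUniform u, const float *mat4) {
    GLint loc = glGetUniformLocation(program, uniform_names[static_cast<std::size_t>(u)]);
    glUniformMatrix4fv(loc, 1, GL_FALSE, mat4);
}

// usage:
// set_mat4_uniform(shader_program, StandardUniform::camera_to_clip, &projection[0][0]);
```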

Additionally we've implemented #include's in glsl shaders so that you can import other files into your shaders, improving readability and increasing code re-use. This was done by adding a "compilation" step for your shaders which processes the include statements first and then uses the generated shaders.
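
A naive version of that compilation step could look something like this (a sketch of the idea only, not the actual implementation; it has no include guards or cycle detection):

```cpp
// Naive sketch of the preprocessing step: recursively replace lines of the
// form `#include "file.glsl"` with the contents of that file before handing
// the source to glCompileShader.
#include <fstream>
#include <sstream>
#include <string>

inline std::string preprocess_shader(const std::string &path) {
    std::ifstream in(path);
    std::ostringstream out;
    std::string line;
    while (std::getline(in, line)) {
        const std::string directive = "#include \"";
        auto pos = line.find(directive);
        if (pos != std::string::npos) {
            auto start = pos + directive.size();
            auto end = line.find('"', start);
            out << preprocess_shader(line.substr(start, end - start)) << '\n';
        } else {
            out << line << '\n';
        }
    }
    return out.str();
}
```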

model loading

If you followed along with the learnopengl site you might end up building a model loading system which is tightly coupled with opengl. In cpp-tbx we go against that, because model loading isn't only for graphics rendering: when loading in models we might also want to load them into the physics simulation, and in that case we don't need any texture information at all, so loading it would waste resources, especially in the context of a server simulation which doesn't even have a gui.
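
As a sketch of what that decoupling looks like, the loader only produces plain cpu-side data and each consumer takes what it needs (these types and functions are made up for the example, they're not cpp-toolbox's actual ones):

```cpp
// Sketch of keeping model data independent of any graphics API: the loader
// only produces plain CPU-side data, and each consumer takes what it needs.
#include <string>
#include <vector>

struct MeshData {
    std::vector<float> positions;   // xyz triples, enough for physics
    std::vector<float> normals;     // only needed for rendering/lighting
    std::vector<float> uvs;         // only needed when texturing
    std::vector<unsigned> indices;
    std::string texture_path;       // never touched on a headless server
};

MeshData load_mesh(const std::string &path);       // pure CPU-side parsing
void upload_to_gpu(const MeshData &mesh);          // client/renderer only
void add_to_physics_world(const MeshData &mesh);   // uses positions + indices only
```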

scripted events

The scripted event system is one of the most important systems in games. Programmatic things, such as a rock hitting the ground, are usually quite easy to synchronize with a sound and some particle effect: we just have an on-collision callback and run those effects there, all at once.

There are other types of events that can occur in-game, where the events are "scripted". In this context just think of a film director who has a pyrotechnician at hand, a sound guy over the radio and so on: when they start the scene they need to make sure that all of those events are synced together, because, for example, real flames that might burn down a building would not be safe for the actors.

A scripted event is the exact same thing, but in the context of a game engine where you're trying to make a realistic scene that requires interacting with multiple sub-systems and synchronizing their activation. With such a system you can handle reload animations, sword swings, explosions, cutscenes, menus and anything else that needs to run on its own without having to be driven by some in-game callback.
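
A minimal sketch of the idea, assuming a timeline of (time offset, action) pairs that gets advanced each frame (this is illustrative only, not the actual cpp-toolbox event system):

```cpp
// Minimal sketch of a scripted event: a list of (time offset, action) pairs
// that a timeline advances each frame, so animation, sound and particles from
// different sub-systems stay in sync.
#include <functional>
#include <utility>
#include <vector>

class ScriptedEvent {
  public:
    void add(double offset_seconds, std::function<void()> action) {
        steps_.emplace_back(offset_seconds, std::move(action));
    }
    void update(double dt) {
        elapsed_ += dt;
        for (auto &[offset, action] : steps_)
            if (action && elapsed_ >= offset) { action(); action = nullptr; }  // fire once
    }

  private:
    double elapsed_ = 0.0;
    std::vector<std::pair<double, std::function<void()>>> steps_;
};

// usage for a reload: the animation, the magazine-eject sound and the ammo
// refill each fire at their own moment on one timeline.
// ScriptedEvent reload;
// reload.add(0.0, [&] { animations.play("reload"); });
// reload.add(0.4, [&] { audio.play("mag_out.ogg"); });
// reload.add(1.2, [&] { player.refill_ammo(); });
```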

the common game rendering pipeline

Suppose we had a ball which is moving through the air. As a beginner in opengl you might compute the absolute position of each vertex of the ball over time and upload those new vertices to the GPU every frame. While this might work fine in a smaller game, if you did this to the geometry of an entire map each time the camera moved, it would quickly become the bottleneck of your rendering.

Since you've probably already run into the CWL v matrix transformation (the camera, world and local matrices applied to a vertex v), you know that it's much better to instead rotate the entire world around the camera. This solves the problem above: you upload the vertex position data to the gpu once, and from then on only the matrix which holds the yaw and pitch of the camera changes each frame.

If you continue with this idea it becomes clear that the best way to render all of the objects in your scene is to keep each object's vertex positions stationary, give each object its own transform matrix, and only upload that changing matrix each frame, which removes the bottleneck.
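
Concretely, the per-object approach looks roughly like this with glm (the uniform names and the draw_ball helper are just for illustration, not a fixed part of cpp-toolbox):

```cpp
// Sketch of the per-object approach: the vertex data never changes, only a
// small local_to_world matrix is re-uploaded per object per frame.
#include <glad/glad.h>  // or whichever GL loader you use
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void draw_ball(GLuint program, GLuint ball_vao, GLsizei index_count,
               const glm::vec3 &ball_position, const glm::mat4 &world_to_camera,
               const glm::mat4 &camera_to_clip) {
    glm::mat4 local_to_world = glm::translate(glm::mat4(1.0f), ball_position);

    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "local_to_world"), 1, GL_FALSE,
                       glm::value_ptr(local_to_world));
    glUniformMatrix4fv(glGetUniformLocation(program, "world_to_camera"), 1, GL_FALSE,
                       glm::value_ptr(world_to_camera));
    glUniformMatrix4fv(glGetUniformLocation(program, "camera_to_clip"), 1, GL_FALSE,
                       glm::value_ptr(camera_to_clip));

    glBindVertexArray(ball_vao);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, nullptr);
}
```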

Since you understand the graphics pipeline, you know that our shaders work per-vertex, and thus for each vertex we need to know which matrix should transform it. The most naive way of doing this would be to copy-paste the same model matrix for every vertex of an object and pass it into the shader so each vertex has access to it. This is a horrible idea: for each vertex of 3 floats you're now storing 16 extra floats, which is more than a 5x increase in data going in compared to just re-uploading all the vertices in their new positions, so clearly this approach is incorrect.

Since cpp-toolbox currently targets opengl 3.3 (we want support on old machines), we don't have access to ssbo's and only have ubo's at our disposal. Even then, on the machine I'm typing this on I only have space for 1024 mat4's in a ubo, which is cutting it too close for comfort when you potentially have a couple of particle emitters running at any time. Therefore an approach we can take is to store the matrices in texture arrays instead.
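
The sketch below shows the core packing/fetch idea using a plain 2d float texture rather than an actual texture array, just to keep it short (the layout and names are illustrative, not cpp-toolbox's exact scheme):

```cpp
// Sketch of texture-as-matrix-storage for opengl 3.3: each mat4 is packed into
// 4 RGBA32F texels, and the vertex shader rebuilds it with texelFetch using a
// per-vertex (or per-instance) object index.
#include <glad/glad.h>  // or whichever GL loader you use
#include <glm/glm.hpp>
#include <vector>

GLuint create_matrix_texture(const std::vector<glm::mat4> &matrices) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // width = 4 texels per matrix (one column each), height = number of matrices
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 4, static_cast<GLsizei>(matrices.size()),
                 0, GL_RGBA, GL_FLOAT, matrices.data());
    return tex;
}

// In the vertex shader (GLSL 330), something like:
//   uniform sampler2D matrix_storage;
//   mat4 fetch_matrix(int id) {
//       return mat4(texelFetch(matrix_storage, ivec2(0, id), 0),
//                   texelFetch(matrix_storage, ivec2(1, id), 0),
//                   texelFetch(matrix_storage, ivec2(2, id), 0),
//                   texelFetch(matrix_storage, ivec2(3, id), 0));
//   }
```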

