In part 1 of this article, we considered the general concept of a build pipeline and its advantages, and we learned why build pipeline solutions that are standard in other development disciplines don’t work as well in game development.
Now, let’s take a look at how game developers can use a build pipeline in their projects.
WHAT BUILD PIPELINE OPTIONS DOES A GAME DEVELOPER HAVE?
There are many different tools and solutions to help game developers with their builds. Let’s take a look at some of the options I have experience with. Of course, other options could be mentioned as well, like Plastic SCM, but for this article I will stick to the ones I have used or am currently using.
CI/CD server systems (Jenkins, TeamCity, etc.)
Once installed and configured, these types of CI/CD systems are in my opinion the best and “easiest” solution in game development. The hardware is in your own hands and can be adapted, updated, and maintained accordingly. You can check for errors and analyse them directly on the system, which – with the appropriate number of nodes – can scale well. However, the maintenance effort and the necessary adaptation in case of system/engine updates needs to be emphasised as well, as this is a recurring effort that should not be underestimated. Apart from that, such systems can easily be transferred to and re-used for other projects.
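To make this concrete, here is a minimal sketch of what a single build step on such a node might invoke. It assembles a Unity batch-mode command line; the executable path, project path, log file, and the BuildScripts.BuildAndroid method are hypothetical placeholders – the actual build logic would live in a static C# method inside the project.

```python
import subprocess

def unity_build_command(unity_exe: str, project: str, method: str, log: str) -> list[str]:
    """Assemble a Unity batch-mode build invocation.

    `method` names a static C# method in the project (here the hypothetical
    BuildScripts.BuildAndroid) that performs the actual build call.
    """
    return [
        unity_exe,
        "-batchmode",               # run without the editor UI
        "-quit",                    # exit once the method returns
        "-projectPath", project,
        "-executeMethod", method,
        "-logFile", log,
    ]

def run_build(cmd: list[str]) -> int:
    # On a Jenkins node, this call would be wrapped in a pipeline sh/bat step.
    return subprocess.run(cmd).returncode

cmd = unity_build_command("Unity", "/builds/MyGame", "BuildScripts.BuildAndroid", "build.log")
print(" ".join(cmd))
```

A CI server like Jenkins would run such a script once per node and per target platform, collecting the log file for the error analysis mentioned above.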
Engine build systems (Unity Cloud Build, Lumberyard Waf Build, etc.)
Many engine vendors provide their own build pipeline solution for their engine. Unfortunately, this often comes with extra costs, so many developers refrain from using these solutions for indie or small projects. Furthermore, the in-house solutions are usually poorly extensible or designed only for certain target platforms. If, for example, you want to upload to a store after building the package or extend the build process with additional pre- or post-build steps, this is simply not possible. These solutions do not offer much more than the build process and the creation of the package itself.
Custom solutions with remote server nodes
Most of the time, using rented remote machines as build nodes is a very good alternative that scales well. You can simply build your setup on an externally hosted system and upgrade or downgrade the nodes as needed. Unfortunately, there are three big problems with this solution:
- Running costs charged by the provider for high-end machines
- Mediocre network throughput of the nodes, which makes transferring large amounts of data difficult due to access and provider restrictions
- Internal security concerns, since the data and, if applicable, credentials are stored openly on the external systems
Despite these disadvantages, custom solutions are useful if you maintain them properly and if they are not too specific.
CI solutions based on OS virtualisation (Bitrise, Git CI, Docker, etc.)
Bitrise and Git CI create completely new nodes to start the build process, on which you have to install your engine, its license, and additional platform-specific tools. On top of the actual build process, which can take 20 minutes or longer, this setup is repeated with every build job, which gets annoying pretty quickly.
Bitrise has been trying to find solutions together with Unity since 2017, but they are still struggling with it. Unreal also doesn’t leave a good impression in this regard due to its C++ framework, as build times quickly explode.
Docker can be useful in certain cases but quickly gets out of hand when serving multiple platforms: the images for the various target platforms, multiplied by the different engine versions, add up to a gigantic image fleet.
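A quick back-of-the-envelope sketch shows how fast such a fleet grows. The platform and engine-version lists below are made-up examples, but the multiplication is the point: every new engine version adds one more image per platform.

```python
from itertools import product

# Hypothetical example values - real projects may track more of both.
platforms = ["android", "ios", "windows", "switch"]
engine_versions = ["2021.3.16f1", "2022.2.5f1", "2023.1.0f1"]

# One image per (engine version, platform) combination.
image_tags = [f"unity-editor:{v}-{p}" for v, p in product(engine_versions, platforms)]

print(len(image_tags))  # 3 versions x 4 platforms = 12 images to build, store, and update
```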
EVERYTHING GOOD ALSO HAS ITS DOWNSIDES
Unfortunately, creating a good build pipeline also has some negative points that can put a damper on the initial euphoria.
A build pipeline requires maintenance
Besides the time needed for the initial setup for all platforms, each new engine version, plug-in, or system update requires some maintenance time. Depending on the number of platforms and nodes involved, this can take several days or even weeks. Finding and fixing bugs on a dedicated system is usually very difficult, and cross-referencing the error logs with Stack Overflow can end up taking several days.
Maintenance can be time-consuming and unexpected, but it’s only necessary for one setup, so it is still worth the effort considering the benefits. Just imagine having to maintain all individual build variants for each developer or tester – suddenly, maintaining your build pipeline doesn’t look that bad, right?
A build pipeline requires additional internal costs
Depending on the build pipeline system and engine, additional costs may include:
- Provisioning costs for 1–x nodes (Mac/Windows/Linux machine)
- License costs for the respective nodes
- Monthly costs for the CD platform itself
- Maintenance and operating costs for your own nodes
These additional expenses may pay off for large projects or companies with many application areas but might be unnecessary overhead for indie or one-off projects.
Looking at all these potential costs may lead you to want to skip build pipelines for small-scope projects. However, having a developer do the work necessary without a build pipeline multiple times a week will probably amount to similar costs.
A build pipeline can raise security issues
In a company where the data and credentials are highly sensitive, an external build pipeline can be ruled out from the start. To make matters worse, sharing a single account across the build system might not comply with the company’s code of conduct. These obstacles can reduce a fully automated system to its bare minimum: everything becomes safe but hard to maintain, for example because only internal-level access is allowed or because non-standardised, highly customised self-made systems have to be built.
The issue of security might result in tough discussions between your IT security team and the developers – but you should not bypass or ignore those discussions! They allow you to make your workflow safe and compliant within your environment.
Manual steps can be required
Some plug-ins or tools might require manual steps which you cannot integrate into an automated build pipeline. Steam, for example, requires you to enter a five-character Steam Guard code as soon as you want to upload a build from a new environment. Unity needs to configure some internal packages before you can actually use them, which means you have to “open” Unity at least once on the system you are using. Solving these problems within a build pipeline can be very frustrating – and might even be impossible in some cases.
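As an illustration of where the manual step sits, the sketch below assembles a steamcmd upload call; the account name and .vdf build script are placeholders. The command itself is scriptable, but the Steam Guard prompt on the first run from a new machine is not.

```python
def steam_upload_command(user: str, app_build_script: str) -> list[str]:
    """Assemble a steamcmd invocation that uploads a build.

    The first run from a new machine halts interactively: steamcmd asks for
    the Steam Guard code sent by e-mail, a step that has to be completed
    once per build node before the pipeline can run unattended.
    """
    return [
        "steamcmd",
        "+login", user,                      # password is prompted or cached by steamcmd
        "+run_app_build", app_build_script,  # .vdf file describing the depot layout
        "+quit",
    ]

print(" ".join(steam_upload_command("build_account", "app_build.vdf")))
```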
The good thing is that at this point, you will know where such issues might appear and which may have to be solved manually.
Setting up a build pipeline always makes sense in the long run. Since you can easily use the pipeline over several projects, the initial effort is minimised with each additional project and a general standard can be established. As soon as the very first code of a project exists, you should set up a build pipeline.
Even if you spend some time setting up and maintaining your build pipeline, the overall time savings are still significant. For example, if I were to individually build all variations for one of our large game projects, I would have to spend around 8 hours! Having multiple nodes in our Jenkins pipeline shrinks the time spent on these tasks to around 2 hours, including building and uploading.
Large game studios have specialised build pipeline teams who take care of the pipeline’s integrity and optimisation throughout production, to save as much time as possible during development and to deliver versions as quickly as possible. These teams help game developers focus on their features and streamline the delivery process.
Game engine providers are increasingly trying to support the use of their engines in build pipelines, but they still lag behind the demand compared to the solutions available in, for example, Node.js or native app development.
So, even though setting up a build pipeline requires some effort and the solutions offered by the game engine providers are not ideal yet, you will notice the benefits after the third build, at the latest. In addition, you get this cosy feeling that someone “professional and confident” created these builds.
My rule of thumb therefore is: as soon as you have made, or expect to make, more than three releases – or have to serve more than one platform – set up a build pipeline!
Senior Developer

Olaf has been part of Endava for more than 13 years, with his focus being on game development. As a universal game development whizz, he is comfortable working with different platforms, like standalone systems, Nintendo DS, television, or mobile, and frameworks, among them Unity, React, Cocos, Android, and iOS – and the associated variety of programming language skills. Olaf is proficient in a diverse range of game development areas, including UI, workflow, gameplay, integration, and maintenance. Besides his development work, Olaf spends his free time with his family and dog, playing games – digital and analogue, indoors and outdoors – as well as learning to understand the Berlin dialect.
11 July 2023
Boost Your Game’s Success with Tools – Part 2