
Software Engineering | Olaf Spiewok |
16 November 2021

Imagine the following everyday scenario in the life of a game developer:

A sprint has finished, and a milestone must be delivered. You were asked to provide the deliverables for the various platforms, but since it is the end of the sprint, some bugs need to be fixed, documentation requires maintenance, and various meetings are coming up, of course. You try to delegate some of the builds to someone within your team, but they lack the latest credentials or build requirements, are otherwise occupied, or are on vacation. At the end of the day, you have spent all your time building the deliverables yourself and had to work overtime. Hopefully, you did everything right and configured every build setting correctly. Unfortunately, the builds took up 100% of your computer’s capacity, so every other task had to wait. What a great day…

I guess every developer has experienced this situation many times – whether they are game, app, or web developers, producers, or testers. What an uncomfortable situation – and a waste of time for everyone.

If only there were something which could take care of this automatically, something which could handle the repetitive, time-consuming process… if only there were something like a build pipeline!

To make things easier in the future, let’s spread some general knowledge about this topic and find out what makes a game development build pipeline so different. First, we’ll take a brief look at why you need a build pipeline, something that may be taken for granted these days in software development, then address the specifics for game development, and finally discuss some technological options for setting up a build pipeline for games.


Generally speaking, a build pipeline is an automation setup which allows you to streamline the development, build, and release processes of your projects. It consists of a CI (Continuous Integration) phase and a CD (Continuous Delivery & Deployment) phase. It ensures that every build is done in a reliable, repeatable way and generates a potentially shippable product.

Flow chart based on the one provided by Red Hat
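To make those phases concrete, here is a minimal, engine-agnostic sketch of what a pipeline run boils down to: an ordered list of stages with fail-fast behaviour. The stage names and functions are illustrative, not taken from any specific CI product.

```python
# Minimal sketch of a CI/CD pipeline: ordered stages, fail-fast.
# Stage names and bodies are placeholders, not tied to any CI product.

def checkout():  return True   # fetch sources
def build():     return True   # compile the game
def test():      return True   # run automated tests
def package():   return True   # create the release artifact
def deploy():    return True   # upload to a store/test channel

STAGES = [checkout, build, test, package, deploy]

def run_pipeline(stages=STAGES):
    """Run each stage in order; stop at the first failure."""
    completed = []
    for stage in stages:
        if not stage():
            return completed, stage.__name__  # name of the failed stage
        completed.append(stage.__name__)
    return completed, None
```

Real CI products express the same idea declaratively (stage lists in a YAML file), but the fail-fast ordering is the common core.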


The benefits for development teams could fill a huge list, so we’ll highlight the ones that have the highest impact on game development, independently of the build environment and platform:

Everyone can create any release version at any time

It saves a lot of communication time if everyone knows how to use the “big red build button” and what parameters they need.

Human errors can be minimised 

Since you might change platforms or configuration options (like version details or compile flags) whenever you need a build for a different platform, mistakes can happen all too easily. Especially when everyone is working on different features, this can create a lot of unwanted side effects. Everyone in the team needs to have identical platform settings (same Xcode, same system, same Android SDK, same libraries, etc.) or must change them for a specific release build. In a project where 5+ platforms need to be served, every manual action becomes critical.

Credentials and user access can be restricted

You may want to restrict access within your team, so that not everyone can upload, change, or view all platforms. Credentials might change over time, or two-factor authentication might be required on some of the platforms, based on system properties. A build pipeline can be configured and authorised with a dedicated build account, which is then the only account that requires all the credentials to access the various platforms. Therefore, no team member needs to have full access to everything or to be informed of all changes. The build pipeline will simply work for them.
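One way to sketch this: the build account is the sole credential holder, jobs read secrets injected into the agent’s environment, and any attempt to run elsewhere fails loudly. The variable name is hypothetical.

```python
# Sketch: only the build agent's environment carries store credentials,
# injected by the CI system. The variable name below is made up.
import os

class MissingCredential(Exception):
    pass

def get_credential(name):
    """Fetch a credential injected into the build agent's environment."""
    value = os.environ.get(name)
    if not value:
        raise MissingCredential(
            f"{name} is not set - this step must run on the build account")
    return value

# e.g. get_credential("STEAM_UPLOAD_TOKEN") inside the Steam upload step
```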

Build time and resources can be reduced to a minimum

Everyone simply uses the “red button” build pipeline, which will address a set of build nodes and thus create the requested builds. Apart from some maintenance, everyone can keep working on the features and topics planned for them.

Tracking and analytics can be applied

Creating a build pipeline in an early project phase allows you to inspect and analyse changes of the release packages over time. When did a feature break the build process? What features increase the build duration or release package size and why? Analysing these changes can streamline your releases. For example, if an app store requires your product to have a specific size, an alert can inform you when this limit is exceeded.
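A size alert like the one described could be a simple post-build check; the limit used in practice would come from the store in question, and the numbers here are only examples.

```python
# Sketch of a post-build size check: compare the release package against
# a store limit and flag the build. Limit values are examples, not quotas.
def check_package_size(size_bytes, limit_bytes):
    """Return (ok, message) for a built release package."""
    if size_bytes > limit_bytes:
        return False, (f"Package is {size_bytes - limit_bytes} bytes over "
                       f"the {limit_bytes}-byte store limit")
    return True, "Package size within limit"
```

A pipeline step would run this after packaging and post the message to the team chat or fail the build.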

Platform-specific requirements can be handled

Developing for multiple platforms is difficult since you have to make sure that every new feature you add does not break on specific platforms. Not every developer can test every platform variant for every kind of feature or adapt their development configuration to fit the actual build requirements. A build pipeline can inform you when a new feature might break the deployment process, and a tester can then easily verify the changes.

Ensure a clean, complete, and sanitised project integration

Different types of tests from the testing pyramid – post-commit testing, merge-request testing, integration or unit testing – can run at different phases of the build and release process and be used to catch expected errors early. Missing files which a developer might have forgotten to commit can be identified and corrected. Having someone with a “clean workspace” rebuild the product using the build pipeline will allow you to sanitise your product in every development phase.


In my experience, game developers and game engine frameworks handle things quite differently from other software development disciplines. Setting up a build pipeline for games should therefore also be considered in its own right.

Before today’s common solutions existed, every platform and every developer had to build their own tooling, regardless of whether that concerned game, standalone application, mobile, console, or web development. Roughly 20 years ago, such solutions were often based on Cygwin and shell or bash scripts, which simplified the manual processing a bit.

Around 10 years ago, the first automated server solutions like Jenkins, along with tools like Docker, made their way into the industry. Based on those ideas, online services like Bitrise, BuildBuddy, Azure Pipelines, and GitLab CI appeared. While, for example, Node-based developers could now easily choose which kind of build system they wanted to use, game developers still had to wait for the game engines to catch up. The services that were available back then could not handle the complexity and requirements of games, including the various platforms they supported.

Let’s take a closer look at some of these complexities that come with game engines like Unreal, Unity, Lumberyard, etc.

Game engines are not available for every platform

Not every engine can be used on every machine. For example, a combination of Linux with tools like Docker might not be supported by all engines, or only with poor performance (e.g. Unity). This rules out many server-based build solutions, and if you do find a solution, it will often be home-brewed or self-made.

Game engines can have license restrictions

Frameworks like Gradle, Node, native apps, etc. are easily accessible through their open structure, which you can integrate into your build pipeline with plug-ins or shell commands. Game engines, however, require installation packages, which you can only get after registration, or you need to register upon game engine execution. If two-factor authentication or a license key is necessary as well, additional effort and coding is required.

Game engines need graphics hardware

Since many engines need to rebuild shaders, lighting information, or textures within their build process, you need graphics hardware (GPU) for your builds. Otherwise, build time will suffer if you only use CPU power, considering that GPU rendering is 50 to 100 times faster than CPU rendering! Many server-based solutions do not supply any graphics hardware, so that only a “full-spec machine” will work here.

Game engines have huge repositories

While most developers deal with repositories smaller than 5 GB, game repositories can reach hundreds or even thousands of gigabytes. Textures, audio files, binaries, lighting data, and other assets are part of the game and therefore required for the game build. This means that you need a good network connection and bandwidth in the build environment as well as enough space for temporary compilation data. I still remember the day when we moved a big Unity project to a different Jenkins environment and the IT team broke into a sweat because of the project size and checkout times.

Game engines have long compilation times

Depending on the platform your game is targeting, the engine needs to recompile all code via a common intermediate layer that the target platform supports. This results in multiple conversion steps, which increase the compilation time dramatically. For example, if you build an iOS app in Unity, Unity first transforms its code to C++, which is then included in an Xcode project and recompiled into the iOS application code – this can take 2 hours or more. Even with “preconfigured systems”, this intermediate step makes fast development iterations impossible.
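The two-step iOS build described above could be sketched as command construction for a pipeline step. The project path, the `BuildScript.ExportIOS` method, and the scheme name are assumptions for illustration; the flags themselves follow Unity’s and xcodebuild’s documented command-line interfaces.

```python
# Sketch of the two-step iOS build: Unity exports an Xcode project,
# then xcodebuild compiles it. Paths, the ExportIOS method, and the
# scheme name are illustrative assumptions.
def unity_export_cmd(project_path):
    return ["Unity", "-batchmode", "-quit",
            "-projectPath", project_path,
            "-buildTarget", "iOS",
            "-executeMethod", "BuildScript.ExportIOS",  # hypothetical method
            "-logFile", "-"]

def xcode_build_cmd(xcode_project):
    return ["xcodebuild", "-project", xcode_project,
            "-scheme", "Unity-iPhone",
            "-configuration", "Release", "build"]

# A pipeline step would run each with subprocess.run(cmd, check=True);
# on a large project, each step alone can take from minutes to hours.
```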

Game engines have huge release packages

Games love to consume your hardware storage. Taking up 100 GB or more (Call of Duty: Modern Warfare = 200 GB+) is not unusual, and I guess all of us remember the good old CD game collections where you needed 3+ CDs just to install a game. Archiving the 5 latest builds for just one platform can consume so much space on your storage drive that no one wants to pay for it. Contained assets, engine libraries, and additional downloadable assets (AssetBundles) combined with multiple version and platform support are a nightmare for any storage endpoint.
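A common mitigation is an archive-rotation step that keeps only the newest few builds per platform. This is a sketch of the selection logic only, with illustrative data; deletion itself would be a separate, carefully guarded step.

```python
# Sketch of an archive-rotation policy: keep only the `keep` newest
# build archives per platform to cap storage costs.
def archives_to_delete(archives, keep=5):
    """archives: list of (timestamp, path) tuples.
    Returns the paths of everything older than the `keep` newest."""
    newest_first = sorted(archives, key=lambda a: a[0], reverse=True)
    return [path for _, path in newest_first[keep:]]
```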

Game engine distribution platforms require various authentication methods

Games and mobile applications pose the same challenges when it comes to delivering their packages to the corresponding test/release channels. You need to get access to each individual platform’s store and pass credentials and authentication levels to upload your build there. If you have a game which needs to be delivered to Steam, WebGL, Google, iTunes, standalone FTP, etc., the credential management gets insane.

Game engines cannot benefit from CI jobs

It is always a good idea to allow only “working” code to be pushed to the servers. Therefore, systems like GitLab CI allow you to create jobs which sanitise your code and test its correctness. In game development, however, you have a huge engine footprint which no CI system can easily handle, making “fast iterations” nearly impossible. A build for every developer commit is of little use when each build needs more than an hour to be verified or created – no one wants to wait minutes or hours for a confirmation of their changes. Using these jobs in game development will slow down the process and create chaos within your workflows.

Congratulations! If you have made it that far through this article, I hope you now have a strong urge to set up a build pipeline for your project as well. But, sorry to say, part 1 of our examination is over... In part 2, we will discuss the build pipeline options available for game developers and some other things you should know about setting up your build pipeline, so stay tuned!

Olaf Spiewok

Senior Developer

Olaf has been part of Endava for more than 13 years, with his focus being on game development. As a universal game development whizz, he is comfortable working with different platforms, like standalone systems, Nintendo DS, television, or mobile, and frameworks, among them Unity, React, Cocos, Android, and iOS – and the associated variety of programming language skills. Olaf is proficient in a diverse range of game development areas, including UI, workflow, gameplay, integration, and maintenance. Besides his development work, Olaf spends his free time with his family and dog, playing games – digital and analogue, indoors and outdoors – as well as learning to understand the Berlin dialect.

