
Software Engineering | Olaf Spiewok |
04 January 2022

In part 1 of this article, we considered the general concept of a build pipeline and its advantages, and we learned why build pipeline solutions that are standards in other development disciplines don’t work as well in game development.

Now, let’s take a look at how game developers can use a build pipeline in their projects.

WHAT BUILD PIPELINE OPTIONS DOES A GAME DEVELOPER HAVE?

There are many different tools and solutions to help game developers with their builds. Let’s take a look at some of the options I have experience with. Of course, other options could be mentioned as well, like Plastic SCM, but for this article I will stick to the ones I have used or am currently using.

CD server systems (Jenkins, TeamCity, etc.)

Once installed and configured, these types of CI/CD systems are in my opinion the best and “easiest” solution in game development. The hardware is in your own hands and can be adapted, updated, and maintained accordingly. You can check for errors and analyse them directly on the system, which – with the appropriate number of nodes – can scale well. However, the maintenance effort and the necessary adaptation in case of system/engine updates need to be emphasised as well, as this is a recurring effort that should not be underestimated. Apart from that, such systems can easily be transferred to and re-used for other projects.
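On such a system, a Jenkins or TeamCity job typically does little more than invoke the engine’s command-line build on a node. As a rough sketch of what such a node script might look like for Unity – the editor path, project path, and `BuildScript.Build` entry point are hypothetical placeholders, while the CLI flags are Unity’s standard batch-mode arguments:

```python
import subprocess

# Hypothetical paths/names -- adjust to your project and node setup.
UNITY = "/opt/unity/2021.3/Editor/Unity"
PROJECT = "/var/jenkins/workspace/my-game"

def unity_build_cmd(build_target: str, output: str) -> list[str]:
    """Compose a headless Unity build invocation for one target platform."""
    return [
        UNITY,
        "-batchmode", "-nographics", "-quit",   # run headless, exit when done
        "-projectPath", PROJECT,
        "-buildTarget", build_target,           # e.g. "Android", "StandaloneWindows64"
        "-executeMethod", "BuildScript.Build",  # your own static build entry point
        "-logFile", f"{output}.log",
    ]

def run_build(build_target: str, output: str) -> int:
    """Run the build and return Unity's exit code (0 = success)."""
    return subprocess.call(unity_build_cmd(build_target, output))

# Example: the command a Jenkins stage would run for the Android variant.
cmd = unity_build_cmd("Android", "/var/builds/my-game-android")
```

Because the whole build is a single command per platform, adding a node or a platform to the pipeline mostly means duplicating one job definition.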

Engine build systems (Unity Cloud Build, Lumberyard Waf Build, etc.)

Many engine vendors provide their own build pipeline solution for their engine. Unfortunately, this often comes with extra costs, so many teams refrain from using these solutions for indie or small projects. Furthermore, the in-house solutions are usually poorly extensible or designed only for certain target platforms. So if, for example, you want to push the package to a store after building it or extend the build process with additional pre- or post-build steps, this is simply not possible. These solutions offer little more than the build process and the creation of the package itself.

Custom solutions with remote server nodes

Most of the time, rented remote machines make very good build nodes and can be scaled easily. You simply build your setup on an externally hosted system and upgrade or downgrade the nodes as needed. Unfortunately, there are three big problems with this solution:

  • Running costs charged by the provider for top-tier machines
  • Rather mediocre network throughput of the nodes, which makes loading the data difficult due to access and provider restrictions
  • Internal security concerns, since the data and, if applicable, credentials are stored openly on the external systems


Despite these disadvantages, custom solutions are useful if you maintain them properly and if they are not too specific.

CI solutions based on OS virtualisation (Bitrise, GitLab CI, Docker, etc.)

Bitrise and GitLab CI spin up completely fresh nodes to start the build process, on which you have to install your engine, its licence, and additional platform-specific tools. This setup is added on top of the actual build process – which can take 20 minutes or longer – with every build job, and that gets annoying pretty quickly.

Bitrise has been trying to find solutions together with Unity since 2017, but they are still struggling with it. Unreal also doesn’t leave a good impression in this regard due to its C++ framework, as build times quickly explode.

Docker can be useful in certain cases but can get out of hand when serving multiple platforms. Each combination of target platform and engine version needs its own image, which quickly results in a gigantic image fleet.
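The combinatorial problem is easy to see: one image per engine version per target platform. A toy illustration – the version and platform lists below are invented for the example:

```python
from itertools import product

# Hypothetical matrix: each engine version x platform pair needs its own image.
engine_versions = ["2020.3.42", "2021.3.16", "2022.2.1"]
platforms = ["android", "ios", "webgl", "windows", "linux"]

image_tags = [f"unity-{v}-{p}" for v, p in product(engine_versions, platforms)]

# 3 versions x 5 platforms already means 15 images to build, store, and update.
print(len(image_tags))
```

Every engine upgrade adds a whole new row to that matrix, and old rows have to be kept around as long as any project still builds against them.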

EVERYTHING GOOD ALSO HAS ITS DOWNSIDES

Unfortunately, creating a good build pipeline also has some negative points that can put a damper on the initial euphoria.

A build pipeline requires maintenance

Besides the time needed for the initial setup of all platforms, each new engine version, plug-in, or system update requires some maintenance time. Depending on the number of platforms and nodes involved, this can take several days or even weeks. Finding and fixing bugs on a dedicated system is usually very difficult, and you can easily spend several days reconciling the build logs with Stack Overflow.

Maintenance can be time-consuming and unexpected, but it’s only necessary for one setup, so it is still worth the effort considering the benefits. Just imagine having to maintain all individual build variants for each developer or tester – suddenly, maintaining your build pipeline doesn’t look that bad, right?

A build pipeline requires additional internal costs 

Depending on the build pipeline system and engine, additional costs may include:

  • Provisioning costs for 1–x nodes (Mac/Windows/Linux machine)
  • License costs for the respective nodes
  • Monthly costs for the CD platform itself
  • Maintenance and operating costs for your own nodes


These additional expenses may pay off for large projects or companies with many application areas but might be unnecessary overhead for indie or one-off projects.

Looking at all these potential costs may lead you to want to skip build pipelines for small-scope projects. However, having a developer do the work necessary without a build pipeline multiple times a week will probably amount to similar costs.

A build pipeline can raise security issues

In a company where the data and credentials are highly sensitive, an external build pipeline can be ruled out from the start. To make matters worse, using a shared build account might not be in compliance with the company’s code of conduct. Those obstacles can reduce a fully automated system to a bare minimum that is safe but hard to maintain – for example, one restricted to internal-level access only, or a non-standardised, highly customised self-made system.

The issue of security might result in hard discussions between your IT security team and the developers – but you should not bypass or ignore those discussions! They allow you to make your workflow safe and compliant within your environment.

Manual steps can be required

Some plug-ins or tools might require manual steps which you cannot integrate into an automated build pipeline. Steam, for example, requires you to enter a five-character Steam Guard code as soon as you want to upload a build from a new environment. Unity needs to configure some internal packages before you can actually use them, which means you have to “open” Unity at least once on the system you are using. Solving these problems in a build pipeline can be very frustrating – and might even be impossible in some cases.
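The Steam case is a good illustration. An automated upload usually wraps `steamcmd` as sketched below – the account name and VDF script path are placeholders – and on a brand-new machine the `+login` step stops and asks for the Steam Guard code interactively, which is exactly the manual step that breaks full automation until the machine has been authorised once:

```python
import subprocess

# Placeholders -- use your own Steam build account and app build script.
STEAM_USER = "builder_account"
APP_BUILD_VDF = "scripts/app_build.vdf"

def steam_upload_cmd(user: str, vdf: str) -> list[str]:
    """Compose a steamcmd invocation that uploads a build described by a VDF script."""
    return [
        "steamcmd",
        "+login", user,          # on a new machine this prompts for the Steam Guard code
        "+run_app_build", vdf,   # run the app build script (upload depots, set branch)
        "+quit",
    ]

def upload(user: str, vdf: str) -> int:
    """Run the upload and return steamcmd's exit code."""
    return subprocess.call(steam_upload_cmd(user, vdf))

cmd = steam_upload_cmd(STEAM_USER, APP_BUILD_VDF)
```

Once a node has been authorised interactively, subsequent uploads from it run unattended, so the manual step is per-machine rather than per-build.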

The good thing is that at this point, you will know where such issues might appear and which may have to be solved manually.

SUMMARY

Setting up a build pipeline always makes sense in the long run. Since you can easily use the pipeline over several projects, the initial effort is minimised with each additional project and a general standard can be established. As soon as the very first code of a project exists, you should set up a build pipeline.

Even if you spend some time setting up and maintaining your build pipeline, the overall time saving still is significant. For example, if I were to individually build all variations for one of our large game projects, I would have to spend around 8 hours! Having multiple nodes in our Jenkins pipeline shrinks the time spent on these tasks to around 2 hours, including building and uploading.
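The back-of-the-envelope maths behind that speed-up is simple: with enough nodes, wall-clock time approaches the length of one variant multiplied by how many rounds the nodes need, rather than the sum of all variants. A rough sketch, with invented per-variant durations matching the numbers above:

```python
import math

# Invented example: 8 build variants of roughly 1 hour each, 4 build nodes.
variant_hours = [1.0] * 8
nodes = 4

serial = sum(variant_hours)  # one machine building everything in sequence
# With equal-length variants, the nodes work in ceil(variants / nodes) rounds.
parallel = math.ceil(len(variant_hours) / nodes) * max(variant_hours)

print(serial, parallel)
```

Real variants differ in length, so the actual gain depends on how evenly the scheduler can pack them onto the nodes, but the order of magnitude holds.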

Large game studios have specialised build pipeline teams who take care of the pipeline’s integrity and optimisation throughout production, saving as much time as possible during development and making it possible to deliver versions quickly. These teams help game developers focus on their features and streamline the delivery process.

Game engine providers are increasingly trying to support build pipelines, but they still lag behind the demand compared to the solutions available in Node.js or native app development, for example.

So, even though setting up a build pipeline requires some effort and the solutions offered by the game engine providers are not ideal yet, you will notice the benefits after the third build, at the latest. In addition, you get this cosy feeling that someone “professional and confident” created these builds.

My rule of thumb therefore is: as soon as you have made or have more than 3 releases in front of you – or have to serve more than one platform – set up a build pipeline!

Olaf Spiewok

Senior Developer

Olaf has been part of Endava for more than 13 years, with his focus being on game development. As a universal game development whizz, he is comfortable working with different platforms, like standalone systems, Nintendo DS, television, or mobile, and frameworks, among them Unity, React, Cocos, Android, and iOS – and the associated variety of programming language skills. Olaf is proficient in a diverse range of game development areas, including UI, workflow, gameplay, integration, and maintenance. Besides his development work, Olaf spends his free time with his family and dog, playing games – digital and analogue, indoors and outdoors – as well as learning to understand the Berlin dialect.

 
