
Distributed Agile | Thomas Behrens | 05 January 2021


Although agility and distribution are seen by some as contradictory, the number of distributed and agile software development projects continues to increase. In the COVID-19 era we are now faced with teams whose members are all in different locations. The discovery of requirements is particularly affected by this: since the refinement of requirements in agile projects is based on a communication promise made through a user story, geographical distribution cuts right through this communication flow. This raises the question of how this gap can be closed to ensure the team creates the right product.

There are a few mitigations we have gained experience with. I shared in previous articles how using “Domain Knowledge as a Catalyst” and “Favouring Skills over Roles” can help to address this challenge. The third and last part, “Communication Promises, yes, but…!”, explores the appropriate use of established business analysis tools to improve an agile remote working environment – even across hundreds of home offices.


Since the introduction of the Business Analysis practice in 2013, I have been looking after our business analysts at Endava. They often play the role of agile business analysts supporting our client's Product Owner (PO). In most of our engagements, we interact with a PO who is not co-located with the team. In the previous part of this series, I showed how deviating from a strict role model helps to make interaction at this boundary more successful. In this part, I will focus on the tools and techniques of the business analysis practice.

These tools and techniques are defined by a number of industry bodies for our profession. The International Institute of Business Analysis (IIBA), of which Endava is a member, defines knowledge areas into which these practices are grouped, such as “Strategy Analysis” or “Elicitation and Collaboration”. Depending on the context a business analyst works in, the emphasis on the different practices can vary significantly. Someone who creates user stories as part of an elicitation exercise is doing business analysis just as much as someone who works out which payment product should surprise the market next.

How do we know what is required for our projects? Within our business analysis practice, we define a set of “Core Skills” (see previous article for further background) based on a sampling of our projects with business analysis engagements as well as market observations and industry trends. Having determined the set of tools, we need to further ensure we are using this subset of analysis tools and techniques in a way that supports our distributed agile delivery process.


A user story is not a requirement, but it is a communication promise. It is a promise to have a dialogue whenever it is needed. Keeping a verbal communication promise is harder in a distributed working context. Virtually walking to your Product Owner’s desk is harder. Talking about tasks without a whiteboard is harder. Engaging with other team members is harder. We all know it is not impossible, but it requires more effort.

We continuously address some of these challenges by improving our workflows. Such changes can start very simply, for example by announcing your availability or indicating when you are busy in the collaboration tool you use, or by using virtual whiteboard tools effectively. But a key factor is to avoid meetings for verbal explanation when they may not be necessary. How? “Complement the communication promise!”


First things first, we do not want to avoid verbal communication in general. Such an approach would be doomed. To make the most of verbal communication, communication promises can be enhanced with lightweight, temporary documentation. So, when a communication promise is made, ask some forward-looking questions. The responses can be captured with appropriate analysis tools like an entity model, the definition of the context, or by recording some state model for key business concepts.

Does this mean going back to the “good old days” of up-front documentation? Surely not. Let us consider a photography analogy: by moving from analogue to digital photography, we moved from the darkroom to “Lightroom”. We still use techniques like depth of field to emphasise a specific object, though nowadays we have other options to achieve this, like using blurring in your favourite photo editor rather than the aperture of your camera. In the same way, I can use a lightweight domain model to ensure we capture key concepts without having to create a formal model; some years back we might have attempted to solve this with a model-to-code transformation.

Let us not throw out all the established tools in our field but rather use them appropriately for the individual context. The usage of these tools is not black or white; the art – or better, experience – is to find the right level of detail and formalisation in the use of tools and techniques. The following table shows three examples to illustrate the two ends of the scale.


|                       | “Light” – less detailed | “Formal” – more detailed |
|-----------------------|-------------------------|--------------------------|
| Domain Model          | Identification of Domain Concepts and brief description of the most important ones; optionally, key attributes which help people understand the concepts | Detailed description of all Domain Concepts, including attributes, operations, and value range; description of the relationships; definition of synonyms; graphical representation as class or collaboration diagrams |
| Scenarios / Story Maps | Rough structure referring to the Domain Model; descriptions based on cards, keywords and/or bullet point lists | Granular structure; consistent use of (visible) references to the Domain Concepts and their attributes; reference of states and transitions back into the scenarios; use of standardised formatting or even modelling tools |
| Life Cycles           | Identification of a few key states of central Domain Concepts; possibly simple state diagrams mainly used for analysis purposes that are not preserved | State diagrams with (possibly) hierarchically structured states for all important Domain Concepts, including descriptions for the states; transitions being referenced back to use cases or parts of use cases (e.g. alternative flows) |

Table 1 – Level of detail and formalisation
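To make the “light” end of the Life Cycles row concrete, here is a minimal sketch of what such a throwaway state capture might look like. It assumes a hypothetical “Order” domain concept with invented states; the point is to record just enough states and transitions to support a team conversation, not to build a formal model.

```python
# Hypothetical illustration: a "light" life-cycle capture for an
# imagined "Order" domain concept - a few key states and the
# transitions allowed between them, kept only as long as useful.

# Each entry maps a state to the set of states it may move to.
ORDER_TRANSITIONS = {
    "draft": {"submitted", "cancelled"},
    "submitted": {"fulfilled", "cancelled"},
    "fulfilled": set(),
    "cancelled": set(),
}

def is_valid_transition(current: str, target: str) -> bool:
    """Check whether a life-cycle transition is allowed."""
    return target in ORDER_TRANSITIONS.get(current, set())
```

A sketch like this can be pasted into a chat or wiki page to anchor a refinement discussion and then discarded, matching the “not preserved” character of light analysis artefacts.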

We harvest this information and guidance in our own delivery model TEAM (“The Endava Adaptive Model”). In the specific case of agile business analysis tools, we went one step further.

Why re-invent the wheel? There is a popular approach in agile software product development established by Ellen Gottesdiener called “Discover to Deliver” (D2D). We worked with Ellen and integrated this approach into TEAM. The D2D approach provides the essential planning and analysis practices you need to collaboratively deliver high-value products.

D2D uses “7 Product Dimensions” and “Structured Conversations” to understand the product options you have from different angles and understand the value they create. We use this highly visual framework to decide on the appropriate analysis tools we utilise for the dialogue within the team or with the client’s PO. It also helps to decide where to create documentation to provide context, for example, on the small scale for a user story satisfying the “Definition of Ready” or on the large scale for the entire backlog to remain healthy.


By appropriately using existing core business analysis tools, we can effectively support a distributed agile delivery model. This approach complements the agile communication promise effectively and keeps parts of the communication promise through temporary, but relevant, written communication. We use our TEAM delivery model to harvest this experience and share it amongst our teams. This allows our teams to improve their communication and better connect as individuals, even within very widely distributed teams. Thus, it is another building block in closing the gap between the client’s product owner and the team.

Thomas Behrens

Group Head of Analysis

Thomas is Group Head of Analysis for Endava. He has over 25 years of experience in software development which he has gained in various sectors including investment banking, telecommunications, mobile payments and embedded systems. He is focused on setting up distributed agile software development teams and shaping the business analyst force at Endava. When he isn’t doing that, Thomas can be found delivering training or speaking at conferences.

