Visa’s acquisition of Plaid has set the stage for even more significant M&As within the FinTech world this decade. While M&As often facilitate rapid growth, they can also pose risks to development productivity and software scalability, as well as challenges to knowledge transfer.
Addressing these risks during due diligence and post-merger integration is of paramount importance and requires in-depth software assessments. To make the right decisions about a software system, a full picture is required, including risks that are not directly visible in the source code.
FinTech firms would do well to adopt a risk-assessment-based process to deal with these issues without delay. Over the last 20 years, I have created several award-winning techniques for detecting ‘anti-patterns’ in the evolution of a project’s codebase, and I have helped many clients apply them in risk assessments. When doing so, it is also important to understand the behaviour of the team that developed the codebase. I achieve this by analysing the information stored in version control systems (such as Git and SVN), evaluating who changed what, when, and why.
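To make this concrete, the ‘who changed what, and when’ data can be extracted from a plain `git log --numstat` dump. The sketch below is a minimal illustration rather than any specific tool: it assumes commits are printed with a `--%an` author header (a format chosen here purely for easy parsing), followed by numstat lines.

```python
from collections import Counter

def parse_git_numstat(log_text):
    """Parse `git log --numstat --format=--%an` output into
    per-(author, file) change counts."""
    changes = Counter()
    author = None
    for line in log_text.splitlines():
        if line.startswith("--"):
            # A line like `--alice` marks the start of a new commit.
            author = line[2:].strip()
        elif line.strip() and author:
            # Numstat lines look like: added<TAB>deleted<TAB>path
            parts = line.split("\t")
            if len(parts) == 3:
                changes[(author, parts[2])] += 1
    return changes

# Hypothetical log excerpt for illustration.
SAMPLE = """--alice
3\t1\tsrc/payments.py
--bob
10\t2\tsrc/payments.py
1\t0\tsrc/utils.py
"""

counts = parse_git_numstat(SAMPLE)
```

Running this over a real repository’s history gives a first, rough map of which developers concentrate their changes in which parts of the system.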
Of course, code quality analysis tools can provide some oversight, although not all programming languages are equally well covered. I’ve found that by taking a technology-agnostic approach, a company can not only obtain a holistic picture of a codebase, including configuration and other files, but also detect relevant quality issues for practically any programming language.
As you might expect, it’s important to identify high-risk and high-effort tasks throughout this process. By highlighting areas of the code that carry higher risk or attract more defects than others, companies can get a clearer idea of what in their system needs work and what doesn’t.
Five things to look out for include:
1. Tasks with abnormally large code impact: Tasks that repeatedly require massive code changes scattered across the system, which incur a higher cost of retesting.
2. Knowledge polarisation: Cases where one developer does all the work on a particular code area. If that developer leaves the project, the cost of continuing development of that area will be significantly higher than normal.
3. Subtle cross-language dependencies: These cannot be detected with code analysis tools because the dependency is not visible in the code; they only surface when files written in different languages repeatedly change together.
4. Coordination issues: Code areas where multiple developers heavily change the same files concurrently, which makes the merges at the end of a sprint a nightmare. Even worse, developers from different regions (e.g. a team in India and another in Europe) working heavily in parallel on the same code areas can produce error-prone code.
5. Weak tests: Code areas that are apparently well covered by unit tests, while the rate of recent bug-fixing changes remains high. This indicates that, in spite of the good coverage, the code is still error-prone.
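Knowledge polarisation in particular lends itself to a simple check once per-author change counts are available. The sketch below is a hypothetical illustration (the 0.9 threshold is an arbitrary choice for the example, not an established cut-off): it flags files where a single author made nearly all of the recorded changes.

```python
from collections import Counter, defaultdict

def knowledge_polarisation(changes, threshold=0.9):
    """Given (author, file) -> change count, flag files where a single
    author made at least `threshold` of all changes (key-person risk)."""
    per_file = defaultdict(Counter)
    for (author, path), n in changes.items():
        per_file[path][author] += n
    flagged = {}
    for path, authors in per_file.items():
        total = sum(authors.values())
        top_author, top_n = authors.most_common(1)[0]
        if total and top_n / total >= threshold:
            flagged[path] = top_author
    return flagged

# Toy data: alice made 19 of 20 changes to the ledger module.
changes = {
    ("alice", "core/ledger.py"): 19,
    ("bob", "core/ledger.py"): 1,
    ("alice", "api/routes.py"): 3,
    ("bob", "api/routes.py"): 4,
}
flagged = knowledge_polarisation(changes)
```

Here only `core/ledger.py` would be flagged, since ownership of `api/routes.py` is well balanced between the two developers.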
None of these five risks can be detected with traditional code analysis tools alone, because they are simply not visible in the code. They can, however, be detected with an approach that integrates information from all available data sources: source code, source control systems, and task management systems.
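As one example of combining data sources, the weak-tests risk can be approximated by joining commit messages (or linked task types) with the files each commit touched. The keyword pattern below is a crude stand-in for a real link to a task management system, used here purely for illustration.

```python
import re
from collections import Counter

# Crude heuristic: treat commits mentioning these words as bug fixes.
FIX_PATTERN = re.compile(r"\b(fix|bug|hotfix|defect)\b", re.IGNORECASE)

def fix_ratio_per_file(commits):
    """commits: list of (message, [files touched]).
    Returns path -> fraction of its changes that were bug fixes."""
    total, fixes = Counter(), Counter()
    for message, files in commits:
        is_fix = bool(FIX_PATTERN.search(message))
        for path in files:
            total[path] += 1
            if is_fix:
                fixes[path] += 1
    return {path: fixes[path] / total[path] for path in total}

# Toy history: two of the three changes to pay.py were bug fixes.
commits = [
    ("Add payment retry logic", ["src/pay.py"]),
    ("Fix rounding bug in totals", ["src/pay.py"]),
    ("fix null deref on empty cart", ["src/pay.py"]),
    ("Update docs", ["README.md"]),
]
ratios = fix_ratio_per_file(commits)
```

A file with good unit-test coverage but a persistently high fix ratio is exactly the kind of finding that neither coverage reports nor static analysis would surface on their own.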
Essentially, it’s necessary to seek an integrated, balanced, and holistic view of the risks in your system and to assess its overall quality without sacrificing any available information. And as new and exciting M&As loom on the horizon, companies planning to be part of them need to make risk assessment of software systems a priority and implement a cutting-edge software analysis tool to ensure the evaluation is done right.