Rethinking CTRM – Part Four

Read Part Three

In part 4, we will look at releasing the software into a production environment and keeping it current. In the previous blogs in this series, we described the architecture and methodologies that help us build systems that enable strong revenue growth and capture the data that is important to the business. However, all of this is meaningless if the software cannot be released into a production environment in a timely manner.

Commodity trading is a dynamic, fast-moving and often changing business that demands agility from business processes and systems in order to thrive. This means that the CTRM and related software must change if and when the business processes change. The modelling techniques discussed in the earlier blogs in this series enable us to change the software efficiently when the domain changes. To ensure that the ability to release software is not a bottleneck, Adaptive employs DevOps practices and applies the principles of continuous delivery – but what is continuous delivery and how does it work?

Continuous delivery

At its heart, agile software development relies on regularly iterating and improving through short feedback cycles. Continuous delivery extends this principle: each completed piece of work should be fit to release to production. That is not to say that every change is released immediately, but each must be releasable if needed. To achieve this, we build a pipeline that all code must pass through before being released. If the code fails at any stage, it progresses no further and must be fixed.
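As a sketch, such a pipeline can be modelled as an ordered series of stages, each of which must pass before the next runs. The stage names here are illustrative; a real pipeline would invoke build tools and test suites at each step:

```python
# Illustrative deployment pipeline: each stage must succeed before the
# next runs; a failure stops the candidate build progressing any further.
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]

def run_pipeline(stages: List[Stage]) -> bool:
    for name, check in stages:
        print(f"Running stage: {name}")
        if not check():
            print(f"Stage '{name}' failed - build goes no further")
            return False
    print("All stages passed - build is releasable")
    return True

# Hypothetical stages; real checks would shell out to compilers and test runners.
stages = [
    ("compile", lambda: True),
    ("unit tests", lambda: True),
    ("integration tests", lambda: True),
    ("deploy to staging", lambda: True),
]
run_pipeline(stages)
```

The key property is that a build which fails any stage never becomes a release candidate.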

The more automation we have in that pipeline, the smoother the process and the more regularly we can release. Often, some manual steps remain amongst the automation; for example, manual testing before releasing, or a code review before integrating the code into the main branch.

While it may seem counter-intuitive to release new functionality to production with little or no manual testing, the reason releases in many companies are ‘scary’ is that they are not practiced often enough and contain too many changes. Releasing more frequently ensures that we practice the process regularly, and it keeps each release small enough to be well understood and to be rolled back or corrected if issues are found. More regular releases are highly correlated with better software delivery performance. Research into software delivery has found that delivery teams can be categorized into high, medium and low performers: the high performers were able to deploy on demand – multiple times per day in many cases – and had shorter lead times to release, lower mean time to recover, and a lower failure rate of changes (Forsgren, et al., 2018).

This is not necessarily a causal relationship; simply releasing more often will not reduce your failure percentage. Even so, those organizations that have been able to successfully apply the principles of continuous delivery have reaped the benefits in terms of more regular and successful releases.

The high performers had a 70/30 ratio of new work to unplanned work and rework compared with a 58/42 ratio for the low performers. The same research found that software delivery performance translated into better organizational performance. The high-performing groups were twice as likely to exceed goals in profitability, market share, and productivity compared with companies in the low-performing group, those that released software less frequently than once per month.

Given the pace and number of changes that occur in a commodity business, many of which are essential for regulatory or other reasons, this delivery agility can be a key differentiator.

Unfortunately, many commercial CTRM software vendors are forced to consider not just their current customers but also their generic target market when planning their forward delivery programs. With limited resources, they are constantly juggling requirements, bug-fixes and needs – not just to keep their customers happy but also to ensure future sales. Often, this translates into large, complex releases delivered a small number of times a year. Testing is often inadequate as well, meaning that the quality of the releases may be suspect – which, given the statements above, might be tolerable if the releases were regular and incremental, but they are usually massive releases delivered only periodically. Adaptive’s approach can, we believe, add agility and competitiveness to an organization above and beyond that of a commercial package supplier.

Deployment automation and use of cloud resources

One of the steps in the pipeline described above is automated deployment.

It is important to distinguish between deployment and release. Deployment is the act of installing and configuring software artefacts on a server. Release is making those artefacts live and enabling users to connect to them. Software must be deployed before being released, but deployed software may never be released. It follows that deployment itself carries nearly zero risk. Techniques like blue-green deployment mean we can deploy to a production server at any time but delay the release until we are ready to move users to the new functionality. The release itself may consist of nothing more than changing DNS entries.
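The separation of deployment from release can be sketched as follows. This is a simplified model, not a real traffic router: the two environment names and the version strings are purely illustrative, and in production the "switch" would typically be a DNS or load-balancer change:

```python
# Sketch of blue-green deployment: two identical environments exist;
# deployment installs the new version on the idle one, and release is
# the traffic switch that makes it live.
class BlueGreen:
    def __init__(self):
        self.versions = {"blue": None, "green": None}
        self.live = "blue"  # where user traffic currently points

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        # Deployment: install artefacts on the idle environment.
        # Users are unaffected, so this step carries near-zero risk.
        self.versions[self.idle] = version

    def release(self):
        # Release: the DNS/router switch that moves users over.
        self.live = self.idle

router = BlueGreen()
router.deploy("v2.0")   # v2.0 sits deployed but not released
router.release()        # users now hit v2.0; the old environment remains
                        # available for a quick rollback
```

Note that rollback is just another switch back to the previous environment, which is why small, frequent releases are easy to correct.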

Cloud services make this model easier to achieve. Servers become fungible, simply built from images when needed. A production release can become as simple as deploying the new functionality onto hardware alongside the current live version, switching DNS to the new servers when we want to release, and then stopping the old servers once we are confident the release is successful. Modern containerization and container orchestration solutions, such as Docker and Kubernetes respectively, mean that we can perform the same process very quickly and at a finer-grained level than a physical server or virtual machine.

All deployments and releases must go through the pipeline

If we have built a deployment pipeline for production releases, then we must use the same pipeline for releases to test environments. The artefacts that have been tested must be exactly what is deployed and released to production. Again, cloud services make this simpler. We can provision test servers or containers of a suitable specification and perform the testing. Once testing is complete, the same artefacts are easily deployed to production servers, ready for release.
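One common way to enforce "test what you release" is to build the artefact once, record its checksum, and refuse to deploy anything that does not match. This is a minimal sketch of that idea; the build step and environment names are placeholders:

```python
# Sketch of artefact promotion: build once, then deploy the identical,
# checksummed artefact to each environment in turn.
import hashlib

def build(source: bytes):
    artefact = source  # stand-in for a real compile/package step
    return artefact, hashlib.sha256(artefact).hexdigest()

def deploy(env: str, artefact: bytes, expected_checksum: str) -> None:
    # Refuse to deploy anything other than the tested artefact.
    if hashlib.sha256(artefact).hexdigest() != expected_checksum:
        raise ValueError(f"artefact for {env} does not match the tested build")
    print(f"deployed to {env}")

artefact, checksum = build(b"source code")
for env in ("test", "staging", "production"):
    deploy(env, artefact, checksum)
```

Because every environment receives the same bytes, a rebuild between testing and production (a common source of "works in test" failures) is ruled out by construction.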

If we build a deployment pipeline and automated tooling but don’t use them for releases, they will degrade over time.
The broken windows theory in criminology suggests that visible signs of crime encourage more, often serious, crime over time. If an area looks run down and neglected, it is likely to be lightly policed and seen as a good target for further crime. Fixing broken windows contributes to the view that an area is well maintained.

A failing test in our deployment pipeline is analogous to a broken window. If it prevents us deploying, then it will be fixed quickly, and our environment stays well maintained. If we don’t use the pipeline, then the failing test has no consequences and remains unfixed. This is a sign that our environment is run down; soon more tests fail and eventually the pipeline becomes useless.

It takes significant effort to build and maintain a deployment pipeline like this, but the rewards are considerable. A team at HP applied techniques like those described here over the course of three years (Humble, et al., 2015). In that period, they went from having no automated tests to spending 23% of time writing and maintaining their tests. The payoff was that they reduced product support from 25% of time to 10%, manual testing from 15% to 5%, and increased innovative development from ~5% of time to ~40%.

Again, while many CTRM vendors talk about their solutions being available in the cloud, very few are actually delivering multi-tenanted, cloud-native applications, and so they are unable to deliver many of the advantages discussed above.


The seminal book on Domain Driven Design (Evans, 2003) is subtitled ‘Tackling Complexity in the Heart of Software’. The complexity here is not technical complexity but modelling the business domain.

Software projects that focus on technology to the detriment of modelling the business domain are doomed to failure. While the technology approach described here uses relatively novel techniques, they are a natural fit for describing business processes in software, and we have seen great success using them in both the financial and commodity industries. All but the smallest of companies are software companies to some degree, and many of them follow complex business processes. We cannot remove business complexity with software, but where it exists, we can use the techniques described here to build software models that enable innovation and allow companies to gain a competitive advantage and maximize profitability.

In the fast-moving and complex world of commodity trading and risk, the ability to build and deploy solutions on a timely basis – solutions that meet the needs and requirements of the business while providing business agility – is paramount. In the past, this has not necessarily been the case, with large, monolithic, highly customized solutions being only partially deployed and used. This need not be the case now or in the future: Adaptive knows how to build, deploy and support real-time solutions in a high-speed, constantly evolving world.

Read Part One

Read Part Two

Read Part Three

Read Part Four



Matt Barrett

CEO and co-founder,
Adaptive Financial Consulting

