Rethinking CTRM – Part Two

New Approaches to Design

Commodity Trading and Risk Management (CTRM) and Commodity Management (CM) software is very complex because the business itself is extremely complex, often non-standard, and constantly evolving. To deal with this, the software products devised and marketed by vendors are usually highly configurable by design. Configurability lets a vendor reach a larger group of customers, all with differing requirements, across the industry and hence extend the potential market for the product, ensuring it can be a financial and commercial success – as it must be to be considered a true software product as opposed to a custom solution. One problem historically associated with this approach is that it takes time for the vendor and its customer to develop a shared understanding of the business and the software, and of how best to fit one to the other so that value is achieved and the software can grow with the business. Implementing the software can then prove difficult and risky: decisions made early in a project on the basis of this incomplete understanding can eventually hamper it, resulting in delay, cost and, often, a suboptimal implementation.

The same issues can confront a custom development project. Getting the project underway requires a good understanding of the business, and this understanding has to be communicated somehow between users and developers. A traditional approach involves developing a comprehensive design before commencing development. We have found that a short knowledge-gathering phase at the start of a project – whether a custom development or a vendor implementation – is an excellent way to build this understanding.

EventStorming for Shared Understanding

A technique that Adaptive has used successfully to achieve this common understanding is EventStorming. In an EventStorming workshop, a team comprising as many project stakeholders as possible models the business processes using sticky notes placed on a wall. Each stakeholder adds notes representing things that happen within their own area of expertise, and because all the notes are placed on the same wall, the knowledge of multiple people is combined into a single timeline of the overall business processes.
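
To make this concrete, the sketch below (in TypeScript, with invented event names and fields) shows how the sticky notes from such a workshop typically translate into domain events on a timeline – each note is something that happened, named in the past tense:

```typescript
// Hypothetical domain events that might emerge from a CTRM EventStorming
// session. Each corresponds to a sticky note placed on the timeline.
type DomainEvent =
  | { type: "TradeCaptured"; tradeId: string; counterparty: string; capturedAt: Date }
  | { type: "TradeConfirmed"; tradeId: string; confirmedAt: Date }
  | { type: "ShipmentScheduled"; tradeId: string; vessel: string; laycanStart: Date }
  | { type: "InvoiceIssued"; tradeId: string; amount: number; currency: string };

// The wall itself is simply an ordered timeline of events.
const timeline: DomainEvent[] = [];
timeline.push({
  type: "TradeCaptured",
  tradeId: "T-1001",
  counterparty: "Acme Metals", // illustrative counterparty
  capturedAt: new Date(),
});
```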

Having this ‘common canvas’ allows Adaptive to build a shared understanding of how things interrelate and, while it may seem expensive to involve multiple stakeholders in a workshop, we have found that the level of understanding gained through this approach generates payback in the form of both quicker releases to production and an earlier return on investment. The cost of building the wrong thing, or of implementing software incorrectly, far exceeds the cost of people’s time to ensure that the project builds or implements the right thing.

After an initial whole project kick-off session, EventStorming workshops should be held with smaller, more focused audiences to zero in on specific aspects of the project. It is essential that collaborative domain modelling continues to happen regularly throughout the project as even with powerful techniques like this, the initial model will still require refinement as development progresses.

While most traditional projects commence with data modelling, Adaptive takes the view that modelling the business processes is the right way to start, and that decisions regarding the physical implementation should be postponed. In some instances, Adaptive has employed event-driven databases to underpin CTRM solutions to great effect (more on this in a later blog).

In some cases, this domain modelling may actually reveal the presence of competing business processes. When this occurs, the output of the modelling exercise may not be software at all, but changes to the business process. We can neither implement nor develop contradictions, so we must reconcile the competing processes before attempting to create a model in software. But what is domain modelling anyway?

Domain-Driven Design

Domain-driven design (DDD) is not a development methodology per se, but rather a way of performing analysis and design that allows the business domain to be modelled as it is understood by the people working in the business – often known as subject matter experts (SMEs) – and then captured in a software solution. It is neither possible nor desirable to model the entirety of what the business does, but that doesn’t mean a model isn’t useful; we should strive for maximum utility rather than trying to model every nuance of how the business works.

As stated above, the first iteration of any model will never be complete, and it may initially model certain concepts in a naïve manner. If it can deliver some value, however, then it is a candidate to be released to production and iterated upon. The model must be malleable: as we discover new insights, we must be able to incorporate them into the model with the minimum of effort. So, while DDD is not a development methodology, it does have an impact on the architecture and development techniques used to build software. To apply DDD successfully, infrastructure-layer concerns, such as dealing with databases or networks, and application-layer concerns, such as coordinating requests from the UI, must be separated from the domain model. The domain layer should contain only business logic, so the model can evolve without developers having to be concerned with complexities external to the domain.
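
As a minimal sketch of this separation – all names and rules here are invented for illustration – the domain layer below expresses a business invariant with no knowledge of databases or networks, while the application layer coordinates a request around it:

```typescript
// Domain layer: pure business logic, no infrastructure dependencies.
class CreditLimit {
  constructor(private readonly limit: number, private exposure: number) {}

  // A business invariant, expressed without any infrastructure concerns.
  canAccommodate(orderValue: number): boolean {
    return this.exposure + orderValue <= this.limit;
  }
}

// A port: the domain declares what it needs; infrastructure supplies it.
interface CreditLimitRepository {
  forCounterparty(id: string): Promise<CreditLimit>;
}

// Application layer: coordinates the request, delegating rules to the domain.
class PlaceOrderHandler {
  constructor(private readonly limits: CreditLimitRepository) {}

  async handle(counterpartyId: string, orderValue: number): Promise<boolean> {
    const limit = await this.limits.forCounterparty(counterpartyId);
    return limit.canAccommodate(orderValue);
  }
}
```

Because CreditLimit depends on nothing outside the domain, it can be changed (and tested) without touching the database or UI code.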

Building a domain model and isolating it from application and infrastructure concerns does not, by itself, mean that you are practicing domain-driven design. Left to their own devices, or with only written specifications to work from, a development team will never build a domain model that approaches the SMEs’ mental model. Achieving this requires a high level of communication, and we feel that the Adaptive approach – using techniques like EventStorming – is both innovative and unique in this regard.

Dealing with Complexity

In commodity trading, a trade is many things at once: a deal negotiated with a counterparty for the trader, an entity that affects risk and P&L for the middle office, something that causes cash flows on the balance sheet for the accountant, and possibly other things too. In other words, different parts of the business deal with the trade in different ways and perform different actions in relation to the same trade. Multiple business processes are involved.

Adaptive’s approach is that, rather than having a single trade model that satisfies all purposes and interacts with all of a trade’s business processes, we define separate models, each within its own bounded context, that interact only with the processes they need to. We realize the boundaries through language.

Sometimes people with different roles describe things that superficially sound similar; sometimes they refer to the same concept by different names. These differences in language help suggest boundaries between the models, which can then be clarified through conversations with SMEs. Within a bounded context, a single name must refer to a single concept in the model, and vice versa. This is the ‘ubiquitous language’: a set of terms common to SMEs and developers that is used throughout the system, whether in code, documentation, or anywhere else.
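
A short sketch, with invented fields, of how the same word can legitimately name different concepts in different bounded contexts – here, TypeScript namespaces keep the models apart:

```typescript
namespace Trading {
  // To a trader, a trade is a negotiated deal.
  export interface Trade {
    tradeId: string;
    counterparty: string;
    commodity: string;
    price: number;
    quantity: number;
  }
}

namespace Risk {
  // To the middle office, the "same" trade is a position affecting P&L.
  export interface Trade {
    tradeId: string;
    deltaExposure: number;
    markToMarket: number;
  }
}

namespace Accounting {
  // To an accountant, it is a source of cash flows.
  export interface Trade {
    tradeId: string;
    cashFlows: { date: Date; amount: number; currency: string }[];
  }
}
```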

It might initially appear that having multiple models will add both complexity and development time. However, in the medium to long term, it is precisely this separation that enables development speed to increase. If we model the boundaries accurately, then small changes to business processes result in only small changes to the model in code. This creates a virtuous circle: the domain model can quickly be adapted to match the business users’ mental model more closely, which in turn makes it more adaptable to further changes. Development accelerates as the project matures and the domain model becomes richer and more accurate. Additionally, work can be divided between people and teams more easily, as different teams can work on different models without stepping on each other’s toes. Development can be parallelized more easily, and value delivered sooner.

Commodity Trading Business Processes Are Event-Driven

In businesses, people do things in response to other things happening – another person performing an action, say, or a system making data available. This is one of the reasons EventStorming works so well for modelling business processes, and it follows that systems designed to model these processes should themselves be event-based. In practical terms, this means that individual components send messages to each other. To integrate these components, we only need to consider the messages they consume and send; the inner workings of each component remain black boxes to anything outside them.
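
A minimal sketch of this style of integration – the bus and message names are illustrative, not any specific product – where components interact solely through the messages they publish and subscribe to:

```typescript
type Message = { type: string; payload: unknown };
type Handler = (msg: Message) => void;

// A trivial in-process message bus for illustration.
class MessageBus {
  private subscribers = new Map<string, Handler[]>();

  subscribe(type: string, handler: Handler): void {
    const handlers = this.subscribers.get(type) ?? [];
    this.subscribers.set(type, [...handlers, handler]);
  }

  publish(msg: Message): void {
    for (const handler of this.subscribers.get(msg.type) ?? []) {
      handler(msg);
    }
  }
}

const bus = new MessageBus();

// The risk component reacts to trades without knowing who produced them;
// each component is a black box behind its messages.
bus.subscribe("TradeCaptured", (msg) => {
  console.log("risk: recalculating exposure for", msg.payload);
});

bus.publish({ type: "TradeCaptured", payload: { tradeId: "T-42" } });
```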

Eventual Model Consistency

One of the consequences of modelling a system in this way is that components communicating via messages are eventually consistent: by their nature, messages take a non-zero amount of time to propagate from one component to another. For many purposes, such as viewing data, this is normally adequate. What we see is never completely up to date anyway – even in a traditional architecture there is latency between a request being received from a user, processed by the system, and displayed on screen – and some extra messaging will not generally make much difference. To mitigate potential issues, we push updates through to the UI as they are processed. The UI doesn’t simply get a one-time view of the data; instead, it subscribes to a stream of data. If any updates are processed after the user makes the initial request, they are pushed to the UI in near real-time.
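
A minimal sketch of this subscription pattern – the names are illustrative – in which a new subscriber immediately receives the latest state and every subsequent update is pushed as it is processed:

```typescript
type Listener<T> = (value: T) => void;

class DataStream<T> {
  private listeners: Listener<T>[] = [];
  constructor(private current: T) {}

  // New subscribers immediately see the latest state...
  subscribe(listener: Listener<T>): void {
    this.listeners.push(listener);
    listener(this.current);
  }

  // ...and every update is pushed to them as it is processed.
  push(value: T): void {
    this.current = value;
    this.listeners.forEach((l) => l(value));
  }
}

const positions = new DataStream({ tradeId: "T-42", quantity: 100 });
positions.subscribe((p) => console.log("UI render:", p));
positions.push({ tradeId: "T-42", quantity: 150 }); // pushed in near real-time
```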

In some cases, notably where the system needs data in order to do processing, the data must be consistent. This is normally to protect invariants in the domain model, e.g. ensuring a counterparty doesn’t breach their credit limit while placing orders. When data is modified, we need to uphold any invariants, and how this is done depends on the data store being used: it could be a transaction on a relational database, or a compare-and-swap operation on a document database or event log.
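
As an illustrative sketch of the compare-and-swap approach – the in-memory store and names below are assumptions, not a specific product – an append succeeds only if no other writer has modified the log since it was read:

```typescript
class ConcurrencyError extends Error {}

class EventLog {
  private events: string[] = [];

  read(): { events: string[]; version: number } {
    return { events: [...this.events], version: this.events.length };
  }

  // Compare-and-swap: append only if the log is still at expectedVersion.
  append(event: string, expectedVersion: number): void {
    if (this.events.length !== expectedVersion) {
      throw new ConcurrencyError("log has moved on; re-read and retry");
    }
    this.events.push(event);
  }
}

// Usage: read the current state, check the invariant against it, then
// append with the version we read; a concurrent writer forces a retry.
const log = new EventLog();
const { version } = log.read();
// ...check the credit limit invariant against the replayed state here...
log.append("OrderPlaced", version);
```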

Another approach that we have used with great success is to serialize incoming requests onto a queue. These are recorded in a log and applied in sequence to an in-memory representation of the domain model. This avoids the need for the compare-and-swap operation, as the latest state of the model is always available in memory, and it can be useful for high-performance, low-latency processing. More details on this approach can be found in an Adaptive white paper (Deheurles, 2017).
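
A simplified sketch of that queued, single-writer approach – the command type, limit, and journal here are invented for illustration – in which requests are applied strictly in sequence to an in-memory model:

```typescript
type Command = { type: "PlaceOrder"; orderValue: number };

class InMemoryModel {
  private exposure = 0;
  private readonly journal: Command[] = []; // stands in for the durable log

  // Commands are applied in sequence by a single writer, so invariants
  // are always checked against the latest state; no compare-and-swap.
  apply(command: Command): boolean {
    this.journal.push(command);
    if (this.exposure + command.orderValue > 1_000_000) {
      return false; // would breach the (illustrative) credit limit
    }
    this.exposure += command.orderValue;
    return true;
  }
}

const model = new InMemoryModel();
const queue: Command[] = [
  { type: "PlaceOrder", orderValue: 400_000 },
  { type: "PlaceOrder", orderValue: 700_000 }, // rejected: would breach limit
];
queue.forEach((cmd) => console.log(model.apply(cmd)));
```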

What is stored in each component – the consistency boundary – is determined simply by what must be consistent: what are the invariants we are trying to protect? The consistency boundary may be small or large depending on the requirements. For an order management component that matches buys and sells together, it may be the entire order book.
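
As an illustrative sketch of such a boundary – with deliberately simplified fields and matching rules – the whole order book below forms a single aggregate, so matching always sees a consistent view of every resting order:

```typescript
interface Order { id: string; side: "buy" | "sell"; price: number; }

class OrderBook {
  private resting: Order[] = [];

  // All mutations go through the aggregate; matching an incoming order
  // against resting orders must see the entire book atomically.
  place(order: Order): Order | undefined {
    const idx = this.resting.findIndex(
      (o) =>
        o.side !== order.side &&
        (order.side === "buy" ? order.price >= o.price : order.price <= o.price)
    );
    if (idx >= 0) {
      return this.resting.splice(idx, 1)[0]; // matched and removed
    }
    this.resting.push(order); // no match: rests on the book
    return undefined;
  }
}

const book = new OrderBook();
book.place({ id: "1", side: "sell", price: 10 });
console.log(book.place({ id: "2", side: "buy", price: 11 })); // matches order 1
```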

Summary

Developing a common understanding is key to a project’s success, and Adaptive has considerable expertise in using the methods above to deliver hugely complex, real-time systems. The design approach is key to building a model of the business processes that can be turned into software that is flexible and will grow with the business. Part three of this blog series will dive deeper into our innovative approach.


Author

Matt Barrett
Co-founder and Chief Executive Officer