Rethinking CTRM – Part Three


In the previous two articles we looked at why a rethink around CTRM software is needed and then at the analysis and design side of things, outlining and proposing new approaches to dealing with highly complex businesses in the commodities industries. Next, we will take a look at physical architecture considerations.

Historically, and perhaps by necessity, CTRM solutions have tended to be monolithic. As the business changes, often suddenly and dramatically, it becomes harder and costlier to keep these monolithic solutions up to date. With the advent and broader acceptance of the cloud in the commodities space, there has already been a move toward ecosystems of components in the cloud: perhaps even a mix of vendor-provided and custom-developed components, knitted together with APIs, providing greater flexibility and agility. Building systems as components that communicate via messages allows us to distribute those components as microservices much more easily than a single monolithic model would.

However, distributed systems come with costs of their own. Network failures, latency, and bandwidth all become much more significant concerns for communication between components. Additionally, we must monitor services to check that the system is working as expected, and when there are connection problems it can be difficult to understand what has happened. Each project has unique requirements in this regard, though Adaptive has found it is often prudent to deploy components together initially and move to a distributed model only when it is needed.

User Interface Design

In the previous blog post, we discussed process versus data design, and this difference in approach is also key when designing the user interface and when deciding how to store data. It is Adaptive’s view that a good domain model will continue to evolve, moving ever closer to the subject matter expert’s mental model of the business. As that knowledge is gained and the model evolves, it should remain as independent of the UI as possible: a single component may not correspond to a single screen, and vice versa. The UI should not be designed around the data model, for this and several other reasons, including:

  • If we send the data from a screen to the model in one message, it may need to go to multiple components, meaning that extra coordination is required to ensure all of them have processed the message from the UI. In practice, this is not feasible for every interaction between the UI and the model, so the domain model eventually becomes constrained by the UI design and becomes inflexible.
  • Commands represent a single action performed by a user. As such, they are generally a lot more granular than the data captured in a whole screen. This allows us to direct a command to the appropriate component for processing. As the model evolves, we may send a command to a different component, but as it represents a single part of a business process it is unlikely to be processed by multiple components.

The alternative approach is that if we have modelled the business processes as sequences of actions and events, then task-based UIs can be built to allow users to perform these actions. It is not good enough to simply construct forms for data entry. Different business processes may involve the same data capture, but we want to differentiate between them and record the user’s intent.

Task-based UIs are modelled around the tasks that users perform - the business processes - rather than simply being data entry forms. We can break down the data we capture into multiple commands that represent the intent of the user. Each command is applied to the model immediately and we can use that information to provide more context about the current task the user is performing.
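As a sketch of this idea (the command names and fields below are illustrative, not taken from any particular product), the data captured on a single trade screen can be decomposed into granular commands that each record a distinct user intent:

```python
from dataclasses import dataclass

# Hypothetical commands: each captures a single user intention,
# rather than a whole screen's worth of data in one message.
@dataclass(frozen=True)
class BookTrade:
    trade_id: str
    instrument: str
    quantity: int
    price: float

@dataclass(frozen=True)
class CorrectTradePrice:   # intent: fix a mis-keyed price
    trade_id: str
    price: float

@dataclass(frozen=True)
class AmendTradePrice:     # intent: a renegotiation agreed with the counterparty
    trade_id: str
    price: float

# The two price commands carry identical data, but the recorded intent
# differs, so downstream components (audit, P&L, confirmations) can
# route and process them differently.
cmd = CorrectTradePrice(trade_id="T-42", price=101.5)
```

Because each command represents one step of a business process, it can be directed to the single component responsible for it, rather than coordinated across several.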

Using Different Data Stores

Once we have developed task-based user interfaces, it becomes apparent that the data captured in commands is different from the data that is queried when the user makes decisions. For example, while the information needed to book a trade may consist of a few simple attributes, the information needed to actually make the trading decision is significantly different: we likely need to show current prices, PnL, and more. This means that the data stored from commands does not have to live in the same physical database as the data that is queried to show results on screen. It is a well-known issue with CTRM solutions built around a single relational database that optimizing for both reads and writes is a major challenge. Separating the two stores allows us to optimize each independently, based on its use cases.

The data store for actions, the write store, does not have any queries performed against it. All it must do is load its current state, and as we are already using events for communication between components, one option is to simply store these events in a log.

Initial State + Event => New State

To load the current state, we simply accumulate the events that have occurred: the current balance of a bank account is the sum of all debits and credits that have been made to the account. In some cases, we may take snapshots of the current state rather than rebuilding from the underlying events; this is done when many events need to be processed and performance is a concern. The read store can use the same events and project them into an alternative view of the data that is more suitable for querying. This view is never changed directly by user actions, only by the events that have been accepted and processed by the domain model.
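A minimal sketch of this accumulation, using the bank-account example above (the event names and the point at which a snapshot is taken are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deposited:
    amount: int

@dataclass(frozen=True)
class Withdrew:
    amount: int

def apply(balance: int, event) -> int:
    """Initial State + Event => New State."""
    if isinstance(event, Deposited):
        return balance + event.amount
    if isinstance(event, Withdrew):
        return balance - event.amount
    return balance  # unknown event types leave state untouched

def load(events, snapshot: int = 0) -> int:
    """Rebuild current state by folding events over an optional snapshot."""
    balance = snapshot
    for event in events:
        balance = apply(balance, event)
    return balance

log = [Deposited(100), Withdrew(30), Deposited(5)]
print(load(log))                    # 75: the sum of all credits and debits
print(load(log[2:], snapshot=70))   # 75: same state, rebuilt from a snapshot
```

The snapshot simply replaces the prefix of the log it was taken from; replaying the remaining events yields the same state as replaying everything.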

Another common requirement is for data in many different formats. For example, we might use a document database to provide information for transactional screens, but also want an OLAP database for reporting queries. Once we have separated the data into a read store and a write store, it is not much of a leap to have multiple read stores for these different purposes. The events are published to all read stores, so our reporting database is kept up to date at the same time as our transactional database. With this approach we can have near real-time data in multiple databases, each suited to the different queries we want to perform. Similarly, if we need to scale out multiple instances of the same type of read store, we can do so in the same way.
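A sketch of publishing the same events to more than one read store; the two projections here are simplified in-memory stand-ins for, say, a document database serving screens and an OLAP store serving reports (event shapes and names are illustrative):

```python
class TradeDocumentView:
    """Projection for transactional screens: one record per trade."""
    def __init__(self):
        self.trades = {}

    def handle(self, event):
        if event["type"] == "TradeBooked":
            self.trades[event["trade_id"]] = event

class VolumeByInstrument:
    """Projection for reporting: aggregate traded volume per instrument."""
    def __init__(self):
        self.volume = {}

    def handle(self, event):
        if event["type"] == "TradeBooked":
            key = event["instrument"]
            self.volume[key] = self.volume.get(key, 0) + event["quantity"]

read_stores = [TradeDocumentView(), VolumeByInstrument()]

def publish(event):
    # Every accepted event goes to all read stores, keeping the
    # transactional and reporting views up to date together.
    for store in read_stores:
        store.handle(event)

publish({"type": "TradeBooked", "trade_id": "T-1", "instrument": "WTI", "quantity": 10})
publish({"type": "TradeBooked", "trade_id": "T-2", "instrument": "WTI", "quantity": 5})
```

Adding a new read model, or a second instance of an existing one, is just a matter of subscribing another projection to the same event stream.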

How This Approach Benefits the Business

If we store our data as a sequence of events, it can have huge business value. As we have modelled business processes, the events we capture describe what has happened in the business. We can use the events to provide alternative views of the data for business intelligence purposes.

Example: Warehouse operations

The current inventory in a warehouse is the accumulation of all the goods that have moved in and out.

We can calculate the current inventory balances from events as goods are moved, but this is not the primary reason for storing the events. If we have the history of movements, we can derive a great deal of other information too: how many times we have run out of a particular product, what the average utilization of the warehouse is, which goods have been in the warehouse the longest, and much more besides. We can extend the techniques we use to build multiple read stores from events to construct ad hoc reports. The events we capture now can be used to build reports in the future. We cannot predict the future needs of the business; nevertheless, by storing what has happened, we give ourselves the opportunity to build as-yet-unknown reports from this information.
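A sketch of deriving both the inventory balance and one of those extra measures, the stock-out count, from the same movement events (the event shape here is an illustrative simplification):

```python
movements = [  # (product, quantity moved); negative = goods out
    ("copper", +100),
    ("copper", -100),   # warehouse runs out of copper
    ("copper", +40),
    ("zinc",   +25),
    ("copper", -40),    # runs out again
]

def inventory(events):
    """Current balance per product: the accumulation of all movements."""
    stock = {}
    for product, qty in events:
        stock[product] = stock.get(product, 0) + qty
    return stock

def stockouts(events, product):
    """How many times a product's balance hit zero after goods moved out."""
    level, count = 0, 0
    for p, qty in events:
        if p != product:
            continue
        level += qty
        if qty < 0 and level == 0:
            count += 1
    return count

print(inventory(movements))            # {'copper': 0, 'zinc': 25}
print(stockouts(movements, "copper"))  # 2
```

The point is that `stockouts` was never anticipated when the events were recorded; because the history of movements exists, the report can be built retroactively over all past data.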

Causal relationships

Another thing we can do is add metadata to the events to provide additional information about how they are linked together. For example:

  • Correlation IDs that let us see when a single action has caused multiple related events
  • Causation IDs that tell us what caused a particular event to be emitted.

Doing this means that, as well as knowing what has happened in our business, we can start to understand why things have happened. We may arrive at the same state via multiple routes, but being able to trace back from that state through the events that led to it allows us to make decisions based on causal factors.
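A sketch of this metadata in action (the field names follow one common convention and are not a standard): each event carries the ID of the command or event that caused it, plus the correlation ID of the original action, so the chain behind any resulting state can be walked backwards.

```python
events = [
    {"id": "e1", "type": "TradeBooked",      "correlation_id": "c1", "causation_id": "cmd1"},
    {"id": "e2", "type": "PositionUpdated",  "correlation_id": "c1", "causation_id": "e1"},
    {"id": "e3", "type": "ConfirmationSent", "correlation_id": "c1", "causation_id": "e2"},
]

by_id = {e["id"]: e for e in events}

def trace_back(event_id):
    """Walk causation links from an event back toward the originating command."""
    chain = []
    current = by_id.get(event_id)
    while current is not None:
        chain.append(current["id"])
        current = by_id.get(current["causation_id"])  # stops at the command
    return chain

def correlated(correlation_id):
    """All events produced, directly or indirectly, by one user action."""
    return [e["id"] for e in events if e["correlation_id"] == correlation_id]

print(trace_back("e3"))   # ['e3', 'e2', 'e1']
print(correlated("c1"))   # ['e1', 'e2', 'e3']
```

Here `correlated` answers "what did this action cause?" while `trace_back` answers "why did this event happen?", which is exactly the causal view described above.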


In fact, most CTRM solutions, both commercial and custom-developed, are based on data-driven design, making them inflexible in the longer term as well as more complex and difficult to maintain over time. Many also rely on a one-size-fits-all relational database to store data, with screens that simply reflect the data model. Not only is this approach suboptimal, it does not reflect the way the business actually runs, nor does it provide the ability to gain added business value from the solution. Adaptive’s innovative approach, based on years of successfully delivering critical real-time systems across the financial services industry, allows users to extract significant value through business-process-driven design and optimal physical implementations.

Read Part One

Read Part Two

Read Part Three

Read Part Four



Matt Barrett

CEO and co-founder,
Adaptive Financial Consulting

