Using Domain Driven Design To Build Enterprise Systems

Introduction

In a modern business, it is almost a truism to say that technology cannot be treated as a cost centre. Studies have shown that companies that invest in technology to gain a competitive advantage and enable innovation are considerably more successful than those that do not. One paper found that investment in technology increased profitability significantly more than advertising and R&D did, and that technology investments have a stronger effect on profitability through revenue growth than through reducing operating costs (Mithas, et al., 2012).

Additionally, data and the analyses performed on that data are fast becoming assets that are at least as valuable as the systems that capture the data. 78% of larger employers agreed that data collection and analysis have the potential to completely change the way they do business (Ovenden, 2018).

How, then, do you build systems that capture relevant data about the business which are also able to adapt rapidly in response to a changing environment? When innovation is required, technology must not be the bottleneck.

At Adaptive, we have found that combining domain-driven design techniques with compelling user experiences and strong DevOps practices allows us to deliver value as soon as possible and continue iterating quickly to satisfy our clients’ needs in the short and long terms.

This document explains how we apply these practices to design and build large scale enterprise systems and the benefits they bring, in particular:

  • How to decide when these techniques are applicable
  • The importance of communication and a collaborative environment to deliver business value
  • Modelling techniques to rapidly gain understanding of a business domain
  • How messaging enables separation of components and rapid development
  • Why domain modelling affects the UI as much as the server
  • How we can provide different models to query data in different ways
  • Why DevOps practices and cloud deployment add value

When should we apply these techniques?

It is important to be aware of when and when not to use the techniques in this document. Not every system warrants the investment in analysis and development prescribed here. The systems that do, those that sit at the heart of a company’s core business processes, need to be identified effectively.

In many cases there are obvious core systems that will benefit from these techniques. Where there is uncertainty we can turn to more sophisticated methods to aid us in making the decision.

Mapping the company landscape

One methodology that we have successfully used to do this is Value Chain Mapping, also known as Wardley Mapping (Wardley, 2016). We construct a two-dimensional map showing where components exist on a value-chain axis and a life cycle axis.

The y-axis shows the value chain. Components that are higher on this axis are more visible to users. The x-axis shows the life cycle. Components further to the right are more highly commodified.

In this example, an HR system is classified as a product. A financial services organisation will likely not be innovating in the field of HR, so an off-the-shelf system is more suitable than building a custom piece of software in this case.
A pricing engine, on the other hand, sits at the heart of innovation, and pricing models are key differentiators for the business. Even though this may be further from end users than the HR system, it is closer to the start of the evolution life cycle.

The genesis stage of evolution represents ideas that are continually being iterated upon; those that would almost certainly warrant bespoke development. In general, the systems in the top-left section of the map are those that would benefit from the techniques described here.

For components in this section of the map, commercial off-the-shelf systems are not a good fit. As business processes change, the system remains static. Buying an off-the-shelf system with the intent to customise it is often a costly mistake. If you can make any modifications, they will likely be limited in scope, and any changes prevent easy upgrades to future versions. If the vendor makes modifications on your behalf, they often form part of the next commercial version, and you lose any competitive advantage and have no ownership of the IP.

As time moves on, components generally move further to the right on the diagram. What was once novel becomes more mainstream and eventually turns into a commodity.

For organisations to retain their competitive advantage, they must continue to innovate and introduce new systems and processes.

These systems must be adaptable to rapid change as the business processes evolve. To achieve that, we apply domain-driven design principles.

Introducing domain-driven design (DDD)

What is DDD?

DDD is not a development methodology. Rather, it is a way of performing analysis and design that allows you to model the business domain as it is understood by the people working in the business, often known as subject matter experts (SMEs), and to capture that model in software.

“All models are wrong, but some are useful”
George Box, Statistician.

While the quote above references statistical models, it applies equally to software models.
We can never model the entirety of what the business does, but that doesn’t mean a model isn’t useful. We should strive for maximum utility rather than trying to model every nuance of how the business works.
The first iteration of any model is never complete, and it may model certain concepts in a naïve way. If it can deliver some value, however, then it is a candidate to be released to production and iterated upon.

The model must be malleable. As we discover new insights, we must be able to incorporate them into the model with the minimum of effort.

While it is not a development methodology, DDD does have an impact on the architecture and development techniques used to build software. To apply DDD successfully, infrastructure-layer concerns, such as dealing with databases or networks, and application-layer concerns, such as coordinating requests from the UI, must be separated from the domain model. The domain layer should contain only business logic, so the model can evolve without developers having to be concerned with complexities external to the domain.

Architecture of a domain model

Hexagonal architecture (Cockburn, 2005), also known as Ports and Adapters, is an excellent fit for building domain models of this sort.

The domain model defines ports, which are implemented as interfaces in code. The concrete implementations of the interfaces live outside of the domain model, in the application layer. These are the adapters that ‘plug in’ to the domain model’s ports.

This allows the domain model to remain focussed on domain logic, while still being able to communicate to the outer layers via its ports.
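
As a rough illustration, the sketch below shows a port defined inside the domain model and an adapter supplied from outside it. All names (Trade, TradeRepository, TradeBookingService, InMemoryTradeRepository) are hypothetical, and TypeScript is used only for brevity; this is a sketch of the pattern, not a prescribed implementation.

```typescript
// Domain layer: the model and the port it defines.
interface Trade {
  id: string;
  counterparty: string;
  notional: number;
}

// Port: the domain declares what it needs from the outside world.
interface TradeRepository {
  save(trade: Trade): Promise<void>;
}

class TradeBookingService {
  constructor(private readonly trades: TradeRepository) {}

  async bookTrade(trade: Trade): Promise<void> {
    if (trade.notional <= 0) {
      throw new Error("Notional must be positive"); // domain rule, no infrastructure here
    }
    await this.trades.save(trade); // infrastructure is reached only through the port
  }
}

// Application layer: an adapter that plugs in to the domain model's port.
class InMemoryTradeRepository implements TradeRepository {
  private readonly store = new Map<string, Trade>();
  async save(trade: Trade): Promise<void> {
    this.store.set(trade.id, trade);
  }
}

// Wiring happens outside the domain; swapping this adapter for a database-backed
// one requires no change to TradeBookingService.
const bookingService = new TradeBookingService(new InMemoryTradeRepository());
void bookingService.bookTrade({ id: "T-1", counterparty: "ACME", notional: 1_000_000 });
```

Because the domain code depends only on the interface, the business rules can be exercised in tests with a trivial in-memory adapter.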

Building a domain model and isolating it from application and infrastructure concerns does not imply that you are practicing domain-driven design, though. Left to their own devices, or with only written specifications to work from, a development team will never build a domain model that approaches the SME’s mental model.

To achieve this, high-bandwidth communication is necessary.

Communication and language are key

“It’s developers’ misunderstandings that get released to production”
Alberto Brandolini, DDD practitioner.

If we wish to build a useful model of the domain, then everyone involved must have a shared understanding of it. The more steps it takes to get from the mind of an SME to working software, the higher the chance of something being misunderstood.

A bad domain model may still pass functional tests, at least initially. Deficiencies become apparent when modifications are required, however. Subsequent changes become slower. Large estimates are received for ostensibly trivial changes. As time progresses, development can almost grind to a halt. A poor model is an insidious problem that may not be noticed until it becomes prohibitively expensive to fix.

In our experience, the shorter the path between the SMEs who understand the model and the developers who build the model, the more successfully the techniques described here can be applied, and the more adaptable the model is. Both parties should be talking to each other on a regular basis. Ideally, SMEs should be embedded within the project team itself.

In successful teams, the lines between business and technology often become blurred. There is simply a single team delivering a common business goal.

Modelling techniques

At the start of any non-trivial project, the most critical task is to develop shared understanding. The sooner everyone understands the details of the model, the sooner value can be delivered.
We have found that a short knowledge gathering phase at the start of a project is an excellent way to build this understanding.

One technique we have used to do this effectively is EventStorming. As well as sharing knowledge, it produces artefacts that can directly assist with the design and development of the domain model.

EventStorming involves modelling business processes via sticky notes on a wall. As many project stakeholders as possible should be involved in this modelling. Each stakeholder adds notes that represent things that happen within their area of expertise. These are all placed on the same wall so that we can combine the knowledge of multiple people and form a timeline of the overall business process.

Having a common canvas allows us to build a shared understanding of how things interrelate.
While it may be expensive to involve multiple stakeholders in a workshop, we have found that the understanding gained pays off in quicker releases to production and earlier return on investment. The cost of building the wrong thing outweighs the cost of people’s time to ensure we build the right thing.

After an initial whole project kick-off session, more focussed EventStorming workshops can be held with smaller audiences. It is essential that collaborative domain modelling happens regularly throughout the project. Even with powerful techniques like this, the initial model will need refinement as development progresses.

In some cases, domain modelling reveals competing business processes. In these instances, the output of the modelling may not be software, but business change. A business change programme can work alongside a software model, but we should not rely on driving the change by writing software, as we will not be able to realise the mental model of the SMEs. We cannot implement contradictions, so we must reconcile the competing processes before attempting to create a model in software.

Modelling processes rather than data

Many traditional architectures are data-centric. A conventional project using a relational database often starts by building a data model of related entities and the attributes of those entities.

With DDD we focus on modelling the actions that can be performed rather than the data that is captured. Each action necessarily captures some data, but the driver is the action.

Data capture alone is usually the result of multiple steps in a business process. SMEs think in terms of the process they follow rather than just the data that the system captures. If we wish to model the system according to their mental model, then the system must model the business processes rather than only the data.
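
As a hedged illustration of the difference, the TypeScript sketch below contrasts a data-centric record with a process-centric model in which each business action is an explicit command that produces an event; the command and event names are invented for the example.

```typescript
// Data-centric view: a row is saved, but the business step that caused it is lost.
interface OrderRow {
  id: string;
  quantity: number;
  status: string;
}

// Process-centric view: each step in the business process is an explicit action.
type OrderCommand =
  | { type: "PlaceOrder"; orderId: string; instrument: string; quantity: number }
  | { type: "AmendOrderQuantity"; orderId: string; newQuantity: number }
  | { type: "CancelOrder"; orderId: string; reason: string };

type OrderEvent =
  | { type: "OrderPlaced"; orderId: string; instrument: string; quantity: number }
  | { type: "OrderQuantityAmended"; orderId: string; newQuantity: number }
  | { type: "OrderCancelled"; orderId: string; reason: string };

// Each action still captures data, but the action itself is the driver,
// and the resulting event records what happened in the business.
function handle(command: OrderCommand): OrderEvent {
  switch (command.type) {
    case "PlaceOrder":
      return { type: "OrderPlaced", orderId: command.orderId, instrument: command.instrument, quantity: command.quantity };
    case "AmendOrderQuantity":
      return { type: "OrderQuantityAmended", orderId: command.orderId, newQuantity: command.newQuantity };
    case "CancelOrder":
      return { type: "OrderCancelled", orderId: command.orderId, reason: command.reason };
  }
}
```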

People with different jobs follow different processes

In financial systems, a trade is a deal negotiated with a counterparty to a trader, an entity that affects risk and P&L to middle office, something that causes cash flows on the balance sheet to an accountant, and possibly other things too.
Different parts of the business deal with the trade in different ways and perform different actions in relation to the trade.

Multiple business processes are involved. How do we build a clean model?

Rather than having a single trade model that satisfies all purposes and interacts with all the business processes of a trade, we can define separate models, known as bounded contexts, that only interact with the processes they need to.

We realise the boundaries through language. Sometimes people with different roles describe things that ostensibly sound the same. Maybe people refer to the same concept with different names. These differences in language help suggest boundaries between the models, which can then be clarified by conversations with SMEs.

Within a bounded context, a single name must refer to a single concept in the model and vice-versa. This concept is known as ‘ubiquitous language’, a set of terms common to SMEs and developers that is used throughout the system, whether in code, documentation, or anywhere else.
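
To make the idea concrete, the sketch below imagines two bounded contexts that deal with the ‘same’ trade. Each context has its own model and its own ubiquitous language; the attribute names are illustrative only.

```typescript
// trading/trade.ts -- in the trading context, a trade is a deal with a counterparty.
export interface Trade {
  tradeId: string;
  counterparty: string;
  price: number;
  quantity: number;
}

// accounting/cash-flow.ts -- the accounting context never talks about a 'deal';
// it models the cash flows that a trade causes on the balance sheet.
export interface CashFlow {
  tradeId: string;        // shared identity is how the contexts relate
  settlementDate: string; // ISO date
  amount: number;
  currency: string;
}
```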

On the surface, it appears that having multiple models will add complexity and development time. In the medium to long term though, it is precisely this separation that enables development speed to increase as time goes on.
If we model the boundaries accurately, then small changes to the business processes cause small changes to the model in code.

This leads to a virtuous circle where the domain model can quickly be adapted to more closely match the business users’ internal model, which means it is subsequently more adaptable to further changes. Development accelerates as the project matures and the domain model becomes richer and more accurate.

Additionally, work can be divided between people and teams more easily. Different teams can work on different models without stepping on each other’s toes. Development can be parallelised more easily, and value delivered sooner.

Canonical data models

Sometimes organisations define a canonical data model (CDM) for the entities within their business. How does this work with multiple domain models?

A CDM’s benefit usually lies in data integration: it ensures that different systems can talk in a common language.
If the CDM does not reflect the reality of how people in the organisation think of things, then it should not be used as a basis for internal domain models in systems.

We define anti-corruption layers around our domain models to translate from the more general CDM to the specifics of our domain model and vice-versa.

This enables the domain model to evolve independently of the CDM, and still be able to provide data in the correct format for integration.
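
A minimal sketch of such an anti-corruption layer is shown below, assuming a hypothetical CDM trade shape and an equally hypothetical pricing-context model; the translation function is the only code that knows about both.

```typescript
// The organisation-wide CDM representation used for integration (hypothetical shape).
interface CdmTrade {
  TradeId: string;
  CounterpartyCode: string;
  Legs: { PayReceive: "Pay" | "Receive"; Notional: number; Currency: string }[];
}

// The pricing context's own, simpler model (also hypothetical).
interface PricingTrade {
  id: string;
  counterparty: string;
  netNotionalByCurrency: Map<string, number>;
}

// Anti-corruption layer: translates from the general CDM to the specific domain model.
function fromCdm(cdm: CdmTrade): PricingTrade {
  const netNotionalByCurrency = new Map<string, number>();
  for (const leg of cdm.Legs) {
    const signed = (leg.PayReceive === "Pay" ? -1 : 1) * leg.Notional;
    netNotionalByCurrency.set(leg.Currency, (netNotionalByCurrency.get(leg.Currency) ?? 0) + signed);
  }
  return { id: cdm.TradeId, counterparty: cdm.CounterpartyCode, netNotionalByCurrency };
}
```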

If a CDM is already in place, it can be tempting to start development of a new project using the canonical data structures. However, this means that it becomes difficult to reach a virtuous circle due to the gap between how the SMEs think and how the system is modelled.
If there is a desire to use the CDM more widely, then it should be treated as a business change exercise to align the business to the model, rather than a technical one.

Communicating between different parts of the system

Business processes are event-driven

People do things in response to other things happening, such as another person having performed an action, or a system making data available. This is one of the reasons EventStorming works so well for modelling business processes.
It follows that systems designed to model these processes should be event-based. In practical terms, this means that individual components send messages to each other.

To integrate components, we only need to consider the messages they consume and emit. The inner workings of each component are a black box to anything outside it.

Eventual Consistency

One of the consequences of modelling a system in this way is that components communicating via messages are eventually consistent. By their nature, messages take a non-zero amount of time to propagate from one component to another.
For many purposes, such as viewing data, this is normally adequate. What we see is never completely up to date. Even in a traditional architecture there is latency between a request being received from a user, processed by the system, and displayed on screen. Adding some extra messaging will not generally make much difference.

To mitigate potential issues, we ensure that we push updates through to the UI as they are processed. The UI doesn’t simply get a one-time view of the data. Instead, it subscribes to a stream of data. If any updates are processed after the user makes the initial request, then they are pushed to the UI in near real-time.
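
The sketch below illustrates the idea with a bare-bones subscription in TypeScript; the type names are invented for the example, and in a real system the transport would be WebSockets or a messaging library rather than in-process callbacks.

```typescript
interface Position {
  instrument: string;
  quantity: number;
}

type Unsubscribe = () => void;

class PositionStream {
  private readonly subscribers = new Set<(position: Position) => void>();

  // The UI subscribes to a stream rather than fetching a one-off snapshot.
  subscribe(onUpdate: (position: Position) => void): Unsubscribe {
    this.subscribers.add(onUpdate);
    return () => this.subscribers.delete(onUpdate);
  }

  // Called on the server side as each new update is processed.
  publish(position: Position): void {
    this.subscribers.forEach((notify) => notify(position));
  }
}

const stream = new PositionStream();
const unsubscribe = stream.subscribe((p) => console.log(`render ${p.instrument}: ${p.quantity}`));
stream.publish({ instrument: "EUR/USD", quantity: 5_000_000 }); // pushed to the UI as it happens
unsubscribe();
```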

In some cases, notably where the system needs data to do processing, the data must be consistent. This is normally to protect invariants in the domain model, e.g. ensuring a counterparty doesn’t break their credit limit while placing orders.

When data is modified, we need to uphold any invariants. How this is done depends on the data store being used. It could be a transaction on a relational database or a compare-and-swap operation on a document database or event log.
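
As one hedged example of the compare-and-swap style, the sketch below only appends an event if the stream version has not changed since the invariant was checked; the credit-limit figure and type names are illustrative.

```typescript
interface OrderPlaced {
  type: "OrderPlaced";
  counterparty: string;
  value: number;
}

class EventStream {
  private readonly events: OrderPlaced[] = [];

  load(): { events: OrderPlaced[]; version: number } {
    return { events: [...this.events], version: this.events.length };
  }

  // The append only succeeds if nothing else has been written since we read.
  append(event: OrderPlaced, expectedVersion: number): boolean {
    if (this.events.length !== expectedVersion) return false; // caller must reload and retry
    this.events.push(event);
    return true;
  }
}

const CREDIT_LIMIT = 10_000_000; // illustrative figure

function placeOrder(stream: EventStream, order: OrderPlaced): boolean {
  const { events, version } = stream.load();
  const exposure = events.reduce((total, e) => total + e.value, 0);
  if (exposure + order.value > CREDIT_LIMIT) return false; // invariant protected
  return stream.append(order, version); // false means a concurrent write happened
}
```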

Another approach that we have used with great success is to serialise incoming requests onto a queue. These are recorded in a log and applied in sequence to an in-memory representation of the domain model. This avoids the need for the compare and swap operation as the latest state of the model is always available in memory, and can be useful for high-performance, low-latency processing. More details on this approach can be found in another Adaptive white paper (Deheurles, 2017).

What is stored in each component, the consistency boundary, is determined simply by what must be consistent. What are the invariants we are trying to protect? The consistency boundary may be small or large depending on the requirements. For an order management component that matches buys and sells together, it may be the entire order book.

Physical Architecture

What we have described so far is mostly agnostic of physical architecture; we have called things components to avoid being specific about where things live physically.

Multiple components that communicate via messages can be deployed on the same server – even in the same process.

Building systems as components that communicate via messages allows us to distribute the components as microservices much more easily than as a single monolithic model.

However, distributed systems do come with a cost. Network failures, latency, and bandwidth all become much more significant concerns for communication between components.

Additionally, we must monitor services to check the system is working as expected. When there are connection problems, it can be difficult to understand what has happened.

Each project has unique requirements in this regard, though we have found it is often prudent to deploy components together initially and move to a distributed model only when it is needed.

DDD in user interfaces

Task-based user interfaces – allowing the user to perform actions to carry out their job

If we have modelled the business processes as sequences of actions and events, then task-based UIs can be built to allow users to perform these actions. It is not good enough to simply construct forms for data entry. Different business processes may involve the same data capture, but we want to differentiate between them and record a user’s intent.

As their name suggests, task-based UIs, also known as inductive UIs, are modelled around the tasks users perform, the business processes, rather than simply being data entry forms (Microsoft, 2001). We can break down the data we capture into multiple commands that represent the intent of the user. Each command is applied to the model immediately and we can use that information to provide more context about the current task the user is performing.

This approach is common in modern consumer and mobile applications. We can use the same principles for line of business applications.

The image above shows a simple example of a task-based UI in Microsoft Windows 10. In the initial state on the left, we have chosen to use a background picture. The preview in the top section of the screen shows this picture and the bottom section provides extra functions that we might want – e.g. choosing a different picture or fitting it to the monitor in a different way.

As soon as we change the value in the ‘Background’ dropdown it issues a command to change the background style and modify the state of the screen. Selecting ‘Solid colour’ updates the preview in the top section and removes the functionality in the bottom section that was only relevant for pictures. It is replaced by a colour selector that lets the user choose which solid colour they want as their background.

We can use the same principles to build business applications that record the decisions of the users in the form of commands and provide responsive screens that adapt to show what is needed for the business process being followed.
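
The sketch below gives a flavour of this for a hypothetical trade-entry screen: each user decision is issued as a command straight away, and the view state adapts to the business process being followed. The command and field names are invented for the example.

```typescript
type TradeEntryCommand =
  | { type: "SelectProduct"; product: "Spot" | "Forward" }
  | { type: "SetNotional"; notional: number };

interface TradeEntryViewState {
  product: "Spot" | "Forward";
  notional: number;
  showSettlementDatePicker: boolean; // only relevant for forwards
}

// Each command is applied as soon as the user acts, and the screen adapts
// to show what is needed for the process being followed.
function applyCommand(state: TradeEntryViewState, command: TradeEntryCommand): TradeEntryViewState {
  switch (command.type) {
    case "SelectProduct":
      return {
        ...state,
        product: command.product,
        showSettlementDatePicker: command.product === "Forward",
      };
    case "SetNotional":
      return { ...state, notional: command.notional };
  }
}

let viewState: TradeEntryViewState = { product: "Spot", notional: 0, showSettlementDatePicker: false };
viewState = applyCommand(viewState, { type: "SelectProduct", product: "Forward" });
// viewState.showSettlementDatePicker is now true: the screen shows the date picker.
```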

Data-centric UI design leads to an inflexible model

We have discussed how a good domain model will evolve and become ever closer to the SME’s mental model. As knowledge is gained and the model evolves it should remain as independent of the UI as possible. A single component may not correspond to a single screen and vice-versa.

If we send the data from a screen to the model in one message, then it may need to go to multiple components. As we discussed earlier, these are eventually consistent with each other and communicate via messages. This means that extra coordination is required to ensure that all components have processed the message from the UI.

In practice, this is not feasible for every interaction between the UI and model. What often happens is that the domain model becomes constrained by the UI design. This means that we become unable to reach the goal of matching the SME’s internal model, which leads to the same issues that were discussed previously in the ‘Communication and language are key’ section.

Commands represent a single action performed by a user. As such, they are generally a lot more granular than the data captured in a whole screen. This allows us to direct a command to the appropriate component for processing. As the model evolves we may send a command to a different component, but as it represents a single part of a business process it is unlikely to be processed by multiple components.

Command Query Responsibility Segregation (CQRS)

Separating what a user sees and what a user does

Once we have task-based user interfaces it becomes apparent that the data captured in commands is different to the data that is queried for the user to make decisions.

While the information needed to book a trade may consist of a few simple attributes, the information needed to make the decision is significantly different. We likely need to show current prices, P&L, and other things besides.

This means that the data stored from commands does not have to be in the same physical database as the data that is queried to show results on screen.
One of the challenges with an architecture centred around a single relational database is optimising for both reads and writes. Separating the two stores of data allows us to optimise each independently, based on its use cases.

The data store for actions, the write store, does not have queries performed against it. All it must do is load its current state.
As we are already using events for communication between components, one option is to simply store these events in a log.

Initial State + Event => New State

To load the current state, we simply accumulate the events that have occurred. The current balance of a bank account is the sum of all debits and credits that have been made to the account.
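
In code, loading the current state is a fold over the event history, as in the bank-account sketch below; the event shapes are illustrative.

```typescript
type AccountEvent =
  | { type: "Credited"; amount: number }
  | { type: "Debited"; amount: number };

interface AccountState {
  balance: number;
}

const initialState: AccountState = { balance: 0 };

// Initial State + Event => New State
function applyEvent(state: AccountState, event: AccountEvent): AccountState {
  switch (event.type) {
    case "Credited":
      return { balance: state.balance + event.amount };
    case "Debited":
      return { balance: state.balance - event.amount };
  }
}

// Loading the current state accumulates everything that has happened.
const history: AccountEvent[] = [
  { type: "Credited", amount: 100 },
  { type: "Debited", amount: 30 },
];
const current = history.reduce(applyEvent, initialState); // { balance: 70 }
```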

In some cases we may take snapshots of the current state rather than rebuilding from the underlying events. This is done when many events need to be processed and performance is a concern.

The read store can use the same events and project them into an alternative view of the data that is more suitable for querying. This is never changed directly by user actions, only by the events that have been accepted and processed by the domain model.
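
A possible shape for such a projection is sketched below: the same events are replayed into a structure optimised for one particular query. The event and class names are hypothetical.

```typescript
type TradeEvent =
  | { type: "TradeBooked"; tradeId: string; counterparty: string; notional: number }
  | { type: "TradeCancelled"; tradeId: string };

// A read model shaped for one query: exposure per counterparty.
class CounterpartyExposureProjection {
  private readonly exposure = new Map<string, number>();
  private readonly booked = new Map<string, { counterparty: string; notional: number }>();

  // Fed every event accepted by the domain model, in order.
  handle(event: TradeEvent): void {
    if (event.type === "TradeBooked") {
      this.booked.set(event.tradeId, { counterparty: event.counterparty, notional: event.notional });
      this.add(event.counterparty, event.notional);
    } else {
      const trade = this.booked.get(event.tradeId);
      if (trade) this.add(trade.counterparty, -trade.notional);
    }
  }

  query(counterparty: string): number {
    return this.exposure.get(counterparty) ?? 0;
  }

  private add(counterparty: string, amount: number): void {
    this.exposure.set(counterparty, (this.exposure.get(counterparty) ?? 0) + amount);
  }
}
```

The same stream of events can feed several projections of this kind, each optimised for a different use.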

Real-time polyglot data

A common problem across many projects is the need for data in different formats. We might use a document database to provide information for transactional screens, but also want an OLAP database for reporting queries.
With a traditional architecture we might run regular extracts from our transactional database to ensure that the OLAP database is never too far out of date.

Once we have separated the data store into a read store and a write store, then it is not much of a leap to imagine that we can have multiple read stores for different purposes.
The events are published to all read stores, so our reporting database is kept up to date at the same time as our transactional database. With this approach we can have near real-time data in multiple databases, each suited for the different queries we want to perform.

Similarly, if we need to scale out multiple instances of the same type of read store we can do this in the same way.

Gathering business insights from the events that have occurred

If we store our data as a sequence of events, it can have huge business value. As discussed in the introduction, over three quarters of businesses think that data collection and analysis can revolutionise how they work.
As we have modelled business processes, the events we capture describe what has happened in the business. We can use the events to provide alternative views of the data for business intelligence purposes.

Example: Warehouse operations

The current inventory in a warehouse is the accumulation of all the goods that have moved in and out.
We can calculate the current inventory balances from events as goods are moved, but this is not the primary reason for storing the events. If we have the history of movements, we can gather lots of other data too.

We can see how many times we have run out of a particular product, what the average utilisation of the warehouse is, which goods have been in the warehouse the longest, and much more besides.

We can extend the techniques we use to build multiple read stores from events to construct ad hoc reports. The events that we capture now can be used to build reports in the future. We cannot predict the future needs of the business. Nevertheless, by storing what has happened we give ourselves the opportunity to build unknown future reports using this information.

Causal relationships

Adding metadata to the events provides extra information in terms of how they are linked together. For example:

  • Correlation IDs that let us see when a single action causes multiple related events
  • Causation IDs that tell us what caused a particular event to be emitted.

Doing this means that, as well as knowing what has happened in our business, we can start to understand why things have happened.
We may end up in the same state from multiple routes. Being able to trace back from that state through the events that led to it allows us to make decisions based on causal factors.
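
A minimal sketch of this metadata is shown below; the field names are illustrative, though many event stores support something similar out of the box.

```typescript
interface EventMetadata {
  eventId: string;
  correlationId: string; // shared by everything triggered by one initial action
  causationId: string;   // the id of the message that directly caused this event
  timestamp: string;
}

interface DomainEvent<TPayload> {
  type: string;
  payload: TPayload;
  metadata: EventMetadata;
}

// When a handler emits a new event, it copies the correlation id of the message
// it is reacting to and records that message's id as the causation id.
function deriveMetadata(cause: EventMetadata, newEventId: string): EventMetadata {
  return {
    eventId: newEventId,
    correlationId: cause.correlationId,
    causationId: cause.eventId,
    timestamp: new Date().toISOString(),
  };
}
```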

Example: Credit card limits

A credit card company may reduce customer credit limits for a number of reasons. For example:

  • A customer may have missed several payments.
  • A customer who changes address too often might be deemed a credit risk.
  • Sometimes a customer may reduce the limit themselves.

These paths may have different risk profiles and we may want to treat the customers differently in each case. All paths end up with the same event, but knowing why the customer’s credit limit was reduced allows us to categorise them independently.

We may want to categorise Event A -> Event B -> Credit Limit Reduced differently to Event X -> Event Y -> Credit Limit Reduced. Additionally, storing the underlying events allows us to modify our categorisations if our risk models change. If we had just stored a ‘reason’ attribute, this would not be possible.

Utilising DevOps practices to deliver value sooner

We have now described the architecture and the methodologies to help build systems that enable strong revenue growth and capture data that is important to the business.

However, that is meaningless if the software cannot be released into a production environment in good time.

One of the consequences of modelling business processes in software is that the software must change if the business process changes. The modelling techniques discussed earlier in this document enable us to efficiently change the software when the domain changes.

Waiting weeks or months to deploy the changes will often result in lost revenue. When a competitor is first to market, it can be devastating.

To ensure that the ability to release software is not a bottleneck, we employ DevOps practices and apply the principles of continuous delivery.

Continuous delivery

At its heart, agile software development relies on regularly iterating and improving through short feedback cycles. Continuous delivery principles extend this to say that each completed piece of work should be fit to be released to production. That is not to say that all changes are immediately released, but they must be able to be if needed.

To do this we build a pipeline that all code must pass through before being released. If it fails at any stage it progresses no further and must be fixed.

The diagram above shows a typical pipeline. Often, manual steps are present amongst the automation: manual testing before releasing, or a code review before integrating the code into the main branch.
The more automation we have in the pipeline the smoother we can make the process and the more regularly we can release.

It may seem foolish to release to production with little or no manual testing, but the reason releases in many companies are scary is that they are not practiced often enough and contain too many changes.
Releasing more frequently ensures that we practice the process regularly and means each release is small enough to be well understood and be rolled back or corrected if issues are found.

To further mitigate risk we may use techniques similar to those used in A/B testing and only release new features to certain subsets of the user base.

Regular releases are highly correlated with better software delivery performance. Research into software delivery found distinct groups that were categorised into high, medium, and low performers. The high performers were able to deploy on demand, multiple times per day in many cases, and had lower lead times to release, lower mean time to recover, and a lower change failure rate (Forsgren, et al., 2018).

This is not necessarily a causal relationship; simply releasing more often will not reduce your failure percentage. Even so, those organisations who have been able to successfully apply the principles of continuous delivery have reaped the benefits in terms of more regular and successful releases. The high performers had a 70/30 ratio of new work to unplanned work and rework compared with a 58/42 ratio for the low performers.

The same research found that software delivery performance translated into better organisational performance. The high-performing group were twice as likely to exceed goals in profitability, market share, and productivity compared with companies in the low-performing group, those that released software less frequently than once per month.

Deployment automation and use of cloud resources

One of the steps in the pipeline above is automated deployment.

It is important to disambiguate deployment and release. Deployment is the act of installing and configuring software artefacts on a server. Release is making these artefacts live and enabling users to connect to them. Software must be deployed before being released, but deployed software may never be released.
It follows that deployment carries nearly zero risk. Techniques like blue-green deployment mean we can deploy to a production server at any time but delay the release until we are ready to move users to the new functionality. The release itself may consist of nothing more than changing DNS entries.

Cloud services make this model easy to achieve. Servers become fungible, simply built from images when needed. A production release can become as simple as deploying the new functionality onto hardware alongside the current live version, switching DNS to the new servers when we want to release, and then stopping the old servers once we are confident the release is successful.

Modern containerisation and container orchestration solutions such as Docker and Kubernetes respectively mean that we can perform the same process very quickly at a more finely grained level than a physical server or virtual machine.

All deployments and releases must go through the pipeline

If we have built a deployment pipeline for production releases then we must use the same pipeline for releases to test environments. The artefacts that have been tested must be what is deployed and released to production.
Again, cloud services make this simple. We can provision test servers or containers of a suitable specification and perform the testing. Once testing is complete the same artefacts are easily deployed to production servers, ready for release.

If we build a deployment pipeline and automated tooling and don’t use it for releases, it will degrade over time.

The broken windows theory in criminology suggests that visible signs of crime encourage more, often serious, crime over time. If an area looks run down and neglected it is likely not to be heavily policed and a good target for further crime. Fixing broken windows contributes to the view that an area is well maintained.

A failing test in our deployment pipeline is analogous to a broken window. If it prevents us deploying, then it will be fixed quickly, and our environment stays well maintained. If we don’t use the pipeline, then the failing test has no consequences and remains unfixed. This is a sign that our environment is run down; soon more tests fail and eventually the pipeline becomes useless.

It takes significant effort to build and maintain a deployment pipeline like this, but the rewards are considerable. A team at HP applied techniques like those described here over the course of three years (Humble, et al., 2015). In that period, they went from having no automated tests to spending 23% of time writing and maintaining their tests.

The payoff was that they reduced product support from 25% of time to 10%, manual testing from 15% to 5%, and increased innovative development from ~5% of time to ~40%.

Conclusions

The seminal book on Domain Driven Design (Evans, 2003) is subtitled ‘Tackling Complexity in the Heart of Software’. The complexity here is not technical complexity but modelling the business domain.

Software projects that focus on technology to the detriment of modelling the business domain are doomed to failure. While the technology approach described here uses relatively novel techniques, they are a natural fit for describing business processes in software and we have seen great success by using them.

All but the smallest of companies are software companies to some degree, and many of these companies follow complex business processes. We cannot remove business complexity with software, but where it exists we can use the techniques described here to build software models that enable innovation and allow companies to gain a competitive advantage and maximise profitability.

References

Cockburn, Alistair. 2005. Hexagonal Architecture. https://staging.cockburn.us/hexagonal-architecture/. [Online] 2005.
Deheurles, Olivier. 2017. Application Level Consensus. https://weareadaptive.com/wp-content/uploads/2017/04/Application-Level-Consensus.pdf. [Online] 2017.
Evans, Eric. 2003. Domain-Driven Design: Tackling Complexity in the Heart of Software. s.l.: Addison Wesley, 2003. ISBN: 0321125215.
Forsgren, Nicole, Humble, Jez and Kim, Gene. 2018. Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations. s.l.: Trade Select, 2018. ISBN: 1942788339.
Humble, Jez, Molesky, Joanne and O’Reilly, Barry. 2015. Lean Enterprise: How High Performance Organizations Innovate at Scale. s.l.: O’Reilly Media, 2015. ISBN: 1449368425.
Microsoft. 2001. Microsoft Inductive User Interface Guidelines. https://msdn.microsoft.com/en-us/library/ms997506.aspx. [Online] 2001.
Mithas, Sunil, et al. 2012. Information Technology and Firm Profitability: Mechanisms and Empirical Evidence. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.387.4758&rep=rep1&type=pdf. [Online] 2012.
Ovenden, James. 2018. Data Analytics Top Trends In 2018. https://channels.theinnovationenterprise.com/articles/data-analytics-top-trends-in-2018. [Online] 2018.
Wardley, Simon. 2016. On being lost. https://medium.com/wardleymaps/on-being-lost-2ef5f05eb1ec. [Online] 2016.

Acknowledgements

I would like to thank the following people who kindly agreed to review this article: Matt Barrett, Olivier Deheurles, Gregory Andrien, Fergus Keenan, Shaun Laurens, Harsha Sri-Narayana, James Watson and Daniel Smith.

Jon Clare

I am Head of Solutions Design at Adaptive. I’ve been designing and building software systems across the finance and commodities industries for more than fifteen years. I now run Adaptive’s design service, designing solutions to transform our clients’ businesses.

Adaptive

The Real-time trading experts. We are a software consultancy specialising in designing and building real-time trading systems for financial and commodity markets with offices in London, Barcelona, Montreal, and New York.
