Dev Diary: June 2020

H1 2020 - The Coronavirus Lockdown

WELCOME BACK

It has been an unprecedented few months with coronavirus. We hope that you are all well and staying safe.

Despite the global turmoil, and the upheaval of the way that we all live, there has been a lot going on behind the scenes at Dimensions Network. We are going to share the major points with you today.

AN APOLOGY

Before we update you all on the latest developments, the team would like to extend an apology regarding communication, and clarify the state of play going forward.

You, the community, are hugely important to the team, and we understand that without you we wouldn't be at this stage. We are always open to listening to your thoughts and ideas.

However, being present on social media means we have to choose between responding to messages and building the exchange. In the long run, we believe everyone will agree that building the exchange is the right path to pursue.

Going forward, we will, as previously stated, release a new Dev Diary on the completion of every major milestone, and we aim to keep Reyno updated on the latest developments to share with you all whenever possible.

MIGRATION FROM DOCKER TO KUBERNETES

On to the interesting stuff.

In late 2019, we built a one-click exchange installer using Docker Compose. This installer sped up development by letting us make new features live immediately. In the last few months, we have migrated our exchange package from Docker Compose to a Kubernetes orchestration of multiple Docker containers.

Kubernetes is supported on all major cloud providers and gives us the flexibility of using any one of them going forward. For those interested in finding out more about Kubernetes:

'Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It works with a range of container tools, including Docker. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.'
https://en.wikipedia.org/wiki/Kubernetes
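
To give a flavour of what this looks like in practice, below is a minimal, hypothetical sketch using the official kubernetes Python client to deploy one of the exchange's containers. The image, names and replica count are illustrative placeholders, not our actual configuration.

    # Hypothetical sketch; names and image are illustrative, not our real config.
    from kubernetes import client, config

    config.load_kube_config()  # load credentials from the local kubeconfig

    container = client.V1Container(
        name="matching-engine",                     # illustrative name
        image="dimensions/matching-engine:latest",  # illustrative image
        ports=[client.V1ContainerPort(container_port=8080)],
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="matching-engine"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "matching-engine"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "matching-engine"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)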

DATABASE REPLICATION

As illustrated in previous Dev Diaries, we operate with two separate database systems. The first is a real-time database used for our lightning-fast order entry and matching; the second is an archive database which stores all of our historical data.
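
As a rough illustration of this two-tier pattern, the sketch below keeps the hot path writing to a fast in-memory store while a background task drains completed events into a durable archive. All names are illustrative; this is not our production code.

    import asyncio

    hot_store = []  # stand-in for the real-time (order entry / matching) store

    async def record_trade(queue, trade):
        hot_store.append(trade)   # fast path: write to the real-time store
        await queue.put(trade)    # archive write is handed off, off the hot path

    async def archiver(queue):
        while True:
            trade = await queue.get()
            print("archived:", trade)  # stand-in for an archive-database insert
            queue.task_done()

    async def main():
        queue = asyncio.Queue()
        asyncio.create_task(archiver(queue))
        await record_trade(queue, {"pair": "BTC-USD", "price": 9500.0, "qty": 0.1})
        await queue.join()  # wait until the archive has caught up

    asyncio.run(main())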

We have made a number of improvements to the archive database schema which will enable more advanced data analysis. These analysis features will be built into the administrative panel and will let us better understand trading flow and user behaviour. Key areas we are looking at include:

  • Individual user activity
  • Co-operation between users (e.g. wash trading; a first-pass check is sketched after this list)
  • Heat mapping what users are looking at, not necessarily what they are trading
  • Platform load
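
As one example of the kind of analysis this enables, here is a first-pass sketch of the wash trading check mentioned above: flag pairs of users that repeatedly trade against each other. The column names, sample data and threshold are illustrative assumptions, not our actual schema.

    import pandas as pd

    # A handful of example fills; buyer_id / seller_id are illustrative columns.
    trades = pd.DataFrame({
        "buyer_id":  [1, 1, 1, 2, 3],
        "seller_id": [2, 2, 2, 1, 4],
        "quantity":  [5.0, 4.0, 6.0, 5.5, 1.0],
    })

    # Sort each pair so (1, 2) and (2, 1) count as the same relationship.
    pairs = trades.apply(
        lambda t: tuple(sorted((t["buyer_id"], t["seller_id"]))), axis=1)

    # Pairs that trade against each other unusually often get a closer look.
    counts = pairs.value_counts()
    print(counts[counts >= 3])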

ADMIN PANEL

Work has restarted on the administrative interface, and we have completed the components necessary to monitor all order and execution activity. The next stage is to fully integrate all of the administrative APIs and then work on the data analysis projects detailed in the section above.

Our goal is for the admin panel to be a one-stop facility for us to monitor exchange activity and take the necessary actions to ensure smooth operation. There is still more to do, and we will complete it section by section as required.

PERFORMANCE TESTING

A key development stage we have always been looking forward to is performance testing. The only way you can work out the peak performance of a system is to push it to its limits and then make incremental improvements until you reach the performance you need.

The initial run of the performance testing module was a big disappointment: order entry and execution performance was nowhere near what we had hoped for. This was primarily due to the synchronous nature of some of the core exchange modules.

We had written these components to operate synchronously so that we could more easily track and identify bugs as development progressed. We put our heads together and quickly reworked the necessary modules to function asynchronously and scale to millions of simultaneous instances.
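
To illustrate the shape of that change, here is a minimal sketch (not our actual engine code) of the asynchronous pattern: instead of handling orders one at a time, many handlers are in flight concurrently.

    import asyncio

    async def handle_order(order_id):
        # Stand-in for validation, matching and persistence steps that wait
        # on I/O rather than blocking a thread.
        await asyncio.sleep(0.001)

    async def main():
        # Fan out thousands of handlers concurrently instead of serially.
        await asyncio.gather(*(handle_order(i) for i in range(10_000)))

    asyncio.run(main())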

Performance testing is fraught with pitfalls, the biggest being over-optimisation: when you know exactly how something functions, you can design the performance tests to ‘cheat’ the system and give amazing results. With this in mind, we created three testing scenarios (a sketch of how such load profiles might be generated follows the list):

  • Worst case
  • Random / realistic case
  • Best case
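
One plausible way such load profiles might be generated is sketched below. The price patterns chosen for the ‘worst’ and ‘best’ cases are illustrative assumptions about what stresses a matching engine, not our actual test definitions.

    import random

    def make_orders(scenario, n=1000):
        orders = []
        for i in range(n):
            if scenario == "best":
                # All resting bids, spaced apart: nothing matches (insert-only path).
                orders.append({"side": "buy", "price": 100 - i * 0.01, "qty": 1})
            elif scenario == "worst":
                # Alternate buys and sells at one price: every order matches.
                side = "buy" if i % 2 else "sell"
                orders.append({"side": side, "price": 100, "qty": 1})
            else:
                # "Random / realistic": a noisy mix of resting and crossing orders.
                side = random.choice(["buy", "sell"])
                orders.append({"side": side,
                               "price": round(random.gauss(100, 5), 2), "qty": 1})
        return orders

    for scenario in ("worst", "random", "best"):
        print(scenario, make_orders(scenario)[:2])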

We then completed three sets of tests to try to understand where the realistic performance would be. Based on our tests we have demonstrated order entry performance on our office server of approximately:

300,000 orders per second on a single orderbook.

Based on the sharded nature of our system, we expect this to scale roughly linearly with the number of orderbooks until another bottleneck is hit. That is to say, we anticipate performance of:

  • 600,000 orders per second for two orderbooks
  • 900,000 orders per second for three orderbooks
  • ...

By orderbooks we mean trading pairs, such as BTC-USD, BTC-ETH and ETH-USD. So, in this case, there would be three orderbooks.
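
To make the scaling intuition concrete, here is a toy sketch of per-orderbook sharding: each trading pair gets its own independent engine instance, so adding a pair adds capacity rather than contending for a single book. Class and function names are illustrative, not our actual code.

    class MatchingEngine:
        """Toy stand-in for a per-orderbook engine instance (one shard)."""
        def __init__(self, pair):
            self.pair = pair
            self.resting = []

        def submit(self, order):
            self.resting.append(order)  # real matching logic elided

    engines = {}  # one independent engine per trading pair

    def route(order):
        pair = order["pair"]  # e.g. "BTC-USD"
        engines.setdefault(pair, MatchingEngine(pair)).submit(order)

    route({"pair": "BTC-USD", "side": "buy", "price": 9500.0, "qty": 0.1})
    route({"pair": "ETH-USD", "side": "sell", "price": 240.0, "qty": 1.0})
    print({pair: len(e.resting) for pair, e in engines.items()})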

There are lots of other performance metrics we have been looking at, including the optimal number of CPU cores, memory usage and disk space growth. We have not yet settled on any firm numbers for these, and we will take another look once we have had time to reflect on the data and our approach. We will include this information in a later Dev Diary once we are happy that the numbers are clear and representative.

There is still lots of room for improvement, and it will be interesting to see how much better these numbers will be on a state-of-the-art cloud provider such as AWS, instead of our local office server. We have identified some specific areas which we are planning to upgrade in the future, but at this point we are happy that we have solved the key performance bottlenecks.

API INTERFACE LIBRARIES

We have started writing a number of interface libraries which can be used to access our APIs. The first is a Python module which will be freely available for developers to quickly and simply interface with our exchange. The added benefit of writing these libraries is that it lets us better understand how our users will interface with the exchange, and make small tweaks to the APIs here and there to improve flow.
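
As a hedged illustration of the kind of call flow such a library might offer, here is a minimal sketch built on the requests package. The class name, endpoint path, URL and authentication header are all hypothetical placeholders, not our published API.

    import requests

    class DimensionsClient:
        """Hypothetical sketch; names, endpoints and auth are illustrative."""
        def __init__(self, base_url, api_key):
            self.base_url = base_url.rstrip("/")
            self.session = requests.Session()
            self.session.headers["X-API-Key"] = api_key  # assumed auth scheme

        def place_order(self, pair, side, price, quantity):
            resp = self.session.post(self.base_url + "/orders", json={
                "pair": pair, "side": side, "price": price, "quantity": quantity,
            })
            resp.raise_for_status()
            return resp.json()

    # Usage (against a placeholder URL):
    # client = DimensionsClient("https://api.example.com", "my-api-key")
    # client.place_order("BTC-USD", "buy", 9500.0, 0.1)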

LOOKING FORWARD

The migration to Kubernetes is a big step and has given us a flexible deployment package. In the following months, we will focus on improving our Kubernetes deployment package and making consistent improvements to all areas of the exchange. Our goal is to always have a production-ready build of the exchange platform so that when we make changes, we can see the improvements immediately.

As always with the Dimensions Network team, it is build, build, build right now and for the foreseeable future. We are making this happen, one step at a time, with a threadbare team and minimal funds.

As mentioned above, bear with us, ask questions and stay positive - we’re getting closer and closer to the goal that we all share: the launch of Dimensions Network.

Thanks for being a part of our community!
