RonDB is a fork of MySQL NDB Cluster, one of the storage engines supported by the MySQL server. Most importantly, it is a storage engine that is distributed by design to support high availability. As an in-memory database, it is also optimized for low latency and high throughput. At Hopsworks, these are exactly the properties we need to support our Feature Store, so that e.g. querying features as input to ML models happens near-instantaneously.
We forked MySQL NDB Cluster because we wanted its performance, but also wanted to make it easier to use. Using it requires knowledge of its architecture, and maxing out its capabilities demands a deep technical understanding of its internals. At Hopsworks we saw the chance to build a managed service around it, so that not only the Feature Store and HopsFS could benefit from it, but eventually any developer that requires a database with its outstanding characteristics.
Today, RonDB is available as a managed service on managed.hopsworks.ai in conjunction with the Feature Store, where features such as online scaling and online software upgrades are continuously being added. However, this still requires the user to have cloud credentials and to sign up for Hopsworks. The next logical step in making RonDB even more accessible to new developers was therefore an option to create a standalone RonDB cluster locally with Docker Compose. This is what we describe in this blog post; the corresponding codebase lives in the separate repository logicalclocks/rondb-docker, which is a fork and complete rewrite of the mysql/mysql-docker repository.
At its core, this repository is a bash script which creates a docker-compose file with a configurable number of management, data node, MySQL server and API/benchmarking containers. It generates and mounts the required configuration files and creates volumes for all log and data directories. The user can choose the RonDB version by referencing any of the RonDB tarballs available on repo.hops.works/master. The Dockerfile is identical for all containers and mimics the directory structure of the VMs that we spawn for RonDB clusters on hopsworks.ai.
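To give an idea of the interface, here is a hypothetical invocation of the script; the script name and flag spellings below are assumptions made for illustration, so consult the repository's README for the actual arguments:

```bash
# Hypothetical invocation: creates a cluster with 1 management server,
# 2 data nodes, 1 MySQL server and 1 API/benchmarking container.
# The version can reference any RonDB tarball on repo.hops.works/master.
./run.sh \
  --rondb-version 21.04.9 \
  --num-mgm-nodes 1 \
  --num-data-nodes 2 \
  --num-mysql-nodes 1 \
  --num-api-nodes 1
```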
The following shows what a generated docker-compose file can look like for such a configuration.
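Since the exact output depends on the arguments passed to the script, the snippet below is a hand-written sketch rather than verbatim output; the service names, image tag, mounted paths and resource limits are all assumptions:

```bash
# Illustrative sketch of a generated file for 1 management server, 2 data
# nodes and 1 MySQL server. The real script derives names, paths and limits
# from its arguments and the selected RonDB tarball.
cat <<'EOF' > docker-compose.yml
version: "3.8"
services:
  mgmd_1:
    image: rondb:21.04.9
    command: ["ndb_mgmd", "--nodaemon", "-f", "/srv/hops/mysql-cluster/config.ini"]
    volumes:
      - ./autogenerated_files/mgmd_1:/srv/hops/mysql-cluster
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 200M
  ndbd_1:
    image: rondb:21.04.9
    command: ["ndbmtd", "--nodaemon", "--ndb-connectstring=mgmd_1:1186"]
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 2G
  ndbd_2:
    image: rondb:21.04.9
    command: ["ndbmtd", "--nodaemon", "--ndb-connectstring=mgmd_1:1186"]
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 2G
  mysqld_1:
    image: rondb:21.04.9
    command: ["mysqld", "--ndbcluster", "--ndb-connectstring=mgmd_1:1186"]
    ports:
      - "3306:3306"
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 1G
EOF
```

Note how every service carries an explicit memory limit; the next paragraph explains why.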
Memory management for the containers was one of the main challenges in creating this repository. The allocated memory is kept in check both by the Docker Compose field “deploy.resources” and by the auto-generated configuration file for RonDB (config.ini).
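As a sketch of how the two interact, consider the data nodes: their memory budget in config.ini must leave headroom below the container limit set via deploy.resources. The values below are illustrative examples, not the script's actual defaults:

```bash
# Illustrative only: a data node section of the generated config.ini.
# DataMemory (in-memory row storage) must stay well below the container's
# memory limit to leave room for buffers, threads and the OS.
cat <<'EOF' > config.ini
[ndbd default]
NoOfReplicas=2
DataMemory=500M
EOF
```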
The following image shows the resource usage of this cluster using the Docker Desktop extension “Resource Usage”. Note that the total CPU percentage of 14.61% is out of 1000%, corresponding to 10 CPU cores. Also note that the cluster is not under load here, which is why the resource usage is low.
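For readers not using Docker Desktop, a similar snapshot is available from the standard Docker CLI:

```bash
# One-off snapshot of per-container CPU and memory usage.
docker stats --no-stream \
  --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```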
We have three main use cases in mind for outside developers using this repository:
In terms of benchmarking, we have supplied all the files and commands needed to easily test RonDB with Sysbench or DBT2, and we will also support YCSB in the future. Once again, the directory structure is identical to the structure on hopsworks.ai, so the RonDB documentation on benchmarking can be followed here as well. Whilst benchmark performance will be far better on RonDB clusters with large, separate VMs per node, a developer can now quickly become acquainted with benchmarking before paying for VMs.
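As a taste of the workflow, here is a hypothetical session modeled on the RonDB benchmarking documentation; the container name, directory layout and wrapper script are assumptions and may differ:

```bash
# Open a shell in the API/benchmarking container (the name is an assumption;
# list the actual container names with `docker ps`).
docker exec -it api_1 bash

# Inside the container: run a Sysbench benchmark from its prepared directory.
# Path and script name are modeled on the RonDB docs, not guaranteed.
cd /home/mysql/benchmarks/sysbench_single
./bench_run.sh --verbose --default-directory $(pwd)
```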
For the RonDB team itself, this repository has become a building block that accelerates testing of both standalone and managed RonDB. In coming iterations, we will use it to showcase new applications we have developed on top of the database, and to let users experiment with managed RonDB locally.
The flagship application we are currently working on is the REST API server, an alternative to the MySQL server that allows users to run batched operations against RonDB in a key-value manner. A managed RonDB Docker cluster, on the other hand, will allow users to evaluate online scaling, software upgrades and reconciliation locally. Reconciliation enables a self-healing cluster that always strives towards a desired state, similar to how Kubernetes operates.
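To illustrate the key-value style of access, here is a hypothetical request against such a REST API server; the port, endpoint path and JSON shape are assumptions for illustration, not the server's published contract:

```bash
# Hypothetical primary-key read of row id=1 from mydb.mytable via the REST
# API server, returning the row as JSON instead of going through mysqld.
curl -s -X POST http://localhost:4406/0.1.0/mydb/mytable/pk-read \
  -H 'Content-Type: application/json' \
  -d '{"filters": [{"column": "id", "value": "1"}]}'
```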
To follow up on this blog post, we have created a quick demo which shows how to use this repository to create a cluster and run benchmarks on it: