Monday, May 1, 2017

The birth of RocksDB-Cloud

I have been working on an open source project called RocksDB-Cloud for the last few months. What does it do and why did we build it?
You might be using RocksDB on your application servers or database servers on a public cloud service like AWS or Azure. RocksDB is a high performance embedded storage engine. But when your server machine instance dies and restarts on a different machine in the cloud, you lose all your data. This is painful, and many engineering teams have built their own custom replication code around RocksDB to protect against this scenario. RocksDB-Cloud provides a ready-made solution to this problem, so that you do not have to write any custom code to make RocksDB data durable and available.
RocksDB-Cloud provides three main advantages for Cloud environments:
  1. A RocksDB-Cloud instance is durable. It continuously and automatically replicates db data and metadata to cloud storage (e.g. AWS S3). In the event that the RocksDB-Cloud machine dies, another process on any other EC2 machine can reopen the same RocksDB-Cloud database.
  2. A RocksDB-Cloud instance is cloneable. RocksDB-Cloud supports a primitive called zero-copy-clone that allows another instance of RocksDB-Cloud on another machine to clone an existing db. Both the master and slave RocksDB-Cloud instances can run in parallel, and they share a common set of database files.
  3. A RocksDB-Cloud instance automatically places hot data in SSD and cold data in cloud storage. The entire database storage footprint need not be resident on costly SSD: the cloud storage contains the entire database, while the local storage contains only the files that are in the working set.
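To make the zero-copy-clone idea above concrete, here is a toy C++ sketch (all names are illustrative, not the real RocksDB-Cloud API): a database image is modeled as a manifest of immutable sst files, and cloning copies only the manifest, so both instances share the same underlying files until one of them writes new files of its own.

```cpp
#include <memory>
#include <string>
#include <vector>

// Toy model of zero-copy-clone (illustration only): a database "image"
// is a manifest of immutable sst files that live in cloud storage.
struct SstFile {
    std::string name;
    std::string data;  // stands in for the file contents in cloud storage
};

struct DbImage {
    std::vector<std::shared_ptr<const SstFile>> manifest;
};

// A clone shares every existing file with its source: O(#files) pointers
// are copied, but zero bytes of sst data are duplicated.
inline DbImage ZeroCopyClone(const DbImage& src) {
    return DbImage{src.manifest};
}
```

Because sst files are immutable once written, sharing them between the master and its clones is safe; each instance only ever adds new files.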
RocksDB-Cloud is an open-source C++ library that brings the power of RocksDB to AWS, Google Cloud and Microsoft Azure. It leverages the power of RocksDB to provide fast key-value access to data stored in Flash and RAM systems. It provides data durability even in the face of machine failures by integrating with cloud services like AWS S3. It offers a cost-effective way to utilize the rich hierarchy of storage services (based on RAM, NVMe, SSD, disk, cold storage, etc.) that are offered by most cloud providers.
Compatibility with RocksDB
RocksDB-Cloud is API-compatible, data-format-compatible and license-compatible with RocksDB, which means that your applications do not have to change if you move from RocksDB to RocksDB-Cloud.
Workload categories
There is a category of workload where RocksDB is used to tail data from a distributed log storage system. In this case, the RocksDB write-ahead-log is switched off and the application log sits in front of the database. The RocksDB-Cloud library persists every new sst file to cloud storage. Reads occur by demand-paging relevant data blocks from cloud storage into a locally attached SSD-based persistent cache. This is shown in the following picture.
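The demand-paging path described above can be sketched as a simple read-through cache (a toy model with made-up names, not the actual implementation): the full database lives in cloud storage, and only the blocks that are actually read get pulled into the local SSD cache.

```cpp
#include <map>
#include <string>

// Toy sketch of demand paging: reads check the local SSD-backed cache
// first and fall back to cloud storage on a miss, installing the fetched
// block locally so later reads are served from SSD.
class BlockCache {
  public:
    explicit BlockCache(const std::map<std::string, std::string>& cloud)
        : cloud_(cloud) {}

    // Returns the block, faulting it in from "cloud storage" on a miss.
    std::string Get(const std::string& block_id) {
        auto it = local_.find(block_id);
        if (it != local_.end()) return it->second;      // local SSD hit
        const std::string& data = cloud_.at(block_id);  // page in from cloud
        local_[block_id] = data;
        return data;
    }

    bool CachedLocally(const std::string& block_id) const {
        return local_.count(block_id) != 0;
    }

  private:
    const std::map<std::string, std::string>& cloud_;  // entire database
    std::map<std::string, std::string> local_;         // working set only
};
```

This is why the local SSD footprint tracks the working set rather than the full database size.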

There is another category of workload where RocksDB is used as a read/write datastore: applications can issue gets and puts to the datastore. In this case, RocksDB-Cloud persists the write-ahead-log into a cloud-based logging system like AWS Kinesis. A slave RocksDB-Cloud instance can be configured to tail this write-ahead-log and keep itself updated. The following picture shows how a clone instance does this: it restores a base image from the cloud storage and then keeps itself up-to-date by tailing the write-ahead-logs.
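A minimal sketch of that catch-up step (illustrative names only; the real system tails a log service like AWS Kinesis rather than an in-memory vector): the clone restores a base image and then replays every logged write, in order, on top of it.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Toy sketch of how a clone catches up with its master.
using KvStore = std::map<std::string, std::string>;
using LogRecord = std::pair<std::string, std::string>;  // key -> new value

// Start from the base image restored from cloud storage, then apply every
// write-ahead-log record published after that image was taken.
KvStore CatchUp(KvStore base, const std::vector<LogRecord>& tail) {
    for (const auto& rec : tail) {
        base[rec.first] = rec.second;  // replay each logged write in order
    }
    return base;
}
```

Replaying in log order guarantees the clone converges to the same state the master had when the last record was written.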

These two workloads are described in greater detail in a set of architecture slides that I recently presented at PerconaLive 2017.

In the current implementation, RocksDB-Cloud supports only AWS S3, but wouldn't it be cool if we could have precisely the same API on Microsoft Azure? That is precisely what Min Wei, an engineer from Microsoft, is working on. We are working together to build something useful.

If you plan to tinker with RocksDB-Cloud, please start with this example. Come join us in our Google group and we can all hack together!

Wednesday, April 26, 2017

Research topics in RocksDB

I have been asked by various students and professors about research and development ideas related to RocksDB. I remember the first time I was asked about this was by a student at UC Berkeley when I presented at the AMPLab seminar in 2013. I have been accumulating a list of these topics in my mind since that time, and here it is:

  • Time Series: RocksDB stores keys and values in the database. RocksDB uses delta-encoding of keys to reduce the storage footprint. Can we get better compressibility if the key-prefix is of a specific type? If the key-prefix is a timestamp, can we use sophisticated methods to achieve better storage efficiency? Algorithms along the lines of the delta-of-delta encoding schemes described in the Gorilla paper are possible candidates to consider. That paper also describes how an XOR scheme can be used to compress values that are floating points.
  • Byte Addressable Storage: There are storage devices that are byte-addressable, and RocksDB's PlainTable format is designed to be used for these types of storage media. However, supporting delta-encoding of keys and supporting compression (e.g. gzip, zlib, etc.) are a challenge in this data format. Design and implement delta-encoding and compression for the PlainTable format.
  • Read-triggered compactions: New writes to the db cause new files to be created, which in turn triggers a compaction process. In the current implementation, all compactions are triggered by these writes. However, there are scenarios that can benefit when read/scan requests trigger a compaction. For example, if a majority of read-requests need to check for a key in multiple files, then it might be better to compact all those files so that succeeding read-requests can be served with fewer reads from storage. Design and implement read-triggered compactions.
  • Tiered storage: RocksDB has a feature that allows caching data in a persistent storage cache. This means that sst files that reside on slow remote storage can be brought in on demand and cached on the specified persistent storage cache. The current implementation treats the persistent storage cache as a single homogeneous tier. However, there are scenarios where multiple storage systems (with different access latencies) need to be configured for caching. For example, suppose all your sst files are on network storage, and you want to use both locally attached spinning disks and locally attached SSDs as part of your persistent storage cache. Enhance RocksDB to support more than one tier of storage for its persistent storage cache.
  • Versioned store: RocksDB always keeps the latest value for a key. Enhance RocksDB so that it can store and allow access to n most recent updates to a key.
In a future blog post, I will describe some of the advantages of running RocksDB on the cloud (e.g. AWS, Azure, etc). Stay tuned.