Posted to hdfs-dev@hadoop.apache.org by "Bharat Viswanadham (JIRA)" <ji...@apache.org> on 2019/05/09 20:40:00 UTC

[jira] [Created] (HDDS-1512) Implement DoubleBuffer in OzoneManager

Bharat Viswanadham created HDDS-1512:
----------------------------------------

             Summary: Implement DoubleBuffer in OzoneManager
                 Key: HDDS-1512
                 URL: https://issues.apache.org/jira/browse/HDDS-1512
             Project: Hadoop Distributed Data Store
          Issue Type: New Feature
            Reporter: Bharat Viswanadham
            Assignee: Bharat Viswanadham


This Jira is created to implement a DoubleBuffer in OzoneManager to flush transactions to the OM DB.

 
h2. Flushing Transactions to RocksDB:

We propose using an implementation similar to the HDFS EditsDoubleBuffer. We shall flush RocksDB transactions in batches, instead of the current approach of calling rocksdb.put() after every operation. At any given time only one batch will be outstanding for flush, while newer transactions accumulate in memory to be flushed later.

 

The DoubleBuffer will have 2 buffers: one is currentBuffer, and the other is readyBuffer. We add each entry to currentBuffer and check whether another flush call is outstanding. If not, we flush to disk; otherwise we keep adding entries to currentBuffer while the sync is happening.
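The flow above can be sketched as follows. This is a hedged, single-threaded illustration only; the class and method names (DoubleBufferSketch, flushBatch) are illustrative and not the actual OzoneManager API, and the real implementation would run the flush on a separate thread.

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleBufferSketch {
  private List<String> currentBuffer = new ArrayList<>();
  private List<String> readyBuffer = new ArrayList<>();
  private boolean flushOutstanding = false;
  public int flushedEntries = 0;   // counts entries written by flushBatch()

  // Add a transaction; start a flush only if none is outstanding.
  public synchronized void add(String txn) {
    currentBuffer.add(txn);
    if (!flushOutstanding) {
      swapAndFlush();
    }
  }

  // Swap buffers so new entries keep accumulating in currentBuffer
  // while the entries in readyBuffer are flushed as one batch.
  private void swapAndFlush() {
    flushOutstanding = true;
    List<String> tmp = currentBuffer;
    currentBuffer = readyBuffer;
    readyBuffer = tmp;
    flushBatch(readyBuffer);  // stand-in for the RocksDB batch commit
    readyBuffer.clear();
    flushOutstanding = false;
  }

  // Stand-in for the actual write to the OM DB.
  private void flushBatch(List<String> batch) {
    flushedEntries += batch.size();
  }

  public static void main(String[] args) {
    DoubleBufferSketch buf = new DoubleBufferSketch();
    buf.add("createVolume /vol1");
    buf.add("createBucket /vol1/bucket1");
    System.out.println("flushed " + buf.flushedEntries + " entries");
  }
}
```

The swap is the key step: the flusher only ever touches readyBuffer, so incoming requests never block on the disk write.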

 

While a sync is in progress, we shall add new requests to the other buffer, and when we next sync we use *RocksDB batch commit to sync to disk, instead of rocksdb put.*
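To show why the batch commit matters, here is a hedged sketch contrasting per-operation puts with a single batched write. InMemoryStore is a hypothetical stand-in for RocksDB; the real code would queue operations in org.rocksdb.WriteBatch and apply them with one RocksDB.write() call, making the whole batch durable together.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class BatchCommitSketch {
  static class InMemoryStore {
    final Map<String, String> data = new HashMap<>();
    int writeCalls = 0;

    // Per-operation put: one write call per key (the current approach).
    void put(String key, String value) {
      data.put(key, value);
      writeCalls++;
    }

    // Batch commit: one write call applies every queued operation.
    void writeBatch(Map<String, String> batch) {
      data.putAll(batch);
      writeCalls++;
    }
  }

  public static void main(String[] args) {
    InMemoryStore store = new InMemoryStore();
    Map<String, String> batch = new LinkedHashMap<>();
    batch.put("/vol1", "volumeInfo");
    batch.put("/vol1/bucket1", "bucketInfo");
    batch.put("/vol1/bucket1/key1", "keyInfo");
    store.writeBatch(batch);   // three entries, one write call
    System.out.println(store.data.size() + " entries in "
        + store.writeCalls + " write call(s)");
  }
}
```

One write call per batch amortizes the per-write overhead that rocksdb.put() pays on every operation.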

 

Note: If the flush to disk fails on any OM, we shall terminate the OzoneManager so that the OM DBs do not diverge. A flush failure should be treated as a catastrophic failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
