Posted to hdfs-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2010/01/13 17:50:54 UTC
[jira] Created: (HDFS-895) Allow hflush/sync to occur in parallel with new writes to the file
Allow hflush/sync to occur in parallel with new writes to the file
------------------------------------------------------------------
Key: HDFS-895
URL: https://issues.apache.org/jira/browse/HDFS-895
Project: Hadoop HDFS
Issue Type: Improvement
Components: hdfs client
Reporter: dhruba borthakur
In the current trunk, the HDFS client methods writeChunk() and hflush/sync are synchronized. This means that if an hflush/sync is in progress, an application cannot write data to the HDFS client buffer. This reduces the write throughput of the transaction log in HBase.
The hflush/sync should allow new writes to happen to the HDFS client even when an hflush/sync is in progress. It can record the seqno of the message for which it should receive the ack, indicate to the DataStreamer thread to start flushing those messages, exit the synchronized section, and just wait for that ack to arrive.
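A minimal sketch of the proposed scheme (this is illustrative pseudocode, not the actual DFSOutputStream implementation; the class and method names below are hypothetical): hflush records the seqno of the last queued packet while holding the lock, then waits for the ack outside the long critical section, so concurrent writeChunk() calls only contend for the brief enqueue step.

```java
// Hypothetical sketch of parallel hflush: writers and the flusher share a
// lock only for short bookkeeping sections, never across the ack wait.
class FlushableStream {
    private final Object lock = new Object();
    private long lastQueuedSeqno = -1;  // highest seqno handed to the streamer
    private long lastAckedSeqno = -1;   // highest seqno acked by the pipeline

    // Called by the application; only briefly synchronized to enqueue.
    void writeChunk(byte[] chunk) {
        synchronized (lock) {
            lastQueuedSeqno++;          // stand-in for queueing a packet
        }
    }

    // Called by the response-processing thread when an ack arrives.
    void receiveAck(long seqno) {
        synchronized (lock) {
            lastAckedSeqno = seqno;
            lock.notifyAll();
        }
    }

    // Record the target seqno under the lock, then wait for the ack;
    // lock.wait() releases the monitor, so new writeChunk() calls proceed.
    void hflush() throws InterruptedException {
        long target;
        synchronized (lock) {
            target = lastQueuedSeqno;   // flush everything queued so far
        }
        synchronized (lock) {
            while (lastAckedSeqno < target) {
                lock.wait();
            }
        }
    }

    long acked() {
        synchronized (lock) {
            return lastAckedSeqno;
        }
    }
}
```

The key point is that the wait loop holds no lock that writeChunk() needs for any longer than the seqno comparison itself, so write throughput is no longer gated on the flush round trip.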
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.