Posted to issues@ozone.apache.org by "jojochuang (via GitHub)" <gi...@apache.org> on 2024/01/11 07:17:08 UTC

[PR] HDDS-9130 rebase on HDDS-9752 [ozone]

jojochuang opened a new pull request, #5980:
URL: https://github.com/apache/ozone/pull/5980

   ## What changes were proposed in this pull request?
   Piggyback PutBlock in WriteChunk if the size to be flushed is small.
   
   Please describe your PR in detail:
   * This PR adds a client-side boolean configuration key, ozone.client.stream.putblock.piggybacking, to merge the PutBlock request into WriteChunk when the chunk size to be flushed to the DataNode is small.
   * The client detects whether the DataNode supports this feature via the DatanodeVersion COMBINED_PUTBLOCK_WRITECHUNK_RPC; the feature does not take effect unless the DataNode supports it (see the sketch below).
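
   As a rough, self-contained sketch of the gating described above (the interface, constant value, and method names here are illustrative stand-ins, not this PR's actual classes):

   ```java
   import java.util.List;

   /** Illustration only: not the actual Ozone client code. */
   class PiggybackGateSketch {

     /** Hypothetical stand-in for the software version each DataNode reports to the client. */
     interface DatanodeInfo {
       int currentVersion();
     }

     /** Stand-in value for DatanodeVersion.COMBINED_PUTBLOCK_WRITECHUNK_RPC; the real value differs. */
     static final int COMBINED_PUTBLOCK_WRITECHUNK_RPC = 3;

     /**
      * Piggybacking is used only when the client config flag is on AND every
      * DataNode in the write pipeline reports a version that understands the
      * combined WriteChunk + PutBlock request.
      */
     static boolean allowPutBlockPiggybacking(boolean piggybackingEnabledInConfig,
                                              List<DatanodeInfo> pipelineNodes) {
       return piggybackingEnabledInConfig
           && pipelineNodes.stream()
               .allMatch(dn -> dn.currentVersion() >= COMBINED_PUTBLOCK_WRITECHUNK_RPC);
     }
   }
   ```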
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-9130
   
   ## How was this patch tested?
   
   Unit tests, integration tests, and manual testing on a real cluster.
   Hsync throughput and latency show a noticeable improvement with this patch.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org


Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "jojochuang (via GitHub)" <gi...@apache.org>.
jojochuang commented on PR #5980:
URL: https://github.com/apache/ozone/pull/5980#issuecomment-1934723458

   @ashishkumar50 




Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "jojochuang (via GitHub)" <gi...@apache.org>.
jojochuang commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1542099955


##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -854,6 +857,28 @@ ContainerCommandResponseProto handleWriteChunk(
       chunkManager
           .writeChunk(kvContainer, blockID, chunkInfo, data, dispatcherContext);
 
+      final boolean isCommit = dispatcherContext.getStage().isCommit();
+      if (isCommit && writeChunk.hasBlock()) {

Review Comment:
   COMBINED_PUTBLOCK_WRITECHUNK_RPC doesn't have the concept of "finalize": once the DataNode runtime is upgraded, its reported version becomes COMBINED_PUTBLOCK_WRITECHUNK_RPC.
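
   Conceptually, the branch in the excerpt above amounts to something like the following (a simplified, self-contained sketch; the body of the branch is not shown in this thread, so the types and names are stand-ins, not the actual handler code):

   ```java
   /** Illustration only: a stand-in for the DataNode-side handling, not the code under review. */
   class CombinedWriteChunkSketch {

     interface ChunkStore { void writeChunk(byte[] data); }
     interface BlockStore { void putBlock(Object blockMetadata); }

     /**
      * After persisting the chunk, if the client piggybacked block metadata onto the
      * WriteChunk request (writeChunk.hasBlock() in the excerpt above) and this is the
      * commit stage, the block metadata is committed in the same call instead of
      * waiting for a separate PutBlock request.
      */
     static void handleWriteChunk(ChunkStore chunks, BlockStore blocks,
                                  byte[] data, Object piggybackedBlock, boolean isCommitStage) {
       chunks.writeChunk(data);
       if (isCommitStage && piggybackedBlock != null) {
         blocks.putBlock(piggybackedBlock);
       }
     }
   }
   ```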





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "ChenSammi (via GitHub)" <gi...@apache.org>.
ChenSammi merged PR #5980:
URL: https://github.com/apache/ozone/pull/5980




Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "jojochuang (via GitHub)" <gi...@apache.org>.
jojochuang commented on PR #5980:
URL: https://github.com/apache/ozone/pull/5980#issuecomment-2025781935

   Thanks. Updated. Those are good suggestions.




Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "ChenSammi (via GitHub)" <gi...@apache.org>.
ChenSammi commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1537114215


##########
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestHSync.java:
##########
@@ -177,11 +178,14 @@ public static void init() throws Exception {
     bucket = TestDataUtil.createVolumeAndBucket(client, layout);
 
     // Enable DEBUG level logging for relevant classes
-    GenericTestUtils.setLogLevel(OMKeyRequest.LOG, Level.DEBUG);
-    GenericTestUtils.setLogLevel(OMKeyCommitRequest.LOG, Level.DEBUG);
-    GenericTestUtils.setLogLevel(OMKeyCommitRequestWithFSO.LOG, Level.DEBUG);
+    GenericTestUtils.setLogLevel(BlockManagerImpl.LOG, Level.DEBUG);
+    GenericTestUtils.setLogLevel(AbstractDatanodeStore.LOG, Level.DEBUG);
     GenericTestUtils.setLogLevel(BlockOutputStream.LOG, Level.DEBUG);
     GenericTestUtils.setLogLevel(BlockInputStream.LOG, Level.DEBUG);
+    GenericTestUtils.setLogLevel(KeyValueHandler.LOG, Level.DEBUG);
+    //GenericTestUtils.setLogLevel(OMKeyRequest.LOG, Level.DEBUG);

Review Comment:
   These 3 lines can be removed.





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "ChenSammi (via GitHub)" <gi...@apache.org>.
ChenSammi commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1537203018


##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -854,6 +857,28 @@ ContainerCommandResponseProto handleWriteChunk(
       chunkManager
           .writeChunk(kvContainer, blockID, chunkInfo, data, dispatcherContext);
 
+      final boolean isCommit = dispatcherContext.getStage().isCommit();
+      if (isCommit && writeChunk.hasBlock()) {

Review Comment:
   Shall we check whether COMBINED_PUTBLOCK_WRITECHUNK_RPC is finalized here?





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "jojochuang (via GitHub)" <gi...@apache.org>.
jojochuang commented on PR #5980:
URL: https://github.com/apache/ozone/pull/5980#issuecomment-1887696919

   @ashishkumar50 




Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "ChenSammi (via GitHub)" <gi...@apache.org>.
ChenSammi commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1537088986


##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -594,14 +610,24 @@ private void handleFlushInternal(boolean close)
     if (totalDataFlushedLength < writtenDataLength) {
       refreshCurrentBuffer();
       Preconditions.checkArgument(currentBuffer.position() > 0);
-      if (currentBuffer.hasRemaining()) {
-        writeChunk(currentBuffer);
-      }
+
       // This can be a partially filled chunk. Since we are flushing the buffer
       // here, we just limit this buffer to the current position. So that next
       // write will happen in new buffer
-      updateFlushLength();
-      executePutBlock(close, false);
+      if (currentBuffer.hasRemaining()) {
+        if (writtenDataLength - totalDataFlushedLength < SMALL_CHUNK_THRESHOLD &&

Review Comment:
   @jojochuang, I think we can always use WriteChunk + PutBlock when currentBuffer.hasRemaining() is true.
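
   Roughly, that would simplify the flush path to something like the sketch below (simplified and self-contained; the interface and method names are stand-ins, not the PR's actual API):

   ```java
   /** Illustration only: the flush-time decision being discussed, not the PR's final code. */
   class FlushDecisionSketch {

     interface BlockWriter {
       void writeChunk(Object buffer);             // plain WriteChunk
       void writeChunkAndPutBlock(Object buffer);  // WriteChunk with PutBlock piggybacked
       void executePutBlock();                     // standalone PutBlock
     }

     /**
      * Whenever there is an unflushed partial chunk and the DataNodes support
      * piggybacking, combine the requests; otherwise fall back to WriteChunk
      * followed by a separate PutBlock.
      */
     static void flushPartialChunk(BlockWriter writer, Object currentBuffer,
                                   boolean hasRemaining, boolean allowPiggybacking) {
       if (hasRemaining && allowPiggybacking) {
         writer.writeChunkAndPutBlock(currentBuffer);
       } else {
         if (hasRemaining) {
           writer.writeChunk(currentBuffer);
         }
         writer.executePutBlock();
       }
     }
   }
   ```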





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "jojochuang (via GitHub)" <gi...@apache.org>.
jojochuang commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1533038347


##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -743,42 +769,95 @@ CompletableFuture<ContainerCommandResponseProto> writeChunkToContainer(
           + ", previous = " + previous);
     }
 
+    final List<ChunkBuffer> byteBufferList;
+    CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
+        validateFuture = null;
     try {
-      XceiverClientReply asyncReply = writeChunkAsync(xceiverClient, chunkInfo,
-          blockID.get(), data, tokenString, replicationIndex);
-      CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
-          respFuture = asyncReply.getResponse();
-      CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
-          validateFuture = respFuture.thenApplyAsync(e -> {
-            try {
-              validateResponse(e);
-            } catch (IOException sce) {
-              respFuture.completeExceptionally(sce);
-            }
-            return e;
-          }, responseExecutor).exceptionally(e -> {
-            String msg = "Failed to write chunk " + chunkInfo.getChunkName() +
-                " into block " + blockID;
-            LOG.debug("{}, exception: {}", msg, e.getLocalizedMessage());
-            CompletionException ce = new CompletionException(msg, e);
-            setIoException(ce);
-            throw ce;
-          });
+      BlockData blockData = null;
+
       if (config.getIncrementalChunkList()) {
         updateBlockDataForWriteChunk(chunk);
       } else {
         containerBlockData.addChunks(chunkInfo);
       }
+      //containerBlockData.addChunks(chunkInfo);
+      if (smallChunk) {
+        Preconditions.checkNotNull(bufferList);
+        byteBufferList = bufferList;
+        bufferList = null;
+        Preconditions.checkNotNull(byteBufferList);
+
+        blockData = containerBlockData.build();
+        LOG.debug("piggyback chunk list {}", blockData);
+
+        if (config.getIncrementalChunkList()) {
+          // remove any chunks in the containerBlockData list.
+          // since they are sent.
+          containerBlockData.clearChunks();

Review Comment:
   No --
   lastChunk is used to calculate the checksum for PutBlock (PutBlock carries metadata),
   whereas containerBlockData is sent by WriteChunk (WriteChunk carries the written data).





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "ChenSammi (via GitHub)" <gi...@apache.org>.
ChenSammi commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1542877541


##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -562,7 +565,18 @@ private void writeChunk(ChunkBuffer buffer)
       bufferList = new ArrayList<>();
     }
     bufferList.add(buffer);
-    writeChunkToContainer(buffer.duplicate(0, buffer.position()));
+  }
+
+  private void writeChunk(ChunkBuffer buffer)
+      throws IOException {
+    writeChunkCommon(buffer);
+    writeChunkToContainer(buffer.duplicate(0, buffer.position()), false);
+  }
+
+  private void writeSmallChunk(ChunkBuffer buffer)

Review Comment:
   Shall we rename this writeSmallChunk to writeChunkAndPutBlock()?





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "jojochuang (via GitHub)" <gi...@apache.org>.
jojochuang commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1533045332


##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -594,14 +608,24 @@ private void handleFlushInternal(boolean close)
     if (totalDataFlushedLength < writtenDataLength) {
       refreshCurrentBuffer();
       Preconditions.checkArgument(currentBuffer.position() > 0);
-      if (currentBuffer.hasRemaining()) {
-        writeChunk(currentBuffer);
-      }
+
       // This can be a partially filled chunk. Since we are flushing the buffer
       // here, we just limit this buffer to the current position. So that next
       // write will happen in new buffer
-      updateFlushLength();
-      executePutBlock(close, false);
+      if (currentBuffer.hasRemaining()) {
+        if (writtenDataLength - totalDataFlushedLength < 100 * 1024 &&
+            allowPutBlockPiggybacking) {
+          updateFlushLength();
+          writeSmallChunk(currentBuffer);
+        } else {
+          writeChunk(currentBuffer);
+          updateFlushLength();
+          executePutBlock(close, false);
+        }
+      } else {
+        updateFlushLength();
+        executePutBlock(close, false);

Review Comment:
   This is the typical case (no hflush), where a PutBlock is sent to update metadata after four 1-MB chunks have been sent via WriteChunk requests.





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "ChenSammi (via GitHub)" <gi...@apache.org>.
ChenSammi commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1537102912


##########
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/ContainerCommandResponseBuilders.java:
##########
@@ -210,6 +211,28 @@ public static ContainerCommandResponseProto getPutFileResponseSuccess(
         .build();
   }
 
+  /**
+   * Gets a response for the putSmallFile RPC.

Review Comment:
   Is the "putSmallFile" mention a copy-paste leftover?





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "jojochuang (via GitHub)" <gi...@apache.org>.
jojochuang commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1542100298


##########
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java:
##########
@@ -854,6 +857,28 @@ ContainerCommandResponseProto handleWriteChunk(
       chunkManager
           .writeChunk(kvContainer, blockID, chunkInfo, data, dispatcherContext);
 
+      final boolean isCommit = dispatcherContext.getStage().isCommit();
+      if (isCommit && writeChunk.hasBlock()) {

Review Comment:
   Which should make sense, because this PR does not change the on-disk format.





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "ChenSammi (via GitHub)" <gi...@apache.org>.
ChenSammi commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1542878999


##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -713,7 +736,7 @@ public boolean isClosed() {
    * @return
    */
   CompletableFuture<ContainerCommandResponseProto> writeChunkToContainer(
-      ChunkBuffer chunk) throws IOException {
+      ChunkBuffer chunk, boolean smallChunk) throws IOException {

Review Comment:
   Shall we rename this smallChunk to putBlockPiggybacking? 





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "ashishkumar50 (via GitHub)" <gi...@apache.org>.
ashishkumar50 commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1531628026


##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -211,6 +214,20 @@ public BlockOutputStream(
     this.clientMetrics = clientMetrics;
     this.pipeline = pipeline;
     this.streamBufferArgs = streamBufferArgs;
+    this.allowPutBlockPiggybacking = config.getEnablePutblockPiggybacking() &&
+            allDataNodesSupportPiggybacking();
+  }
+
+  boolean allDataNodesSupportPiggybacking() {

Review Comment:
   Make it private.



##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -594,14 +608,24 @@ private void handleFlushInternal(boolean close)
     if (totalDataFlushedLength < writtenDataLength) {
       refreshCurrentBuffer();
       Preconditions.checkArgument(currentBuffer.position() > 0);
-      if (currentBuffer.hasRemaining()) {
-        writeChunk(currentBuffer);
-      }
+
       // This can be a partially filled chunk. Since we are flushing the buffer
       // here, we just limit this buffer to the current position. So that next
       // write will happen in new buffer
-      updateFlushLength();
-      executePutBlock(close, false);
+      if (currentBuffer.hasRemaining()) {
+        if (writtenDataLength - totalDataFlushedLength < 100 * 1024 &&

Review Comment:
   Better to define a constant for 100 * 1024.



##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -594,14 +608,24 @@ private void handleFlushInternal(boolean close)
     if (totalDataFlushedLength < writtenDataLength) {
       refreshCurrentBuffer();
       Preconditions.checkArgument(currentBuffer.position() > 0);
-      if (currentBuffer.hasRemaining()) {
-        writeChunk(currentBuffer);
-      }
+
       // This can be a partially filled chunk. Since we are flushing the buffer
       // here, we just limit this buffer to the current position. So that next
       // write will happen in new buffer
-      updateFlushLength();
-      executePutBlock(close, false);
+      if (currentBuffer.hasRemaining()) {
+        if (writtenDataLength - totalDataFlushedLength < 100 * 1024 &&
+            allowPutBlockPiggybacking) {
+          updateFlushLength();
+          writeSmallChunk(currentBuffer);
+        } else {
+          writeChunk(currentBuffer);
+          updateFlushLength();
+          executePutBlock(close, false);
+        }
+      } else {
+        updateFlushLength();
+        executePutBlock(close, false);

Review Comment:
   Is a PutBlock required here for a small chunk?



##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -743,42 +769,95 @@ CompletableFuture<ContainerCommandResponseProto> writeChunkToContainer(
           + ", previous = " + previous);
     }
 
+    final List<ChunkBuffer> byteBufferList;
+    CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
+        validateFuture = null;
     try {
-      XceiverClientReply asyncReply = writeChunkAsync(xceiverClient, chunkInfo,
-          blockID.get(), data, tokenString, replicationIndex);
-      CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
-          respFuture = asyncReply.getResponse();
-      CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
-          validateFuture = respFuture.thenApplyAsync(e -> {
-            try {
-              validateResponse(e);
-            } catch (IOException sce) {
-              respFuture.completeExceptionally(sce);
-            }
-            return e;
-          }, responseExecutor).exceptionally(e -> {
-            String msg = "Failed to write chunk " + chunkInfo.getChunkName() +
-                " into block " + blockID;
-            LOG.debug("{}, exception: {}", msg, e.getLocalizedMessage());
-            CompletionException ce = new CompletionException(msg, e);
-            setIoException(ce);
-            throw ce;
-          });
+      BlockData blockData = null;
+
       if (config.getIncrementalChunkList()) {
         updateBlockDataForWriteChunk(chunk);
       } else {
         containerBlockData.addChunks(chunkInfo);
       }
+      //containerBlockData.addChunks(chunkInfo);

Review Comment:
   Remove commented code.



##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -743,42 +769,95 @@ CompletableFuture<ContainerCommandResponseProto> writeChunkToContainer(
           + ", previous = " + previous);
     }
 
+    final List<ChunkBuffer> byteBufferList;
+    CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
+        validateFuture = null;
     try {
-      XceiverClientReply asyncReply = writeChunkAsync(xceiverClient, chunkInfo,
-          blockID.get(), data, tokenString, replicationIndex);
-      CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
-          respFuture = asyncReply.getResponse();
-      CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
-          validateFuture = respFuture.thenApplyAsync(e -> {
-            try {
-              validateResponse(e);
-            } catch (IOException sce) {
-              respFuture.completeExceptionally(sce);
-            }
-            return e;
-          }, responseExecutor).exceptionally(e -> {
-            String msg = "Failed to write chunk " + chunkInfo.getChunkName() +
-                " into block " + blockID;
-            LOG.debug("{}, exception: {}", msg, e.getLocalizedMessage());
-            CompletionException ce = new CompletionException(msg, e);
-            setIoException(ce);
-            throw ce;
-          });
+      BlockData blockData = null;
+
       if (config.getIncrementalChunkList()) {
         updateBlockDataForWriteChunk(chunk);
       } else {
         containerBlockData.addChunks(chunkInfo);
       }
+      //containerBlockData.addChunks(chunkInfo);
+      if (smallChunk) {
+        Preconditions.checkNotNull(bufferList);
+        byteBufferList = bufferList;
+        bufferList = null;
+        Preconditions.checkNotNull(byteBufferList);
+
+        blockData = containerBlockData.build();
+        LOG.debug("piggyback chunk list {}", blockData);
+
+        if (config.getIncrementalChunkList()) {
+          // remove any chunks in the containerBlockData list.
+          // since they are sent.
+          containerBlockData.clearChunks();
+        }
+      } else {
+        byteBufferList = null;
+      }
 
+      XceiverClientReply asyncReply = writeChunkAsync(xceiverClient, chunkInfo,
+          blockID.get(), data, tokenString, replicationIndex, blockData);
+      CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
+          respFuture = asyncReply.getResponse();
+      validateFuture = respFuture.thenApplyAsync(e -> {
+        try {
+          validateResponse(e);
+        } catch (IOException sce) {
+          respFuture.completeExceptionally(sce);
+        }
+        // if the ioException is not set, putBlock is successful
+        if (getIoException() == null && smallChunk) {
+          handleSuccessfulPutBlock(e.getWriteChunk().getCommittedBlockLength(),
+              asyncReply, flushPos, byteBufferList);
+        }
+        return e;
+      }, responseExecutor).exceptionally(e -> {
+        String msg = "Failed to write chunk " + chunkInfo.getChunkName() +
+            " into block " + blockID;
+        LOG.debug("{}, exception: {}", msg, e.getLocalizedMessage());
+        CompletionException ce = new CompletionException(msg, e);
+        setIoException(ce);
+        throw ce;
+      });
+      //containerBlockData.addChunks(chunkInfo);

Review Comment:
   Remove commented code.



##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -743,42 +769,95 @@ CompletableFuture<ContainerCommandResponseProto> writeChunkToContainer(
           + ", previous = " + previous);
     }
 
+    final List<ChunkBuffer> byteBufferList;
+    CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
+        validateFuture = null;
     try {
-      XceiverClientReply asyncReply = writeChunkAsync(xceiverClient, chunkInfo,
-          blockID.get(), data, tokenString, replicationIndex);
-      CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
-          respFuture = asyncReply.getResponse();
-      CompletableFuture<ContainerProtos.ContainerCommandResponseProto>
-          validateFuture = respFuture.thenApplyAsync(e -> {
-            try {
-              validateResponse(e);
-            } catch (IOException sce) {
-              respFuture.completeExceptionally(sce);
-            }
-            return e;
-          }, responseExecutor).exceptionally(e -> {
-            String msg = "Failed to write chunk " + chunkInfo.getChunkName() +
-                " into block " + blockID;
-            LOG.debug("{}, exception: {}", msg, e.getLocalizedMessage());
-            CompletionException ce = new CompletionException(msg, e);
-            setIoException(ce);
-            throw ce;
-          });
+      BlockData blockData = null;
+
       if (config.getIncrementalChunkList()) {
         updateBlockDataForWriteChunk(chunk);
       } else {
         containerBlockData.addChunks(chunkInfo);
       }
+      //containerBlockData.addChunks(chunkInfo);
+      if (smallChunk) {
+        Preconditions.checkNotNull(bufferList);
+        byteBufferList = bufferList;
+        bufferList = null;
+        Preconditions.checkNotNull(byteBufferList);
+
+        blockData = containerBlockData.build();
+        LOG.debug("piggyback chunk list {}", blockData);
+
+        if (config.getIncrementalChunkList()) {
+          // remove any chunks in the containerBlockData list.
+          // since they are sent.
+          containerBlockData.clearChunks();

Review Comment:
   Do we need to remove the last chunk here as well, which is updated in `updateBlockDataForWriteChunk` in the case of `IncrementalChunkList`?





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "ChenSammi (via GitHub)" <gi...@apache.org>.
ChenSammi commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1537072570


##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -89,6 +91,8 @@ public class BlockOutputStream extends OutputStream {
   public static final KeyValue FULL_CHUNK_KV =
       KeyValue.newBuilder().setKey(FULL_CHUNK).build();
 
+  // Flushed chunk less than 100KB will combine PutBlock and WriteChunk in the same request.
+  private static final int SMALL_CHUNK_THRESHOLD = 100 * 1024;

Review Comment:
   Can we make this configurable? 
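
   For example, something along these lines (a hypothetical, self-contained sketch; the property key and helper below are illustrative only and not defined by this PR):

   ```java
   import java.util.Properties;

   /** Illustration only: one hypothetical way to expose the threshold, not code from this PR. */
   class PiggybackThresholdConfigSketch {

     /** Hypothetical property name; this PR does not define such a key. */
     static final String THRESHOLD_KEY =
         "ozone.client.stream.putblock.piggybacking.threshold";

     /** Matches the hard-coded SMALL_CHUNK_THRESHOLD value in the excerpt above. */
     static final int DEFAULT_THRESHOLD_BYTES = 100 * 1024;

     static int thresholdBytes(Properties conf) {
       return Integer.parseInt(
           conf.getProperty(THRESHOLD_KEY, String.valueOf(DEFAULT_THRESHOLD_BYTES)));
     }
   }
   ```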





Re: [PR] HDDS-9130. [hsync] Combine WriteData and PutBlock requests into one [ozone]

Posted by "jojochuang (via GitHub)" <gi...@apache.org>.
jojochuang commented on code in PR #5980:
URL: https://github.com/apache/ozone/pull/5980#discussion_r1540078037


##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java:
##########
@@ -594,14 +610,24 @@ private void handleFlushInternal(boolean close)
     if (totalDataFlushedLength < writtenDataLength) {
       refreshCurrentBuffer();
       Preconditions.checkArgument(currentBuffer.position() > 0);
-      if (currentBuffer.hasRemaining()) {
-        writeChunk(currentBuffer);
-      }
+
       // This can be a partially filled chunk. Since we are flushing the buffer
       // here, we just limit this buffer to the current position. So that next
       // write will happen in new buffer
-      updateFlushLength();
-      executePutBlock(close, false);
+      if (currentBuffer.hasRemaining()) {
+        if (writtenDataLength - totalDataFlushedLength < SMALL_CHUNK_THRESHOLD &&

Review Comment:
   Agreed. Will update the PR.


