Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2021/05/10 12:01:00 UTC

[GitHub] [ozone] adoroszlai commented on a change in pull request #2203: HDDS-5151. Support ByteBuffer read in OzoneInputStream

adoroszlai commented on a change in pull request #2203:
URL: https://github.com/apache/ozone/pull/2203#discussion_r629298455



##########
File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
##########
@@ -507,4 +509,73 @@ private void handleReadError(IOException cause) throws IOException {
   public synchronized List<ChunkInputStream> getChunkStreams() {
     return chunkStreams;
   }
+
+  @Override
+  public synchronized int read(ByteBuffer byteBuffer) throws IOException {
+    if (byteBuffer == null) {
+      throw new NullPointerException();
+    }
+    int bufferLen = byteBuffer.remaining();
+    if (bufferLen == 0) {
+      return 0;
+    }
+
+    if (!initialized) {
+      initialize();
+    }
+
+    checkOpen();
+    int totalReadLen = 0;
+    int bufferLimit = byteBuffer.limit();
+    while (bufferLen > 0) {
+      // if we are at the last chunk and have read the entire chunk, return
+      if (chunkStreams.size() == 0 ||
+          (chunkStreams.size() - 1 <= chunkIndex &&
+              chunkStreams.get(chunkIndex)
+                  .getRemaining() == 0)) {
+        return totalReadLen == 0 ? EOF : totalReadLen;
+      }
+
+      // Get the current chunkStream and read data from it
+      ChunkInputStream current = chunkStreams.get(chunkIndex);
+      int numBytesToRead = Math.min(bufferLen, (int)current.getRemaining());
+      // change buffer limit
+      if (numBytesToRead < bufferLen) {
+        byteBuffer.limit(byteBuffer.position() + numBytesToRead);
+      }
+      int numBytesRead;
+      try {
+        numBytesRead = current.read(byteBuffer);
+        retries = 0; // reset retries after successful read
+      } catch (StorageContainerException e) {
+        if (shouldRetryRead(e)) {
+          handleReadError(e);
+          continue;
+        } else {
+          throw e;
+        }
+      } finally {
+        // restore buffer limit
+        if (numBytesToRead < bufferLen) {
+          byteBuffer.limit(bufferLimit);
+        }
+      }
+      if (numBytesRead != numBytesToRead) {
+        // This implies that there is either data loss or corruption in the
+        // chunk entries. Even EOF in the current stream would be covered in
+        // this case.
+        throw new IOException(String.format(
+            "Inconsistent read for chunkName=%s length=%d numBytesToRead= %d " +
+                "numBytesRead=%d", current.getChunkName(), current.getLength(),
+            numBytesToRead, numBytesRead));
+      }
+      totalReadLen += numBytesRead;
+      bufferLen -= numBytesRead;
+      if (current.getRemaining() <= 0 &&
+          ((chunkIndex + 1) < chunkStreams.size())) {
+        chunkIndex += 1;
+      }
+    }
+    return totalReadLen;
+  }
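
As a minimal usage sketch (not part of the quoted diff or the PR itself): a caller could drive the new read(ByteBuffer) overload roughly as below. The countBytes(...) helper is a hypothetical name, and the loop only assumes the contract shown above (-1 on EOF, otherwise up to remaining() bytes are written into the buffer).

  import java.io.IOException;
  import java.nio.ByteBuffer;
  import org.apache.hadoop.hdds.scm.storage.BlockInputStream;

  // Sketch: reads the whole block in 4 KB slices and returns the byte count.
  static long countBytes(BlockInputStream in) throws IOException {
    ByteBuffer buf = ByteBuffer.allocate(4096);   // heap or direct buffer
    long total = 0;
    int n;
    while ((n = in.read(buf)) != -1) {            // -1 signals end of block
      total += n;
      buf.clear();                                // reuse the buffer
    }
    return total;
  }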

Review comment:
       Thanks @ChenSammi for the update. The new implementation using the strategy pattern looks good.
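
For context, a minimal sketch of what such a reader-strategy split can look like, assuming illustrative names (ByteReaderStrategy, ByteArrayReader, ByteBufferReader) that need not match the PR: both destinations are hidden behind one interface, so the chunk-walking loop shown in the diff above only has to be written once.

import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

/** Hides whether the read destination is a byte[] or a ByteBuffer. */
interface ByteReaderStrategy {
  int getTargetLength();                        // bytes still wanted by caller
  int readFromBlock(InputStream is, int len) throws IOException;
}

/** Destination is a byte[] slice. */
class ByteArrayReader implements ByteReaderStrategy {
  private final byte[] buf;
  private int off;
  private int targetLen;

  ByteArrayReader(byte[] buf, int off, int len) {
    this.buf = buf;
    this.off = off;
    this.targetLen = len;
  }

  @Override
  public int getTargetLength() {
    return targetLen;
  }

  @Override
  public int readFromBlock(InputStream is, int len) throws IOException {
    int n = is.read(buf, off, len);
    if (n > 0) {
      off += n;
      targetLen -= n;
    }
    return n;
  }
}

/** Destination is a ByteBuffer; its position advances as bytes are put. */
class ByteBufferReader implements ByteReaderStrategy {
  private final ByteBuffer buf;

  ByteBufferReader(ByteBuffer buf) {
    this.buf = buf;
  }

  @Override
  public int getTargetLength() {
    return buf.remaining();
  }

  @Override
  public int readFromBlock(InputStream is, int len) throws IOException {
    // Simple copy through a temporary array; a real implementation may read
    // directly into the ByteBuffer when the underlying stream supports it.
    byte[] tmp = new byte[len];
    int n = is.read(tmp, 0, len);
    if (n > 0) {
      buf.put(tmp, 0, n);
    }
    return n;
  }
}

With this kind of split, the byte[] and ByteBuffer entry points each just wrap their destination in a strategy and can share the same loop that advances chunkIndex, enforces per-chunk limits, and handles read retries.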




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org