Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2020/05/27 03:34:39 UTC

[GitHub] [hbase] javierluca opened a new pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

javierluca opened a new pull request #1781:
URL: https://github.com/apache/hbase/pull/1781


   https://issues.apache.org/jira/browse/HBASE-24435
   
   Conflicts:
   	hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
   
   
   Cherry-picked https://github.com/apache/hbase/commit/71ed7033675149956de855b6782e1e22fc908dc8 with a few adjustments to adapt it to the current branch-1.
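   
   In short, the patch has the region server metrics wrapper hold the DFSClient's
   DFSHedgedReadMetrics and surface its two counters. A minimal illustrative sketch
   (the class below is hypothetical; the field and method names match the
   MetricsRegionServerWrapperImpl diff quoted later in this thread):
   
       import org.apache.hadoop.hdfs.DFSHedgedReadMetrics;
   
       // Hypothetical stand-in for MetricsRegionServerWrapperImpl; the metrics
       // object may be null when not running on HDFS, in which case we report 0.
       class HedgedReadMetricsSketch {
         private final DFSHedgedReadMetrics dfsHedgedReadMetrics;
   
         HedgedReadMetricsSketch(DFSHedgedReadMetrics metrics) {
           this.dfsHedgedReadMetrics = metrics;
         }
   
         public long getHedgedReadOps() {
           return dfsHedgedReadMetrics == null ? 0 : dfsHedgedReadMetrics.getHedgedReadOps();
         }
   
         public long getHedgedReadWins() {
           return dfsHedgedReadMetrics == null ? 0 : dfsHedgedReadMetrics.getHedgedReadWins();
         }
       }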


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-634675370


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  14m 15s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   9m  3s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m 35s |  branch-1 passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m 25s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 21s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  branch-1 passed with JDK v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   3m 15s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 47s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javac  |   1m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 39s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m 39s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 54s |  hbase-server: The patch generated 1 new + 92 unchanged - 1 fixed = 93 total (was 93)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedjars  |   3m 59s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   6m  8s |  Patch does not cause any errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  findbugs  |   5m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 33s |  hbase-hadoop-compat in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 42s |  hbase-hadoop2-compat in the patch passed.  |
   | -1 :x: |  unit  | 197m 22s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 25s |  The patch does not generate ASF License warnings.  |
   |  |   | 275m 47s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hbase.client.TestAdmin2 |
   |   | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
   |   | hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint |
   |   | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1781 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 0ba2801f79c0 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1781/out/precommit/personality/provided.sh |
   | git revision | branch-1 / 9f12ef0 |
   | Default Java | 1.7.0_262 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/4/artifact/out/diff-checkstyle-hbase-server.txt |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/4/artifact/out/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/4/testReport/ |
   | Max. process+thread count | 5033 (vs. ulimit of 10000) |
   | modules | C: hbase-hadoop-compat hbase-hadoop2-compat hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/4/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] javierluca edited a comment on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
javierluca edited a comment on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-633963904









[GitHub] [hbase] Reidddddd commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Reidddddd commented on a change in pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#discussion_r430920249



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
##########
@@ -435,4 +440,145 @@ public void checkStreamCapabilitiesOnHdfsDataOutputStream() throws Exception {
     }
   }
 
+  /**
+   * Ugly test that ensures we can get at the hedged read counters in dfsclient.
+   * Does a bit of preading with hedged reads enabled using code taken from hdfs TestPread.
+   */
+  @Test public void testDFSHedgedReadMetrics() throws Exception {
+    HBaseTestingUtility htu = new HBaseTestingUtility();
+    // Enable hedged reads and set it so the threshold is really low.
+    // Most of this test is taken from HDFS, from TestPread.
+    Configuration conf = htu.getConfiguration();
+    conf.setInt(DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THREADPOOL_SIZE, 5);
+    conf.setLong(DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THRESHOLD_MILLIS, 0);
+    conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 4096);
+    conf.setLong(DFSConfigKeys.DFS_CLIENT_READ_PREFETCH_SIZE_KEY, 4096);
+    // Set short retry timeouts so this test runs faster
+    conf.setInt(DFSConfigKeys.DFS_CLIENT_RETRY_WINDOW_BASE, 0);
+    conf.setBoolean("dfs.datanode.transferTo.allowed", false);
+    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
+    // Get the metrics.  Should be empty.
+    DFSHedgedReadMetrics metrics = FSUtils.getDFSHedgedReadMetrics(conf);
+    assertEquals(0, metrics.getHedgedReadOps());
+    FileSystem fileSys = cluster.getFileSystem();
+    try {
+      Path p = new Path("preadtest.dat");
+      // We need > 1 blocks to test out the hedged reads.
+      DFSTestUtil.createFile(fileSys, p, 12 * blockSize, 12 * blockSize,
+        blockSize, (short) 3, seed);
+      pReadFile(fileSys, p);
+      cleanupFile(fileSys, p);
+      assertTrue(metrics.getHedgedReadOps() > 0);
+    } finally {
+      fileSys.close();
+      cluster.shutdown();
+    }
+  }
+
+  // Below is taken from TestPread over in HDFS.
+  static final int blockSize = 4096;
+  static final long seed = 0xDEADBEEFL;
+
+  private void pReadFile(FileSystem fileSys, Path name) throws IOException {
+    FSDataInputStream stm = fileSys.open(name);
+    byte[] expected = new byte[12 * blockSize];
+    Random rand = new Random(seed);
+    rand.nextBytes(expected);
+    // do a sanity check. Read first 4K bytes
+    byte[] actual = new byte[4096];
+    stm.readFully(actual);
+    checkAndEraseData(actual, 0, expected, "Read Sanity Test");
+    // now do a pread for the first 8K bytes
+    actual = new byte[8192];
+    doPread(stm, 0L, actual, 0, 8192);
+    checkAndEraseData(actual, 0, expected, "Pread Test 1");
+    // Now check to see if the normal read returns 4K-8K byte range
+    actual = new byte[4096];
+    stm.readFully(actual);
+    checkAndEraseData(actual, 4096, expected, "Pread Test 2");
+    // Now see if we can cross a single block boundary successfully
+    // read 4K bytes from blockSize - 2K offset
+    stm.readFully(blockSize - 2048, actual, 0, 4096);
+    checkAndEraseData(actual, (blockSize - 2048), expected, "Pread Test 3");
+    // now see if we can cross two block boundaries successfully
+    // read blockSize + 4K bytes from blockSize - 2K offset
+    actual = new byte[blockSize + 4096];
+    stm.readFully(blockSize - 2048, actual);
+    checkAndEraseData(actual, (blockSize - 2048), expected, "Pread Test 4");
+    // now see if we can cross two block boundaries that are not cached
+    // read blockSize + 4K bytes from 10*blockSize - 2K offset
+    actual = new byte[blockSize + 4096];
+    stm.readFully(10 * blockSize - 2048, actual);
+    checkAndEraseData(actual, (10 * blockSize - 2048), expected, "Pread Test 5");
+    // now check that even after all these preads, we can still read
+    // bytes 8K-12K
+    actual = new byte[4096];
+    stm.readFully(actual);
+    checkAndEraseData(actual, 8192, expected, "Pread Test 6");
+    // done
+    stm.close();
+    // check block location caching
+    stm = fileSys.open(name);
+    stm.readFully(1, actual, 0, 4096);
+    stm.readFully(4*blockSize, actual, 0, 4096);

Review comment:
    There are many.
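
    Presumably "many" refers to the arithmetic written without spaces in the
    quoted reads (e.g. 4*blockSize); a hedged sketch of the spaced form, assuming
    that reading of the nit:

        // Assumed intent: spaces around the '*' operator.
        stm.readFully(1, actual, 0, 4096);
        stm.readFully(4 * blockSize, actual, 0, 4096);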







[GitHub] [hbase] HorizonNet commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
HorizonNet commented on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-633959507


   Should the conflicts line be part of the commit message?





[GitHub] [hbase] Apache-HBase commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-634892022


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 47s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   7m 58s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m 19s |  branch-1 passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m  9s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m  0s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  branch-1 passed with JDK v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   2m 40s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  3s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javac  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m 30s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedjars  |   3m 21s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 59s |  Patch does not cause any errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  findbugs  |   4m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 29s |  hbase-hadoop-compat in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 38s |  hbase-hadoop2-compat in the patch passed.  |
   | +1 :green_heart: |  unit  | 128m 50s |  hbase-server in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  The patch does not generate ASF License warnings.  |
   |  |   | 182m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1781 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux b1662a00ae6f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1781/out/precommit/personality/provided.sh |
   | git revision | branch-1 / 9f12ef0 |
   | Default Java | 1.7.0_262 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/5/testReport/ |
   | Max. process+thread count | 4112 (vs. ulimit of 10000) |
   | modules | C: hbase-hadoop-compat hbase-hadoop2-compat hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/5/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Reidddddd commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Reidddddd commented on a change in pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#discussion_r430920890



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
##########
@@ -435,4 +440,145 @@ public void checkStreamCapabilitiesOnHdfsDataOutputStream() throws Exception {
     }
   }
 
+  [... identical to the quote in the previous review comment, up through the 4*blockSize read ...]
+    stm.readFully(7*blockSize, actual, 0, 4096);
+    actual = new byte[3*4096];
+    stm.readFully(0*blockSize, actual, 0, 3*4096);
+    checkAndEraseData(actual, 0, expected, "Pread Test 7");
+    actual = new byte[8*4096];
+    stm.readFully(3*blockSize, actual, 0, 8*4096);
+    checkAndEraseData(actual, 3*blockSize, expected, "Pread Test 8");
+    // read the tail
+    stm.readFully(11*blockSize+blockSize/2, actual, 0, blockSize/2);
+    IOException res = null;
+    try { // read beyond the end of the file
+      stm.readFully(11*blockSize+blockSize/2, actual, 0, blockSize);
+    } catch (IOException e) {
+      // should throw an exception
+      res = e;
+    }
+    assertTrue("Error reading beyond file boundary.", res != null);
+
+    stm.close();
+  }
+
+  private void checkAndEraseData(byte[] actual, int from, byte[] expected, String message) {
+    for (int idx = 0; idx < actual.length; idx++) {
+      assertEquals(message+" byte "+(from+idx)+" differs. expected "+
+          expected[from+idx]+" actual "+actual[idx],
+        actual[idx], expected[from+idx]);
+      actual[idx] = 0;
+    }
+  }
+
+  private void doPread(FSDataInputStream stm, long position, byte[] buffer,
+    int offset, int length) throws IOException {
+    int nread = 0;
+    // long totalRead = 0;

Review comment:
    Just remove it instead of commenting it out, if you don't need the following code.
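
    A hedged sketch of the helper with the commented-out counter dropped. The
    loop body is not quoted anywhere in this thread, so it is assumed here from
    the HDFS TestPread code this test says it borrows from:

        private void doPread(FSDataInputStream stm, long position, byte[] buffer,
            int offset, int length) throws IOException {
          int nread = 0;
          while (nread < length) {
            // Positioned read: does not move the stream's current offset.
            int nbytes = stm.read(position + nread, buffer, offset + nread, length - nread);
            assertTrue("Error in pread", nbytes > 0);
            nread += nbytes;
          }
        }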







[GitHub] [hbase] clarax commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
clarax commented on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-634924542


   Flaky tests passed. Would be nice to fix the <code> block. Otherwise LGTM. 





[GitHub] [hbase] Reidddddd commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Reidddddd commented on a change in pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#discussion_r430948187



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
##########
@@ -435,4 +440,145 @@ public void checkStreamCapabilitiesOnHdfsDataOutputStream() throws Exception {
     }
   }
 
+  [... identical to the two quotes above, through checkAndEraseData ...]
+  private void doPread(FSDataInputStream stm, long position, byte[] buffer,
+    int offset, int length) throws IOException {
+    int nread = 0;
+    // long totalRead = 0;

Review comment:
    No need to keep it the same as branch-2; there are numerous differences already.







[GitHub] [hbase] javierluca commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
javierluca commented on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-633963904


    I see. It seems the line number was not added in the original commit: https://github.com/apache/hbase/commit/71ed7033675149956de855b6782e1e22fc908dc8





[GitHub] [hbase] javierluca commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
javierluca commented on a change in pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#discussion_r430932500



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
##########
@@ -435,4 +440,145 @@ public void checkStreamCapabilitiesOnHdfsDataOutputStream() throws Exception {
     }
   }
 
+  [... identical to the quotes above, through checkAndEraseData ...]
+  private void doPread(FSDataInputStream stm, long position, byte[] buffer,
+    int offset, int length) throws IOException {
+    int nread = 0;
+    // long totalRead = 0;

Review comment:
    I see. I thought it would be better to keep it as similar as possible to branch-2:
    https://github.com/apache/hbase/blob/441935a9d9f0df22f2d1b1c4ee57b13dacc00e7f/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java#L612-L641
    Or master:
    https://github.com/apache/hbase/blob/476cb1623274acc35da8763a40e68bbc3fd9d165/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java#L612-L641
    I was not sure about the policy in this case.
    Removed for now.







[GitHub] [hbase] Reidddddd commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Reidddddd commented on a change in pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#discussion_r430829094



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
##########
@@ -819,6 +832,16 @@ public long getZeroCopyBytesRead() {
     return FSDataInputStreamWrapper.getZeroCopyBytesRead();
   }
 
+  @Override
+  public long getHedgedReadOps() {
+    return this.dfsHedgedReadMetrics == null? 0: this.dfsHedgedReadMetrics.getHedgedReadOps();

Review comment:
    nit: add a space between 'null' and '?', and between '0' and ':'.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
##########
@@ -819,6 +832,16 @@ public long getZeroCopyBytesRead() {
     return FSDataInputStreamWrapper.getZeroCopyBytesRead();
   }
 
+  @Override
+  public long getHedgedReadOps() {
+    return this.dfsHedgedReadMetrics == null? 0: this.dfsHedgedReadMetrics.getHedgedReadOps();
+  }
+
+  @Override
+  public long getHedgedReadWins() {
+    return this.dfsHedgedReadMetrics == null? 0: this.dfsHedgedReadMetrics.getHedgedReadWins();

Review comment:
       ditto

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
##########
@@ -819,6 +832,16 @@ public long getZeroCopyBytesRead() {
     return FSDataInputStreamWrapper.getZeroCopyBytesRead();
   }
 
+  @Override
+  public long getHedgedReadOps() {
+    return this.dfsHedgedReadMetrics == null? 0 : this.dfsHedgedReadMetrics.getHedgedReadOps();

Review comment:
    The space between 'null' and '?' is still missing.
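
    Taken together, the nits ask for the fully spaced ternary:

        // Spaces on both sides of '?' and ':'.
        return this.dfsHedgedReadMetrics == null ? 0 : this.dfsHedgedReadMetrics.getHedgedReadOps();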







[GitHub] [hbase] Apache-HBase commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-634106442


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   7m 15s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |  42m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  52m 53s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m 19s |  branch-1 passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m 11s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m  3s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  branch-1 passed with JDK v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   2m 36s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  0s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javac  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m 18s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 32s |  hbase-server: The patch generated 5 new + 92 unchanged - 1 fixed = 97 total (was 93)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedjars  |   2m 49s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 37s |  Patch does not cause any errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  findbugs  |   4m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 29s |  hbase-hadoop-compat in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 38s |  hbase-hadoop2-compat in the patch passed.  |
   | +1 :green_heart: |  unit  | 127m 19s |  hbase-server in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 40s |  The patch does not generate ASF License warnings.  |
   |  |   | 271m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1781 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux a431e2868dee 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1781/out/precommit/personality/provided.sh |
   | git revision | branch-1 / 3235b56 |
   | Default Java | 1.7.0_262 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/1/artifact/out/diff-checkstyle-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/1/testReport/ |
   | Max. process+thread count | 4055 (vs. ulimit of 10000) |
   | modules | C: hbase-hadoop-compat hbase-hadoop2-compat hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/1/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-634535064


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 27s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   8m 24s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m 23s |  branch-1 passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m 24s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 27s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  branch-1 passed with JDK v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   3m  8s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 39s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javac  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 17s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedjars  |   3m 18s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   5m 19s |  Patch does not cause any errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  findbugs  |   5m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 33s |  hbase-hadoop-compat in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 45s |  hbase-hadoop2-compat in the patch passed.  |
   | +1 :green_heart: |  unit  | 137m  6s |  hbase-server in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  The patch does not generate ASF License warnings.  |
   |  |   | 195m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1781 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux b9735f0917ee 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1781/out/precommit/personality/provided.sh |
   | git revision | branch-1 / 9f12ef0 |
   | Default Java | 1.7.0_262 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/3/testReport/ |
   | Max. process+thread count | 4679 (vs. ulimit of 10000) |
   | modules | C: hbase-hadoop-compat hbase-hadoop2-compat hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/3/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] clarax commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
clarax commented on a change in pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#discussion_r430772485



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
##########
@@ -592,9 +593,9 @@ public void markRegionsRecovering(ServerName server, Set<HRegionInfo> userRegion
    * @return whether log is replaying
    */
   public boolean isLogReplaying() {
-    if (server.getCoordinatedStateManager() == null) return false;
-    return ((BaseCoordinatedStateManager) server.getCoordinatedStateManager())
-        .getSplitLogManagerCoordination().isReplaying();
+    CoordinatedStateManager m = server.getCoordinatedStateManager();
+    if (m == null) return false;

Review comment:
       Checkstyle conflicts need to be fixed by adding braces.
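
    A minimal sketch of the braced form checkstyle expects here:

        CoordinatedStateManager m = server.getCoordinatedStateManager();
        if (m == null) {
          return false;
        }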

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
##########
@@ -1780,4 +1782,47 @@ public static void checkShortCircuitReadBufferSize(final Configuration conf) {
     int hbaseSize = conf.getInt("hbase." + dfsKey, defaultSize);
     conf.setIfUnset(dfsKey, Integer.toString(hbaseSize));
   }
+
+  /**
+   * @param c
+   * @return The DFSClient DFSHedgedReadMetrics instance or null if can't be found or not on hdfs.
+   * @throws IOException

Review comment:
       can be removed

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
##########
@@ -1780,4 +1782,47 @@ public static void checkShortCircuitReadBufferSize(final Configuration conf) {
     int hbaseSize = conf.getInt("hbase." + dfsKey, defaultSize);
     conf.setIfUnset(dfsKey, Integer.toString(hbaseSize));
   }
+
+  /**
+   * @param c

Review comment:
       can be removed.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
##########
@@ -1780,4 +1782,47 @@ public static void checkShortCircuitReadBufferSize(final Configuration conf) {
     int hbaseSize = conf.getInt("hbase." + dfsKey, defaultSize);
     conf.setIfUnset(dfsKey, Integer.toString(hbaseSize));
   }
+
+  /**
+   * @param c
+   * @return The DFSClient DFSHedgedReadMetrics instance or null if can't be found or not on hdfs.
+   * @throws IOException
+   */
+  public static DFSHedgedReadMetrics getDFSHedgedReadMetrics(final Configuration c)
+      throws IOException {
+    if (!isHDFS(c)) return null;

Review comment:
       Checkstyle conflict. Need braces.
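
    Combining the three suggestions above, a sketch of the method head with the
    empty javadoc tags dropped and the early return braced (the rest of the body
    is elided here, unchanged from the patch):

        /**
         * @return The DFSClient DFSHedgedReadMetrics instance or null if can't be
         *         found or not on hdfs.
         */
        public static DFSHedgedReadMetrics getDFSHedgedReadMetrics(final Configuration c)
            throws IOException {
          if (!isHDFS(c)) {
            return null;
          }
          // ... remainder unchanged ...
        }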

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
##########
@@ -40,7 +41,9 @@
 import org.apache.hadoop.hbase.io.hfile.CacheStats;
 import org.apache.hadoop.hbase.wal.WALProvider;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
 import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.hdfs.DFSHedgedReadMetrics;

Review comment:
    This is from 2.4: https://issues.apache.org/jira/browse/HBASE-7509. Should be good.
   

##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
##########
@@ -435,4 +440,146 @@ public void checkStreamCapabilitiesOnHdfsDataOutputStream() throws Exception {
     }
   }
 
+  /**
+   * Ugly test that ensures we can get at the hedged read counters in dfsclient.
+   * Does a bit of preading with hedged reads enabled using code taken from hdfs TestPread.
+   * @throws Exception

Review comment:
       Not necessary.

##########
File path: hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java
##########
@@ -504,6 +504,11 @@ public void getMetrics(MetricsCollector metricsCollector, boolean all) {
               rsWrap.getCompactedCellsSize())
           .addCounter(Interns.info(MAJOR_COMPACTED_CELLS_SIZE, MAJOR_COMPACTED_CELLS_SIZE_DESC),
               rsWrap.getMajorCompactedCellsSize())
+
+          .addCounter(Interns.info(HEDGED_READS, HEDGED_READS_DESC), rsWrap.getHedgedReadOps())
+          .addCounter(Interns.info(HEDGED_READ_WINS, HEDGED_READ_WINS_DESC),
+              rsWrap.getHedgedReadWins())

Review comment:
    Validated from HBASE-15550. LGTM.




+    for (int idx = 0; idx < actual.length; idx++) {
+      assertEquals(message+" byte "+(from+idx)+" differs. expected "+
+          expected[from+idx]+" actual "+actual[idx],
+        actual[idx], expected[from+idx]);
+      actual[idx] = 0;
+    }
+  }
+
+  private void doPread(FSDataInputStream stm, long position, byte[] buffer,
+    int offset, int length) throws IOException {
+    int nread = 0;
+    // long totalRead = 0;

Review comment:
       I see. I agree that these kinds of comments should go away. However, I thought it would be better to keep it as similar as possible to branch-2:
   https://github.com/apache/hbase/blob/441935a9d9f0df22f2d1b1c4ee57b13dacc00e7f/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java#L612-L641
   Or master:
   https://github.com/apache/hbase/blob/476cb1623274acc35da8763a40e68bbc3fd9d165/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java#L612-L641
   I was not sure about the policy in this case.
   Removed for now.
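   For reference, in the branch-2/master versions linked above the helper goes
   on to loop over positioned reads; a sketch of that shape, with the dead
   commented-out bookkeeping removed:

       private void doPread(FSDataInputStream stm, long position, byte[] buffer,
           int offset, int length) throws IOException {
         int nread = 0;
         while (nread < length) {
           // Positioned read: does not move the stream's current offset.
           int nbytes = stm.read(position + nread, buffer, offset + nread, length - nread);
           assertTrue("Error in pread", nbytes > 0);
           nread += nbytes;
         }
       }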




[GitHub] [hbase] Apache-HBase commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-634445710


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 2 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   8m 51s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m 36s |  branch-1 passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m 19s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   4m  8s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  branch-1 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  branch-1 passed with JDK v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   3m 46s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 27s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javac  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 36s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m 36s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 38s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedjars  |   3m 54s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   6m 52s |  Patch does not cause any errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  the patch passed with JDK v1.7.0_262  |
   | +1 :green_heart: |  findbugs  |   5m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 33s |  hbase-hadoop-compat in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 46s |  hbase-hadoop2-compat in the patch passed.  |
   | +1 :green_heart: |  unit  | 136m 25s |  hbase-server in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 19s |  The patch does not generate ASF License warnings.  |
   |  |   | 202m 46s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1781 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux fb91fea02627 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1781/out/precommit/personality/provided.sh |
   | git revision | branch-1 / 9f12ef0 |
   | Default Java | 1.7.0_262 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/2/testReport/ |
   | Max. process+thread count | 4749 (vs. ulimit of 10000) |
   | modules | C: hbase-hadoop-compat hbase-hadoop2-compat hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1781/2/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] apurtell commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
apurtell commented on a change in pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#discussion_r430741232



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
##########
@@ -40,7 +41,9 @@
 import org.apache.hadoop.hbase.io.hfile.CacheStats;
 import org.apache.hadoop.hbase.wal.WALProvider;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
 import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.hdfs.DFSHedgedReadMetrics;

Review comment:
       In what Hadoop version were these introduced? If at least 2.7, it's definitely fine. If 2.8, probably OK.
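   For reference, the metrics object itself comes out of the DFSClient, which
   keeps a shared DFSHedgedReadMetrics instance behind a package-private
   accessor; FSUtils reaches it reflectively. A sketch of that shape (per the
   branch-2/master FSUtils linked elsewhere in this review; treat the details
   as illustrative):

       DFSClient dfsClient = ((DistributedFileSystem) FileSystem.get(conf)).getClient();
       // getHedgedReadMetrics() is package private in DFSClient, hence reflection.
       Method m = dfsClient.getClass().getDeclaredMethod("getHedgedReadMetrics");
       m.setAccessible(true);
       DFSHedgedReadMetrics metrics = (DFSHedgedReadMetrics) m.invoke(dfsClient);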

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
##########
@@ -592,9 +593,9 @@ public void markRegionsRecovering(ServerName server, Set<HRegionInfo> userRegion
    * @return whether log is replaying
    */
   public boolean isLogReplaying() {
-    if (server.getCoordinatedStateManager() == null) return false;
-    return ((BaseCoordinatedStateManager) server.getCoordinatedStateManager())
-        .getSplitLogManagerCoordination().isReplaying();
+    CoordinatedStateManager m = server.getCoordinatedStateManager();

Review comment:
       Unrelated changes. Remove.







[GitHub] [hbase] Reidddddd commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Reidddddd commented on a change in pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#discussion_r430920017



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
##########
@@ -435,4 +440,145 @@ public void checkStreamCapabilitiesOnHdfsDataOutputStream() throws Exception {
     }
   }
 
+  /**
+   * Ugly test that ensures we can get at the hedged read counters in dfsclient.
+   * Does a bit of preading with hedged reads enabled using code taken from hdfs TestPread.
+   */
+  @Test public void testDFSHedgedReadMetrics() throws Exception {
+    HBaseTestingUtility htu = new HBaseTestingUtility();
+    // Enable hedged reads and set it so the threshold is really low.
+    // Most of this test is taken from HDFS, from TestPread.
+    Configuration conf = htu.getConfiguration();
+    conf.setInt(DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THREADPOOL_SIZE, 5);
+    conf.setLong(DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THRESHOLD_MILLIS, 0);
+    conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 4096);
+    conf.setLong(DFSConfigKeys.DFS_CLIENT_READ_PREFETCH_SIZE_KEY, 4096);
+    // Set short retry timeouts so this test runs faster
+    conf.setInt(DFSConfigKeys.DFS_CLIENT_RETRY_WINDOW_BASE, 0);
+    conf.setBoolean("dfs.datanode.transferTo.allowed", false);
+    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
+    // Get the metrics.  Should be empty.
+    DFSHedgedReadMetrics metrics = FSUtils.getDFSHedgedReadMetrics(conf);
+    assertEquals(0, metrics.getHedgedReadOps());
+    FileSystem fileSys = cluster.getFileSystem();
+    try {
+      Path p = new Path("preadtest.dat");
+      // We need > 1 blocks to test out the hedged reads.
+      DFSTestUtil.createFile(fileSys, p, 12 * blockSize, 12 * blockSize,
+        blockSize, (short) 3, seed);
+      pReadFile(fileSys, p);
+      cleanupFile(fileSys, p);
+      assertTrue(metrics.getHedgedReadOps() > 0);
+    } finally {
+      fileSys.close();
+      cluster.shutdown();
+    }
+  }
+
+  // Below is taken from TestPread over in HDFS.
+  static final int blockSize = 4096;
+  static final long seed = 0xDEADBEEFL;
+
+  private void pReadFile(FileSystem fileSys, Path name) throws IOException {
+    FSDataInputStream stm = fileSys.open(name);
+    byte[] expected = new byte[12 * blockSize];
+    Random rand = new Random(seed);
+    rand.nextBytes(expected);
+    // do a sanity check. Read first 4K bytes
+    byte[] actual = new byte[4096];
+    stm.readFully(actual);
+    checkAndEraseData(actual, 0, expected, "Read Sanity Test");
+    // now do a pread for the first 8K bytes
+    actual = new byte[8192];
+    doPread(stm, 0L, actual, 0, 8192);
+    checkAndEraseData(actual, 0, expected, "Pread Test 1");
+    // Now check to see if the normal read returns 4K-8K byte range
+    actual = new byte[4096];
+    stm.readFully(actual);
+    checkAndEraseData(actual, 4096, expected, "Pread Test 2");
+    // Now see if we can cross a single block boundary successfully
+    // read 4K bytes from blockSize - 2K offset
+    stm.readFully(blockSize - 2048, actual, 0, 4096);
+    checkAndEraseData(actual, (blockSize - 2048), expected, "Pread Test 3");
+    // now see if we can cross two block boundaries successfully
+    // read blockSize + 4K bytes from blockSize - 2K offset
+    actual = new byte[blockSize + 4096];
+    stm.readFully(blockSize - 2048, actual);
+    checkAndEraseData(actual, (blockSize - 2048), expected, "Pread Test 4");
+    // now see if we can cross two block boundaries that are not cached
+    // read blockSize + 4K bytes from 10*blockSize - 2K offset
+    actual = new byte[blockSize + 4096];
+    stm.readFully(10 * blockSize - 2048, actual);
+    checkAndEraseData(actual, (10 * blockSize - 2048), expected, "Pread Test 5");
+    // now check that even after all these preads, we can still read
+    // bytes 8K-12K
+    actual = new byte[4096];
+    stm.readFully(actual);
+    checkAndEraseData(actual, 8192, expected, "Pread Test 6");
+    // done
+    stm.close();
+    // check block location caching
+    stm = fileSys.open(name);
+    stm.readFully(1, actual, 0, 4096);
+    stm.readFully(4*blockSize, actual, 0, 4096);

Review comment:
       Please pay attention to the spacing style problems in this unit test.
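   For example, the flagged lines with the expected operator spacing:

       stm.readFully(4 * blockSize, actual, 0, 4096);
       stm.readFully(7 * blockSize, actual, 0, 4096);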







[GitHub] [hbase] Reidddddd merged pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
Reidddddd merged pull request #1781:
URL: https://github.com/apache/hbase/pull/1781


   





[GitHub] [hbase] saintstack commented on pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
saintstack commented on pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#issuecomment-634781563


   Re-running tests at @clarax's request to see if the above failures are flakies.





[GitHub] [hbase] javierluca commented on a change in pull request #1781: HBASE-24435 Add hedgedReads and hedgedReadWins count metrics

Posted by GitBox <gi...@apache.org>.
javierluca commented on a change in pull request #1781:
URL: https://github.com/apache/hbase/pull/1781#discussion_r430833826



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
##########
@@ -819,6 +832,16 @@ public long getZeroCopyBytesRead() {
     return FSDataInputStreamWrapper.getZeroCopyBytesRead();
   }
 
+  @Override
+  public long getHedgedReadOps() {
+    return this.dfsHedgedReadMetrics == null? 0 : this.dfsHedgedReadMetrics.getHedgedReadOps();

Review comment:
       Sorry about that, fixing.
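   Presumably the fix is the missing space before the ternary operator, i.e.:

       return this.dfsHedgedReadMetrics == null ? 0 : this.dfsHedgedReadMetrics.getHedgedReadOps();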



