Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2020/05/11 10:43:42 UTC

[GitHub] [hbase] virajjasani opened a new pull request #1690: HBASE-24327 : Flaky connection in TestMasterShutdown#testMasterShutdo…

virajjasani opened a new pull request #1690:
URL: https://github.com/apache/hbase/pull/1690


   …wnBeforeStartingAnyRegionServer


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] virajjasani commented on pull request #1690: HBASE-24327 : Flaky connection in TestMasterShutdown#testMasterShutdo…

Posted by GitBox <gi...@apache.org>.
virajjasani commented on pull request #1690:
URL: https://github.com/apache/hbase/pull/1690#issuecomment-626634557


   Considering the work done as part of HBASE-23808, HBASE-24052, and the previous patch for HBASE-24327, I am resorting to checking whether the masters can no longer be contacted. After all, what we want is for the master to be initialized but not become active, yet there can be a race condition with clusterTracker initialization / a ZK connection leak. FWIW, checking that the master can't be contacted should not be a problem, especially since the flaky connection issue is hard to reproduce locally and the test fails only occasionally.
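
   A minimal sketch of the approach described above, assuming the test's `htu` (HBaseTestingUtility), `timeout`, and `LOG` are in scope; this is an illustration of the idea only, not the code from the patch: the shutdown RPC is issued inside a `waitFor()` predicate and "masters can no longer be contacted" is treated as success.

   ```
   htu.waitFor(timeout, () -> {
     try {
       htu.getConnection().getAdmin().shutdown();
       // Master still reachable and the RPC went through; keep waiting for it to go down.
       return false;
     } catch (IOException e) {
       // Master can no longer be contacted; assume the earlier shutdown took effect.
       LOG.info("Master no longer reachable.", e);
       return true;
     }
   });
   ```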


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] virajjasani commented on a change in pull request #1690: HBASE-24327 : Flaky connection in TestMasterShutdown#testMasterShutdo…

Posted by GitBox <gi...@apache.org>.
virajjasani commented on a change in pull request #1690:
URL: https://github.com/apache/hbase/pull/1690#discussion_r424880212



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java
##########
@@ -151,19 +152,46 @@ public void testMasterShutdownBeforeStartingAnyRegionServer() throws Exception {
       hbaseCluster = new LocalHBaseCluster(htu.getConfiguration(), options.getNumMasters(),
         options.getNumRegionServers(), options.getMasterClass(), options.getRsClass());
       final MasterThread masterThread = hbaseCluster.getMasters().get(0);
+
       masterThread.start();
-      // Switching to master registry exacerbated a race in the master bootstrap that can result
-      // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
-      // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
-      // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
-      // is because the connection creation with ZK registry is so slow that by then the server
-      // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
-      // wait() in the test, waiting for the server manager to become available.
-      final long timeout = TimeUnit.MINUTES.toMillis(10);
-      assertNotEquals("Timeout waiting for server manager to become available.",
-        -1, Waiter.waitFor(htu.getConfiguration(), timeout,
-          () -> masterThread.getMaster().getServerManager() != null));
-      htu.getConnection().getAdmin().shutdown();
+      final CompletableFuture<Void> shutdownFuture = CompletableFuture.runAsync(() -> {
+        // Switching to master registry exacerbated a race in the master bootstrap that can result
+        // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
+        // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
+        // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
+        // is because the connection creation with ZK registry is so slow that by then the server
+        // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
+        // wait() in the test, waiting for the server manager to become available.
+        final long timeout = TimeUnit.MINUTES.toMillis(10);
+        assertNotEquals("timeout waiting for server manager to become available.", -1,
+          htu.waitFor(timeout, () -> masterThread.getMaster().getServerManager() != null));
+
+        // Master has come up far enough that we can terminate it without creating a zombie.
+        LOG.debug("Attempting to establish connection.");
+        try {
+          // HBASE-24327 : (Resolve Flaky connection issues)
+          // shutdown() RPC can have flaky ZK connection issues.
+          // e.g
+          // ERROR [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=53033]
+          // master.HMaster(2878): ZooKeeper exception trying to set cluster as down in ZK
+          // org.apache.zookeeper.KeeperException$SystemErrorException:
+          // KeeperErrorCode = SystemError
+          //
+          // However, even when above flakes happen, shutdown call does get completed even if
+          // RPC call has failure. Hence, subsequent retries will never succeed as HMaster is
+          // already shutdown. Hence, it can fail. To resolve it, after making one shutdown()
+          // call, we are ignoring IOException.
+          htu.getConnection().getAdmin().shutdown();
+          LOG.info("Shutdown RPC sent.");
+        } catch (IOException | CompletionException e) {
+          LOG.warn("Failed to establish connection.", e);

Review comment:
       Thanks @ndimiduk 
   Let me include the code from the link I mentioned. This is what I had in my first commit:
   
   ```
             } catch (IOException|CompletionException e) {
               LOG.error("Failed to establish connection.");
               if (connectionFailedWithMaster(e)) {
                 return true;
               }
             }
   ```
   
   ```
     private boolean connectionFailedWithMaster(Exception e) {
       if (e instanceof RetriesExhaustedException) {
         Throwable cause = e.getCause();
         if (cause instanceof MasterRegistryFetchException) {
           cause = cause.getCause();
           if (cause instanceof RetriesExhaustedException) {
             final String message = cause.getMessage();
             return message != null && message
               .startsWith("Failed contacting masters after 1 attempts");
           }
         }
       }
       return false;
     }
   ```
   
   The `return true` in the first code block means returning true to the waitFor() predicate.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] bharathv commented on a change in pull request #1690: HBASE-24327 : Flaky connection in TestMasterShutdown#testMasterShutdo…

Posted by GitBox <gi...@apache.org>.
bharathv commented on a change in pull request #1690:
URL: https://github.com/apache/hbase/pull/1690#discussion_r425448865



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java
##########
@@ -151,19 +152,46 @@ public void testMasterShutdownBeforeStartingAnyRegionServer() throws Exception {
       hbaseCluster = new LocalHBaseCluster(htu.getConfiguration(), options.getNumMasters(),
         options.getNumRegionServers(), options.getMasterClass(), options.getRsClass());
       final MasterThread masterThread = hbaseCluster.getMasters().get(0);
+
       masterThread.start();
-      // Switching to master registry exacerbated a race in the master bootstrap that can result
-      // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
-      // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
-      // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
-      // is because the connection creation with ZK registry is so slow that by then the server
-      // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
-      // wait() in the test, waiting for the server manager to become available.
-      final long timeout = TimeUnit.MINUTES.toMillis(10);
-      assertNotEquals("Timeout waiting for server manager to become available.",
-        -1, Waiter.waitFor(htu.getConfiguration(), timeout,
-          () -> masterThread.getMaster().getServerManager() != null));
-      htu.getConnection().getAdmin().shutdown();
+      final CompletableFuture<Void> shutdownFuture = CompletableFuture.runAsync(() -> {
+        // Switching to master registry exacerbated a race in the master bootstrap that can result
+        // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
+        // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
+        // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
+        // is because the connection creation with ZK registry is so slow that by then the server
+        // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
+        // wait() in the test, waiting for the server manager to become available.
+        final long timeout = TimeUnit.MINUTES.toMillis(10);
+        assertNotEquals("timeout waiting for server manager to become available.", -1,
+          htu.waitFor(timeout, () -> masterThread.getMaster().getServerManager() != null));
+
+        // Master has come up far enough that we can terminate it without creating a zombie.
+        LOG.debug("Attempting to establish connection.");
+        try {
+          // HBASE-24327 : (Resolve Flaky connection issues)
+          // shutdown() RPC can have flaky ZK connection issues.
+          // e.g
+          // ERROR [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=53033]
+          // master.HMaster(2878): ZooKeeper exception trying to set cluster as down in ZK
+          // org.apache.zookeeper.KeeperException$SystemErrorException:
+          // KeeperErrorCode = SystemError
+          //
+          // However, even when above flakes happen, shutdown call does get completed even if
+          // RPC call has failure. Hence, subsequent retries will never succeed as HMaster is
+          // already shutdown. Hence, it can fail. To resolve it, after making one shutdown()
+          // call, we are ignoring IOException.
+          htu.getConnection().getAdmin().shutdown();
+          LOG.info("Shutdown RPC sent.");
+        } catch (IOException | CompletionException e) {
+          LOG.warn("Failed to establish connection.", e);

Review comment:
       I think the MasterRegistryFetchException is only valid if we retry shutdown() RPCs. The first shutdown() throws an IOException (some subclass of it) and shuts down the master; the second (and later) shutdowns can't find a master to issue the shutdown to, so the stub fails and throws this MasterRegistryFetchException.
   
   Now that we have decided to do only a single shutdown, we should either see the specific exception (a derivative of IOException when shutdown failed) or no exception at all (if shutdown is a success, which is 99% of the test runs). I think we are good if we catch and log that first exception (I don't remember the exact checked exception type).
   
   The reason I suggested keeping the checks simple and just catching and logging the exception is that, if there was any issue in the RPC, it would show up anyway as a failure in the shutdown/join thread, the test would fail, and we would have the exception logged (we don't need to get fancy checking exception.getCause() recursively).
   
   I think what Nick is suggesting is to catch the specific subclass of IOException that is thrown if shutdown() fails. If we do that, instead of logging the exception and the test failing in masterThread.join(), any other exception is propagated to the test runner and the test fails. Either way the test fails and we have the exception, so I'm fine with the approach. @ndimiduk Correct me if I got you wrong..
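
   A rough sketch of the "keep it simple, catch and log" option described above; the exact checked exception type is not pinned down in this thread, so plain IOException stands in for it here (illustration only, not the patch):

   ```
         try {
           htu.getConnection().getAdmin().shutdown();
           LOG.info("Shutdown RPC sent.");
         } catch (IOException e) {
           // If the RPC really was lost, the shutdown/join thread below fails the test,
           // and this log line records why.
           LOG.warn("shutdown() RPC failed.", e);
         }
   ```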




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] virajjasani commented on a change in pull request #1690: HBASE-24327 : Flaky connection in TestMasterShutdown#testMasterShutdo…

Posted by GitBox <gi...@apache.org>.
virajjasani commented on a change in pull request #1690:
URL: https://github.com/apache/hbase/pull/1690#discussion_r424883373



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java
##########
@@ -151,19 +152,46 @@ public void testMasterShutdownBeforeStartingAnyRegionServer() throws Exception {
       hbaseCluster = new LocalHBaseCluster(htu.getConfiguration(), options.getNumMasters(),
         options.getNumRegionServers(), options.getMasterClass(), options.getRsClass());
       final MasterThread masterThread = hbaseCluster.getMasters().get(0);
+
       masterThread.start();
-      // Switching to master registry exacerbated a race in the master bootstrap that can result
-      // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
-      // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
-      // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
-      // is because the connection creation with ZK registry is so slow that by then the server
-      // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
-      // wait() in the test, waiting for the server manager to become available.
-      final long timeout = TimeUnit.MINUTES.toMillis(10);
-      assertNotEquals("Timeout waiting for server manager to become available.",
-        -1, Waiter.waitFor(htu.getConfiguration(), timeout,
-          () -> masterThread.getMaster().getServerManager() != null));
-      htu.getConnection().getAdmin().shutdown();
+      final CompletableFuture<Void> shutdownFuture = CompletableFuture.runAsync(() -> {
+        // Switching to master registry exacerbated a race in the master bootstrap that can result
+        // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
+        // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
+        // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
+        // is because the connection creation with ZK registry is so slow that by then the server
+        // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
+        // wait() in the test, waiting for the server manager to become available.
+        final long timeout = TimeUnit.MINUTES.toMillis(10);
+        assertNotEquals("timeout waiting for server manager to become available.", -1,
+          htu.waitFor(timeout, () -> masterThread.getMaster().getServerManager() != null));
+
+        // Master has come up far enough that we can terminate it without creating a zombie.
+        LOG.debug("Attempting to establish connection.");
+        try {
+          // HBASE-24327 : (Resolve Flaky connection issues)
+          // shutdown() RPC can have flaky ZK connection issues.
+          // e.g
+          // ERROR [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=53033]
+          // master.HMaster(2878): ZooKeeper exception trying to set cluster as down in ZK
+          // org.apache.zookeeper.KeeperException$SystemErrorException:
+          // KeeperErrorCode = SystemError
+          //
+          // However, even when above flakes happen, shutdown call does get completed even if
+          // RPC call has failure. Hence, subsequent retries will never succeed as HMaster is
+          // already shutdown. Hence, it can fail. To resolve it, after making one shutdown()
+          // call, we are ignoring IOException.
+          htu.getConnection().getAdmin().shutdown();
+          LOG.info("Shutdown RPC sent.");
+        } catch (IOException | CompletionException e) {
+          LOG.warn("Failed to establish connection.", e);

Review comment:
       @ndimiduk @bharathv 
   I will remove `CompletionException` from the catch list. And what about IOException? Is it good to follow the code above to find the root-cause message and handle it accordingly?
   This is what I had in my first commit, and then Bharath and I concluded that it's a bit hacky, so why not just handle IOException and not worry about the cause. But now that I think about it again, you are right: although there is little chance of a different root cause for an IOException, it's still good to be specific, and that way it isn't too hacky, right?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #1690: HBASE-24327 : Flaky connection in TestMasterShutdown#testMasterShutdo…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #1690:
URL: https://github.com/apache/hbase/pull/1690#issuecomment-626663667


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m  6s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 48s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 28s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 12s |  The patch does not generate ASF License warnings.  |
   |  |   |  37m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1690/2/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/1690 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
   | uname | Linux c4f5041e05bd 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 6729cafa1d |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1690/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] virajjasani commented on a change in pull request #1690: HBASE-24327 : Flaky connection in TestMasterShutdown#testMasterShutdo…

Posted by GitBox <gi...@apache.org>.
virajjasani commented on a change in pull request #1690:
URL: https://github.com/apache/hbase/pull/1690#discussion_r425700122



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java
##########
@@ -151,19 +152,46 @@ public void testMasterShutdownBeforeStartingAnyRegionServer() throws Exception {
       hbaseCluster = new LocalHBaseCluster(htu.getConfiguration(), options.getNumMasters(),
         options.getNumRegionServers(), options.getMasterClass(), options.getRsClass());
       final MasterThread masterThread = hbaseCluster.getMasters().get(0);
+
       masterThread.start();
-      // Switching to master registry exacerbated a race in the master bootstrap that can result
-      // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
-      // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
-      // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
-      // is because the connection creation with ZK registry is so slow that by then the server
-      // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
-      // wait() in the test, waiting for the server manager to become available.
-      final long timeout = TimeUnit.MINUTES.toMillis(10);
-      assertNotEquals("Timeout waiting for server manager to become available.",
-        -1, Waiter.waitFor(htu.getConfiguration(), timeout,
-          () -> masterThread.getMaster().getServerManager() != null));
-      htu.getConnection().getAdmin().shutdown();
+      final CompletableFuture<Void> shutdownFuture = CompletableFuture.runAsync(() -> {
+        // Switching to master registry exacerbated a race in the master bootstrap that can result
+        // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
+        // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
+        // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
+        // is because the connection creation with ZK registry is so slow that by then the server
+        // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
+        // wait() in the test, waiting for the server manager to become available.
+        final long timeout = TimeUnit.MINUTES.toMillis(10);
+        assertNotEquals("timeout waiting for server manager to become available.", -1,
+          htu.waitFor(timeout, () -> masterThread.getMaster().getServerManager() != null));
+
+        // Master has come up far enough that we can terminate it without creating a zombie.
+        LOG.debug("Attempting to establish connection.");
+        try {
+          // HBASE-24327 : (Resolve Flaky connection issues)
+          // shutdown() RPC can have flaky ZK connection issues.
+          // e.g
+          // ERROR [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=53033]
+          // master.HMaster(2878): ZooKeeper exception trying to set cluster as down in ZK
+          // org.apache.zookeeper.KeeperException$SystemErrorException:
+          // KeeperErrorCode = SystemError
+          //
+          // However, even when above flakes happen, shutdown call does get completed even if
+          // RPC call has failure. Hence, subsequent retries will never succeed as HMaster is
+          // already shutdown. Hence, it can fail. To resolve it, after making one shutdown()
+          // call, we are ignoring IOException.
+          htu.getConnection().getAdmin().shutdown();
+          LOG.info("Shutdown RPC sent.");
+        } catch (IOException | CompletionException e) {
+          LOG.warn("Failed to establish connection.", e);

Review comment:
       Sounds good, but we can't catch the root-cause exception directly, which in our case is ConnectionClosedException. So we catch RetriesExhaustedException first and look for the root cause:
   
   ```
         } catch (RetriesExhaustedException e) {
           if (e.getCause() instanceof ConnectionClosedException) {
             LOG.info("Connection is Closed, ZK is not reachable.", e);
           } else {
             throw e;
           }
         }
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] ndimiduk commented on a change in pull request #1690: HBASE-24327 : Flaky connection in TestMasterShutdown#testMasterShutdo…

Posted by GitBox <gi...@apache.org>.
ndimiduk commented on a change in pull request #1690:
URL: https://github.com/apache/hbase/pull/1690#discussion_r424789318



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java
##########
@@ -151,19 +152,46 @@ public void testMasterShutdownBeforeStartingAnyRegionServer() throws Exception {
       hbaseCluster = new LocalHBaseCluster(htu.getConfiguration(), options.getNumMasters(),
         options.getNumRegionServers(), options.getMasterClass(), options.getRsClass());
       final MasterThread masterThread = hbaseCluster.getMasters().get(0);
+
       masterThread.start();
-      // Switching to master registry exacerbated a race in the master bootstrap that can result
-      // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
-      // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
-      // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
-      // is because the connection creation with ZK registry is so slow that by then the server
-      // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
-      // wait() in the test, waiting for the server manager to become available.
-      final long timeout = TimeUnit.MINUTES.toMillis(10);
-      assertNotEquals("Timeout waiting for server manager to become available.",
-        -1, Waiter.waitFor(htu.getConfiguration(), timeout,
-          () -> masterThread.getMaster().getServerManager() != null));
-      htu.getConnection().getAdmin().shutdown();
+      final CompletableFuture<Void> shutdownFuture = CompletableFuture.runAsync(() -> {

Review comment:
       Ah, and that one smells a bit like HBASE-23836.

##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java
##########
@@ -151,19 +152,46 @@ public void testMasterShutdownBeforeStartingAnyRegionServer() throws Exception {
       hbaseCluster = new LocalHBaseCluster(htu.getConfiguration(), options.getNumMasters(),
         options.getNumRegionServers(), options.getMasterClass(), options.getRsClass());
       final MasterThread masterThread = hbaseCluster.getMasters().get(0);
+
       masterThread.start();
-      // Switching to master registry exacerbated a race in the master bootstrap that can result
-      // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
-      // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
-      // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
-      // is because the connection creation with ZK registry is so slow that by then the server
-      // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
-      // wait() in the test, waiting for the server manager to become available.
-      final long timeout = TimeUnit.MINUTES.toMillis(10);
-      assertNotEquals("Timeout waiting for server manager to become available.",
-        -1, Waiter.waitFor(htu.getConfiguration(), timeout,
-          () -> masterThread.getMaster().getServerManager() != null));
-      htu.getConnection().getAdmin().shutdown();
+      final CompletableFuture<Void> shutdownFuture = CompletableFuture.runAsync(() -> {
+        // Switching to master registry exacerbated a race in the master bootstrap that can result
+        // in a lost shutdown command (HBASE-8422, HBASE-23836). The race is essentially because
+        // the server manager in HMaster is not initialized by the time shutdown() RPC (below) is
+        // made to the master. The suspected reason as to why it was uncommon before HBASE-18095
+        // is because the connection creation with ZK registry is so slow that by then the server
+        // manager is usually init'ed in time for the RPC to be made. For now, adding an explicit
+        // wait() in the test, waiting for the server manager to become available.
+        final long timeout = TimeUnit.MINUTES.toMillis(10);
+        assertNotEquals("timeout waiting for server manager to become available.", -1,
+          htu.waitFor(timeout, () -> masterThread.getMaster().getServerManager() != null));
+
+        // Master has come up far enough that we can terminate it without creating a zombie.
+        LOG.debug("Attempting to establish connection.");
+        try {
+          // HBASE-24327 : (Resolve Flaky connection issues)
+          // shutdown() RPC can have flaky ZK connection issues.
+          // e.g
+          // ERROR [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=53033]
+          // master.HMaster(2878): ZooKeeper exception trying to set cluster as down in ZK
+          // org.apache.zookeeper.KeeperException$SystemErrorException:
+          // KeeperErrorCode = SystemError
+          //
+          // However, even when above flakes happen, shutdown call does get completed even if
+          // RPC call has failure. Hence, subsequent retries will never succeed as HMaster is
+          // already shutdown. Hence, it can fail. To resolve it, after making one shutdown()
+          // call, we are ignoring IOException.
+          htu.getConnection().getAdmin().shutdown();
+          LOG.info("Shutdown RPC sent.");
+        } catch (IOException | CompletionException e) {
+          LOG.warn("Failed to establish connection.", e);

Review comment:
       Thanks for the links to the previous commits. GH makes it difficult for me to follow the previous conversation.
   
   Okay, so if you move this code out of the `CompletableFuture`, the `CompletionException` can just be re-thrown. It indicates a client-side problem, and there's nothing the test should try to recover from.
   
   I would not catch `IOException` either, just let it bubble up. Instead, I would focus on meaningful subclasses of `IOException`. `RetriesExhaustedException` is a good place to work from: you know your client made some effort. There are potentially multiple cause instances under there, so I guess just pick one to work with, probably the last one. If it's a descendant of `java.net.SocketException`, you know the client couldn't get anywhere -- how should the test behave if the master is not running?
   
   After that, I'm not really sure what's thrown in which cases. Our API's checked-exception definitions aren't strong enough for me to know by reading the interfaces. However, pretty much anything else means the client managed to get the RPC over to the server. I think that fact alone is enough to consider that this part of the test has succeeded at its goal. Until the myriad other issues in master startup and shutdown are resolved, I think this is the best the client can hope for (for what it's worth, I think this test will continue to be flaky until those master-side problems are solved, and they cannot be resolved perfectly by client-side gymnastics).
   
   You've looked at it, and seen the errors and test failures, more recently than I have, so what do you think? What brought you to this ticket in the first place? Are there more specific subclasses of `IOException` that are thrown, which you can use to reasonably address that specific condition? Paste the stack traces into Jira and the commit message so we can follow your effort.
   
   And thank you for your effort -- test fixing is usually thankless drudgery but it makes all of our lives better :)
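
   A sketch of the unwrapping described above, assuming the failure surfaces as a RetriesExhaustedException whose innermost cause is inspected; the handling in each branch is a placeholder, not the patch:

   ```
         try {
           htu.getConnection().getAdmin().shutdown();
         } catch (RetriesExhaustedException e) {
           // Walk to the innermost cause, as suggested above.
           Throwable cause = e;
           while (cause.getCause() != null) {
             cause = cause.getCause();
           }
           if (cause instanceof java.net.SocketException) {
             // The client could not get anywhere; decide how the test should treat a
             // master that is not running.
             LOG.warn("Could not contact master.", e);
           } else {
             // Anything else likely means the RPC made it over to the server, which is
             // enough for this part of the test.
             LOG.info("Shutdown RPC reached the master.", e);
           }
         }
   ```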




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org