Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2020/07/18 03:02:26 UTC

[GitHub] [hbase] cuibo01 opened a new pull request #2084: HBASE-22263 Master creates duplicate ServerCrashProcedure on initiali…

cuibo01 opened a new pull request #2084:
URL: https://github.com/apache/hbase/pull/2084


   …zation, leading to assignment hanging in region-dense clusters


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] cuibo01 commented on a change in pull request #2084: HBASE-22263 Master creates duplicate ServerCrashProcedure on initiali…

Posted by GitBox <gi...@apache.org>.
cuibo01 commented on a change in pull request #2084:
URL: https://github.com/apache/hbase/pull/2084#discussion_r456984183



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -846,10 +849,37 @@ private void finishActiveMasterInitialization(MonitoredTask status)
     if (isStopped()) return;
 
     status.setStatus("Submitting log splitting work for previously failed region servers");
+
+    // Grab the list of procedures once. SCPs from before the crash should all be loaded and can't
+    // progress until AM joins the cluster; any SCPs added after we get the log folder list should
+    // be for a different start code.
+    final Set<ServerName> alreadyHasSCP = new HashSet<>();
+    long scpCount = 0;
+    for (ProcedureInfo procInfo : this.procedureExecutor.listProcedures() ) {
+      final Procedure proc = this.procedureExecutor.getProcedure(procInfo.getProcId());
+      if (proc != null) {
+        if (proc instanceof ServerCrashProcedure && !(proc.isFinished() || proc.isSuccess())) {
+          scpCount++;
+          alreadyHasSCP.add(((ServerCrashProcedure)proc).getServerName());
+        }
+      }
+    }
+    LOG.info("Restored procedures include " + scpCount + " SCPs covering " + alreadyHasSCP.size() +
+        " ServerNames.");
+    
+ 
+    LOG.info("Checking " + previouslyFailedServers.size() + " previously failed servers (seen via wals) for existing SCP.");
+    // AM should be in "not yet init" and these should all be queued
     // Master has recovered hbase:meta region server and we put
     // other failed region servers in a queue to be handled later by SSH
     for (ServerName tmpServer : previouslyFailedServers) {
-      this.serverManager.processDeadServer(tmpServer, true);
+      if (alreadyHasSCP.contains(tmpServer)) {
+        LOG.info("Skipping failed server in FS because it already has a queued SCP: " + tmpServer);
+        this.serverManager.getDeadServers().add(tmpServer);

Review comment:
       > does a queued SCP imply that the server should already be in the dead servers list? Or do we only add servers to that when we create an SCP and not when we recover them?
   
   We need to tell the master which servers already have an SCP in the procStore, so that a duplicate SCP is not recreated.
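
   For context, the deduplication idea under discussion can be sketched in plain Java. This is a simplified model only: `Procedure`, the server-name strings, and the method names here are hypothetical stand-ins for the real HBase types, not HMaster code.

```java
import java.util.*;

// Hypothetical stand-in for a restored procedure (real type: ServerCrashProcedure).
class Procedure {
  final String serverName; // which server this SCP covers
  final boolean finished;  // has the procedure already completed?
  Procedure(String serverName, boolean finished) {
    this.serverName = serverName;
    this.finished = finished;
  }
}

public class ScpDedupSketch {
  // Collect servers that already have a live (unfinished) SCP restored from the proc store.
  public static Set<String> serversWithLiveScp(List<Procedure> restored) {
    Set<String> alreadyHasSCP = new HashSet<>();
    for (Procedure proc : restored) {
      if (!proc.finished) {
        alreadyHasSCP.add(proc.serverName);
      }
    }
    return alreadyHasSCP;
  }

  // For each server whose WAL directory was found on the FS, only schedule a
  // new SCP if no live SCP already covers it; otherwise just mark it dead.
  public static List<String> serversNeedingNewScp(List<String> previouslyFailed,
      Set<String> alreadyHasSCP) {
    List<String> toSchedule = new ArrayList<>();
    for (String server : previouslyFailed) {
      if (!alreadyHasSCP.contains(server)) {
        toSchedule.add(server);
      }
    }
    return toSchedule;
  }

  public static void main(String[] args) {
    List<Procedure> restored = Arrays.asList(
        new Procedure("rs1,16020,100", false),  // live SCP: skip rescheduling
        new Procedure("rs2,16020,100", true));  // finished SCP: does not count
    Set<String> live = serversWithLiveScp(restored);
    List<String> failed = Arrays.asList("rs1,16020,100", "rs2,16020,100", "rs3,16020,100");
    System.out.println(serversNeedingNewScp(failed, live));
  }
}
```

   The key point is the same as in the patch: the set is built once, before iterating the previously failed servers, so each server gets at most one SCP.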







[GitHub] [hbase] cuibo01 commented on a change in pull request #2084: HBASE-22263 Master creates duplicate ServerCrashProcedure on initiali…

Posted by GitBox <gi...@apache.org>.
cuibo01 commented on a change in pull request #2084:
URL: https://github.com/apache/hbase/pull/2084#discussion_r460476970



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java
##########
@@ -426,6 +439,9 @@ private void prepareLogReplay(final MasterProcedureEnv env, final Set<HRegionInf
     MasterFileSystem mfs = env.getMasterServices().getMasterFileSystem();
     AssignmentManager am = env.getMasterServices().getAssignmentManager();
     mfs.prepareLogReplay(this.serverName, regions);
+    // If the master doesn't fail, we'll set this again in SERVER_CRASH_ASSIGN
+    // can we skip doing it here? depends on how fast another observer
+    // needs to see that things were processed since we yield between now and then.

Review comment:
       the comment from your patch...







[GitHub] [hbase] busbey commented on a change in pull request #2084: HBASE-22263 Master creates duplicate ServerCrashProcedure on initiali…

Posted by GitBox <gi...@apache.org>.
busbey commented on a change in pull request #2084:
URL: https://github.com/apache/hbase/pull/2084#discussion_r456858152



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -846,10 +849,37 @@ private void finishActiveMasterInitialization(MonitoredTask status)
     if (isStopped()) return;
 
     status.setStatus("Submitting log splitting work for previously failed region servers");
+
+    // Grab the list of procedures once. SCPs from before the crash should all be loaded and can't
+    // progress until AM joins the cluster; any SCPs added after we get the log folder list should
+    // be for a different start code.
+    final Set<ServerName> alreadyHasSCP = new HashSet<>();
+    long scpCount = 0;
+    for (ProcedureInfo procInfo : this.procedureExecutor.listProcedures() ) {
+      final Procedure proc = this.procedureExecutor.getProcedure(procInfo.getProcId());
+      if (proc != null) {
+        if (proc instanceof ServerCrashProcedure && !(proc.isFinished() || proc.isSuccess())) {
+          scpCount++;
+          alreadyHasSCP.add(((ServerCrashProcedure)proc).getServerName());
+        }
+      }
+    }
+    LOG.info("Restored procedures include " + scpCount + " SCPs covering " + alreadyHasSCP.size() +
+        " ServerNames.");
+    
+ 
+    LOG.info("Checking " + previouslyFailedServers.size() + " previously failed servers (seen via wals) for existing SCP.");
+    // AM should be in "not yet init" and these should all be queued
     // Master has recovered hbase:meta region server and we put
     // other failed region servers in a queue to be handled later by SSH
     for (ServerName tmpServer : previouslyFailedServers) {
-      this.serverManager.processDeadServer(tmpServer, true);
+      if (alreadyHasSCP.contains(tmpServer)) {
+        LOG.info("Skipping failed server in FS because it already has a queued SCP: " + tmpServer);
+        this.serverManager.getDeadServers().add(tmpServer);

Review comment:
       does a queued SCP imply that the server should already be in the dead servers list? Or do we only add servers to that when we create an SCP and not when we recover them?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -846,10 +849,37 @@ private void finishActiveMasterInitialization(MonitoredTask status)
     if (isStopped()) return;
 
     status.setStatus("Submitting log splitting work for previously failed region servers");
+
+    // Grab the list of procedures once. SCPs from before the crash should all be loaded and can't
+    // progress until AM joins the cluster; any SCPs added after we get the log folder list should
+    // be for a different start code.
+    final Set<ServerName> alreadyHasSCP = new HashSet<>();
+    long scpCount = 0;
+    for (ProcedureInfo procInfo : this.procedureExecutor.listProcedures() ) {
+      final Procedure proc = this.procedureExecutor.getProcedure(procInfo.getProcId());
+      if (proc != null) {
+        if (proc instanceof ServerCrashProcedure && !(proc.isFinished() || proc.isSuccess())) {
+          scpCount++;
+          alreadyHasSCP.add(((ServerCrashProcedure)proc).getServerName());
+        }
+      }
+    }
+    LOG.info("Restored procedures include " + scpCount + " SCPs covering " + alreadyHasSCP.size() +
+        " ServerNames.");
+    
+ 
+    LOG.info("Checking " + previouslyFailedServers.size() + " previously failed servers (seen via wals) for existing SCP.");
+    // AM should be in "not yet init" and these should all be queued
     // Master has recovered hbase:meta region server and we put
     // other failed region servers in a queue to be handled later by SSH
     for (ServerName tmpServer : previouslyFailedServers) {
-      this.serverManager.processDeadServer(tmpServer, true);
+      if (alreadyHasSCP.contains(tmpServer)) {
+        LOG.info("Skipping failed server in FS because it already has a queued SCP: " + tmpServer);
+        this.serverManager.getDeadServers().add(tmpServer);

Review comment:
       this looks like what's different from my old patch, is that right? have I missed anything else?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
##########
@@ -681,11 +681,15 @@ public synchronized void processDeadServer(final ServerName serverName, boolean
     // the handler threads and meta table could not be re-assigned in case
     // the corresponding server is down. So we queue them up here instead.
     if (!services.getAssignmentManager().isFailoverCleanupDone()) {
+      LOG.debug("AssignmentManager isn't done cleaning up from failover. Requeue server " + serverName);
       requeuedDeadServers.put(serverName, shouldSplitWal);
       return;
     }
 
+    // we don't check if deadservers already included?
+    // when a server is already in the dead server list (including start code) do we need to schedule an SCP?

Review comment:
       these comments should be removed.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
##########
@@ -681,11 +681,15 @@ public synchronized void processDeadServer(final ServerName serverName, boolean
     // the handler threads and meta table could not be re-assigned in case
     // the corresponding server is down. So we queue them up here instead.
     if (!services.getAssignmentManager().isFailoverCleanupDone()) {
+      LOG.debug("AssignmentManager isn't done cleaning up from failover. Requeue server " + serverName);
       requeuedDeadServers.put(serverName, shouldSplitWal);
       return;
     }
 
+    // we don't check if deadservers already included?
+    // when a server is already in the dead server list (including start code) do we need to schedule an SCP?
     this.deadservers.add(serverName);
+    // scheduled an SCP means AM must be going?

Review comment:
       this one should be removed too

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -860,6 +890,8 @@ private void finishActiveMasterInitialization(MonitoredTask status)
 
     // Fix up assignment manager status
     status.setStatus("Starting assignment manager");
+    // die somewhere in here for SCP flood I think.

Review comment:
       we can leave out the speculative comments now?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -2733,6 +2766,7 @@ public boolean isServerCrashProcessingEnabled() {
     return serverCrashProcessingEnabled.isReady();
   }
 
+  // XXX what

Review comment:
       lol. I still find this curious, but I do not think we need the comment now.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java
##########
@@ -426,6 +439,9 @@ private void prepareLogReplay(final MasterProcedureEnv env, final Set<HRegionInf
     MasterFileSystem mfs = env.getMasterServices().getMasterFileSystem();
     AssignmentManager am = env.getMasterServices().getAssignmentManager();
     mfs.prepareLogReplay(this.serverName, regions);
+    // If the master doesn't fail, we'll set this again in SERVER_CRASH_ASSIGN
+    // can we skip doing it here? depends on how fast another observer
+    // needs to see that things were processed since we yield between now and then.

Review comment:
       what's the word on these two cases? could we just set the log split state in SERVER_CRASH_ASSIGN? who else looks at whether or not the logs were processed?
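
   The ordering concern behind that question can be sketched as a tiny state machine. The `SERVER_CRASH_*` state names follow the real SCP steps, but the flag, the `setFlagEarly` switch, and the class itself are hypothetical stand-ins, only meant to show why setting the flag solely in the assign step changes what a concurrent observer sees.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: if the "logs processed" flag is set only in the ASSIGN
// step, an observer polling between PREPARE_LOG_REPLAY and ASSIGN sees "not
// processed" for longer, because the procedure may yield between the two steps.
public class ScpStateSketch {
  public enum State { SERVER_CRASH_PREPARE_LOG_REPLAY, SERVER_CRASH_ASSIGN, DONE }

  public static final AtomicBoolean logsProcessed = new AtomicBoolean(false);

  public static State step(State s, boolean setFlagEarly) {
    switch (s) {
      case SERVER_CRASH_PREPARE_LOG_REPLAY:
        if (setFlagEarly) {
          logsProcessed.set(true); // the patch under review sets it here too
        }
        return State.SERVER_CRASH_ASSIGN; // procedure may yield at this boundary
      case SERVER_CRASH_ASSIGN:
        logsProcessed.set(true);          // always set by the time assign runs
        return State.DONE;
      default:
        return State.DONE;
    }
  }
}
```

   With `setFlagEarly == false`, the flag only flips once the assign step runs; whether that delay matters depends on who reads the flag in the meantime, which is exactly the open question in the comment.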







[GitHub] [hbase] cuibo01 commented on a change in pull request #2084: HBASE-22263 Master creates duplicate ServerCrashProcedure on initiali…

Posted by GitBox <gi...@apache.org>.
cuibo01 commented on a change in pull request #2084:
URL: https://github.com/apache/hbase/pull/2084#discussion_r456983850



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -846,10 +849,37 @@ private void finishActiveMasterInitialization(MonitoredTask status)
     if (isStopped()) return;
 
     status.setStatus("Submitting log splitting work for previously failed region servers");
+
+    // Grab the list of procedures once. SCPs from before the crash should all be loaded and can't
+    // progress until AM joins the cluster; any SCPs added after we get the log folder list should
+    // be for a different start code.
+    final Set<ServerName> alreadyHasSCP = new HashSet<>();
+    long scpCount = 0;
+    for (ProcedureInfo procInfo : this.procedureExecutor.listProcedures() ) {
+      final Procedure proc = this.procedureExecutor.getProcedure(procInfo.getProcId());
+      if (proc != null) {
+        if (proc instanceof ServerCrashProcedure && !(proc.isFinished() || proc.isSuccess())) {
+          scpCount++;
+          alreadyHasSCP.add(((ServerCrashProcedure)proc).getServerName());
+        }
+      }
+    }
+    LOG.info("Restored procedures include " + scpCount + " SCPs covering " + alreadyHasSCP.size() +
+        " ServerNames.");
+    
+ 
+    LOG.info("Checking " + previouslyFailedServers.size() + " previously failed servers (seen via wals) for existing SCP.");
+    // AM should be in "not yet init" and these should all be queued
     // Master has recovered hbase:meta region server and we put
     // other failed region servers in a queue to be handled later by SSH
     for (ServerName tmpServer : previouslyFailedServers) {
-      this.serverManager.processDeadServer(tmpServer, true);
+      if (alreadyHasSCP.contains(tmpServer)) {
+        LOG.info("Skipping failed server in FS because it already has a queued SCP: " + tmpServer);
+        this.serverManager.getDeadServers().add(tmpServer);

Review comment:
       > this looks like what's different from my old patch, is that right? have I missed anything else?
   
   Yeah, that's what's different from your old patch.







[GitHub] [hbase] Apache-HBase commented on pull request #2084: HBASE-22263 Master creates duplicate ServerCrashProcedure on initiali…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #2084:
URL: https://github.com/apache/hbase/pull/2084#issuecomment-663954943


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  11m 52s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ branch-1.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   9m 14s |  branch-1.4 passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  branch-1.4 passed with JDK v1.8.0_262  |
   | +1 :green_heart: |  compile  |   0m 45s |  branch-1.4 passed with JDK v1.7.0_272  |
   | +1 :green_heart: |  checkstyle  |   2m  0s |  branch-1.4 passed  |
   | +1 :green_heart: |  shadedjars  |   3m  4s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  branch-1.4 passed with JDK v1.8.0_262  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  branch-1.4 passed with JDK v1.7.0_272  |
   | +0 :ok: |  spotbugs  |   3m 10s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  branch-1.4 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  the patch passed with JDK v1.8.0_262  |
   | +1 :green_heart: |  javac  |   0m 42s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  the patch passed with JDK v1.7.0_272  |
   | +1 :green_heart: |  javac  |   0m 47s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 53s |  hbase-server: The patch generated 15 new + 370 unchanged - 0 fixed = 385 total (was 370)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedjars  |   3m  1s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   2m 23s |  Patch does not cause any errors with Hadoop 2.7.7.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  the patch passed with JDK v1.8.0_262  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  the patch passed with JDK v1.7.0_272  |
   | +1 :green_heart: |  findbugs  |   3m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 143m 41s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate ASF License warnings.  |
   |  |   | 195m 22s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/2084 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 32ed77f8dd25 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-2084/out/precommit/personality/provided.sh |
   | git revision | branch-1.4 / 7ff737d |
   | Default Java | 1.7.0_272 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_262 /usr/lib/jvm/zulu-7-amd64:1.7.0_272 |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/2/artifact/out/diff-checkstyle-hbase-server.txt |
   | whitespace | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/2/artifact/out/whitespace-eol.txt |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/2/artifact/out/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/2/testReport/ |
   | Max. process+thread count | 4159 (vs. ulimit of 10000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/2/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache-HBase commented on pull request #2084: HBASE-22263 Master creates duplicate ServerCrashProcedure on initiali…

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #2084:
URL: https://github.com/apache/hbase/pull/2084#issuecomment-660433985


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  11m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ branch-1.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   9m  3s |  branch-1.4 passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  branch-1.4 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  compile  |   0m 45s |  branch-1.4 passed with JDK v1.7.0_272  |
   | +1 :green_heart: |  checkstyle  |   2m  3s |  branch-1.4 passed  |
   | +1 :green_heart: |  shadedjars  |   3m  8s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  branch-1.4 passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  branch-1.4 passed with JDK v1.7.0_272  |
   | +0 :ok: |  spotbugs  |   3m 10s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  branch-1.4 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javac  |   0m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  the patch passed with JDK v1.7.0_272  |
   | +1 :green_heart: |  javac  |   0m 46s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 55s |  hbase-server: The patch generated 16 new + 637 unchanged - 0 fixed = 653 total (was 637)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedjars  |   3m  1s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   2m 26s |  Patch does not cause any errors with Hadoop 2.7.7.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  the patch passed with JDK v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  the patch passed with JDK v1.7.0_272  |
   | +1 :green_heart: |  findbugs  |   3m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 131m 10s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate ASF License warnings.  |
   |  |   | 182m 16s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hbase.replication.TestReplicationSmallTests |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.12 Server=19.03.12 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/2084 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 53e0f05117f4 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-2084/out/precommit/personality/provided.sh |
   | git revision | branch-1.4 / 7ff737d |
   | Default Java | 1.7.0_272 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_272 |
   | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/1/artifact/out/diff-checkstyle-hbase-server.txt |
   | whitespace | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/1/artifact/out/whitespace-eol.txt |
   | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/1/artifact/out/patch-unit-hbase-server.txt |
   |  Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/1/testReport/ |
   | Max. process+thread count | 3757 (vs. ulimit of 10000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-2084/1/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

