Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2021/05/27 05:04:59 UTC

[GitHub] [hbase] mymeiyi opened a new pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

mymeiyi opened a new pull request #3318:
URL: https://github.com/apache/hbase/pull/3318


   





[GitHub] [hbase] mymeiyi commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853523470









[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849513847


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  5s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m  6s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 44s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 58s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  12m  3s |  hbase-server in the patch failed.  |
   |  |   |  46m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 6227e03e4dcc 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 63141bf576 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/testReport/ |
   | Max. process+thread count | 646 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] Apache9 commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache9 commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849411246


   Mind explaining a bit about the fix?





[GitHub] [hbase] anoopsjohn commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r640450986



##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithByteBuff.java
##########
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.io.ByteBuffAllocator;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+
+@Category(LargeTests.class)
+public class TestCompactionWithByteBuff {

Review comment:
       Does this test guarantee that early release of the BB happens and so possibly hits that issue? I don't think so. Maybe I am missing something!

##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithByteBuff.java
##########
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.io.ByteBuffAllocator;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+
+@Category(LargeTests.class)
+public class TestCompactionWithByteBuff {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+      HBaseClassTestRule.forClass(TestCompactionWithByteBuff.class);
+  @Rule
+  public TestName name = new TestName();
+
+  private static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static Configuration conf = TEST_UTIL.getConfiguration();
+  private static Admin admin = null;
+
+  private static final byte[] COLUMN = Bytes.toBytes("A");
+  private static final int REGION_COUNT = 5;
+  private static final long ROW_COUNT = 200;
+  private static final int ROW_LENGTH = 20;
+  private static final int VALUE_LENGTH = 5000;
+
+  @BeforeClass
+  public static void setupBeforeClass() throws Exception {
+    conf.setBoolean(ByteBuffAllocator.ALLOCATOR_POOL_ENABLED_KEY, true);
+    conf.setInt(ByteBuffAllocator.BUFFER_SIZE_KEY, 1024 * 5);
+    conf.setInt(CompactSplit.SMALL_COMPACTION_THREADS, REGION_COUNT * 2);
+    conf.setInt(CompactSplit.LARGE_COMPACTION_THREADS, REGION_COUNT * 2);
+    conf.set(HConstants.BUCKET_CACHE_IOENGINE_KEY, "offheap");

Review comment:
       Is the Bucket Cache actually coming into the picture here? I think not. We write some data and flush; 2 files are created, and then we compact them. What we wanted is for the compaction to read the file blocks from the BC, but caching data on write is false by default (hbase.rs.cacheblocksonwrite).
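   (A minimal, hypothetical illustration of the point above, not part of the patch under review: the extra test setting that would be needed for compaction reads to actually hit the bucket cache. The key name is the one given in the comment.)

       // Hypothetical addition to the test setup; this key is false by default, so flushed
       // blocks are not cached on write and a later compaction will not read them from the BC.
       conf.setBoolean("hbase.rs.cacheblocksonwrite", true);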

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,25 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {

Review comment:
       nit: This seems like only a formatting change. Can you avoid it?







[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849431909


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  7s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 11s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m  9s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 153m 56s |  hbase-server in the patch failed.  |
   |  |   | 185m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 8ab55bf9860a 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / a22e418cf6 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/testReport/ |
   | Max. process+thread count | 3869 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] anoopsjohn commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853616762


   Thanks Duo. Yes, I got it. I was looking at that lastCleanCell ref since that was what was initially referred to. @mymeiyi's reply made it very clear. Yes, the change looks good.





[GitHub] [hbase] mymeiyi commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r644458410



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       Do you mean
   for (Cell c : cells) {
     writer.append(c);
     if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
       if (lastCleanCell != null) {
         PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
         lastCleanCell = null;
       }
       ((ShipperListener) writer).beforeShipped();
       kvs.shipped();
     }
   }
    I do not think this can fix this issue.







[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853699192


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 45s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 17s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m  4s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 19s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 32s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate ASF License warnings.  |
   |  |   |  52m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/5/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 8093e724d6e6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 426c3c16f3 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/5/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] anoopsjohn commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r644466713



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       Yes.
   Before calling shipped (which will release the BBs) we are setting back the seqId. The beforeShipped call makes sure that other layers which hold a cell ref will copy those cells. Now, after resetting the seqId, we set lastCleanCell = null, so there is no chance of calling setSequenceId again on the same Cell. If you have a repro, will you be able to check, please?







[GitHub] [hbase] mymeiyi commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853660197


   > Hi, Anoop, after reviewing the code, I do not think the problem is `lastCleanCell`.
   > 
   > The problem here is that, after calling `scanner.next(cells, scannerContext)`, we then write these cells out to writer. These cells will reference HFileBlock inside the `kvs`, and after calling `kvs.shipped()`, all these cells will be invalid because we will release all the HFileBlock which are referenced by them.
   > 
   > So here we can not call `kvs.shipped` inside the loop. The `writer.beforeShipped` call will only clone the cells which have been written to the writer but haven't been flushed out by the writer. But `kvs.shipped` will also release the HFileBlock for the cells which haven't been written to the writer yet, so calling writer.beforeShipped does not help here, in the next loop round we will write an invalid cell to writer and crash the regionserver.
   > 
   > @mymeiyi Do I understand correctly?
   > 
   > Thanks.
   
   Yes, this explanation is very clear. 





[GitHub] [hbase] anoopsjohn commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-850453676


   > I wrote up a case of how this error could happen in the doc: https://docs.google.com/document/d/1_3HXgOSGHsHFqLiOWUsE3m6pHKjebSwrRObcPTqkSTE/edit?usp=sharing, please have a look, thanks @Apache9 @anoopsjohn
   
   > The fix is simple: move the shipped method out of the for loop
   
   Yes, the issue is very clear to me. Actually, in the code we clone the cells for which we keep a ref; it seems that was missed in this place. BTW, in your 1st patch the clone of the Cell was there but it is removed now? And instead you did this move of shipped out of that for loop?





[GitHub] [hbase] Apache9 commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache9 commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853608676


   Hi, Anoop, after reviewing the code, I do not think the problem is `lastCleanCell`.
   
   The problem here is that, after calling `scanner.next(cells, scannerContext)`, we then write these cells out to writer. These cells will reference HFileBlock inside the `kvs`, and after calling `kvs.shipped()`, all these cells will be invalid because we will release all the HFileBlock which are referenced by them.
   
   So here we can not call `kvs.shipped` inside the loop. The `writer.beforeShipped` call will only clone the cells which have been written to the writer but haven't been flushed out by the writer. But `kvs.shipped` will also release the HFileBlock for the cells which haven't been written to the writer yet, so calling writer.beforeShipped does not help here, in the next loop round we will write an invalid cell to writer and crash the regionserver.
   
   @mymeiyi Do I understand correctly?
   
   Thanks.
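   A minimal sketch of the loop shape described above, pieced together from the diff hunks quoted elsewhere in this thread; the outer scaffolding (hasMore, cells.clear(), the do/while) is assumed for illustration and is not copied verbatim from Compactor#performCompaction:

       do {
         hasMore = scanner.next(cells, scannerContext);
         for (Cell c : cells) {
           writer.append(c);
           // Before the fix, the beforeShipped()/shipped() block sat here, inside the
           // for loop. kvs.shipped() releases the SHARED HFileBlocks backing the cells,
           // so cells of this batch not yet appended became invalid and the next
           // writer.append(c) could touch freed off-heap memory and crash the JVM.
         }
         // After the fix: ship only once the whole batch has been written out.
         if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
           if (lastCleanCell != null) {
             // HBASE-16931: restore the sequence id before beforeShipped clones the cells.
             PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
             lastCleanCell = null;
           }
           ((ShipperListener) writer).beforeShipped(); // clone cells the writer still holds
           kvs.shipped();                              // now safe to release the blocks
           bytesWrittenProgressForShippedCall = 0;
         }
         cells.clear();
       } while (hasMore);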





[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849518505


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 29s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 39s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 18s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 28s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 28s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 36s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate ASF License warnings.  |
   |  |   |  53m 48s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 33bf26907363 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 63141bf576 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] mymeiyi merged pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi merged pull request #3318:
URL: https://github.com/apache/hbase/pull/3318


   





[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853535772


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 20s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m  8s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  8s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  8s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 42s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 51s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 17s |  The patch does not generate ASF License warnings.  |
   |  |   |  53m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/4/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 41ba0197825e 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 335305e0cf |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/4/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] mymeiyi commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r640459707



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,25 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {

Review comment:
       It is not a format change; this code was moved out of the for loop.

##########
File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithByteBuff.java
##########
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.io.ByteBuffAllocator;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+
+@Category(LargeTests.class)
+public class TestCompactionWithByteBuff {
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+      HBaseClassTestRule.forClass(TestCompactionWithByteBuff.class);
+  @Rule
+  public TestName name = new TestName();
+
+  private static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static Configuration conf = TEST_UTIL.getConfiguration();
+  private static Admin admin = null;
+
+  private static final byte[] COLUMN = Bytes.toBytes("A");
+  private static final int REGION_COUNT = 5;
+  private static final long ROW_COUNT = 200;
+  private static final int ROW_LENGTH = 20;
+  private static final int VALUE_LENGTH = 5000;
+
+  @BeforeClass
+  public static void setupBeforeClass() throws Exception {
+    conf.setBoolean(ByteBuffAllocator.ALLOCATOR_POOL_ENABLED_KEY, true);
+    conf.setInt(ByteBuffAllocator.BUFFER_SIZE_KEY, 1024 * 5);
+    conf.setInt(CompactSplit.SMALL_COMPACTION_THREADS, REGION_COUNT * 2);
+    conf.setInt(CompactSplit.LARGE_COMPACTION_THREADS, REGION_COUNT * 2);
+    conf.set(HConstants.BUCKET_CACHE_IOENGINE_KEY, "offheap");

Review comment:
       It is not related to the BC.
   It is set to offheap to make sure the blocks are read into OffheapByteBuffer; see HFileReaderImpl#shouldUseHeap.
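
   (For reference, the two settings from the test setup quoted above that this reply is pointing at, restated rather than newly recommended: the pooled allocator hands out reference-counted read buffers, and the offheap ioengine forces the read path into off-heap buffers, which is the precondition for the crash being reproduced.)

       conf.setBoolean(ByteBuffAllocator.ALLOCATOR_POOL_ENABLED_KEY, true); // pooled, ref-counted read buffers
       conf.set(HConstants.BUCKET_CACHE_IOENGINE_KEY, "offheap"); // read path uses off-heap buffers (HFileReaderImpl#shouldUseHeap)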
   







[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849548768


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  7s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 36s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m  9s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 34s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 34s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 21s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate ASF License warnings.  |
   |  |   |  52m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 6011e13b8a6c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 63141bf576 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] mymeiyi commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-851846293


   Are there any other problems with this issue? @Apache9 @anoopsjohn





[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849708684


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 11s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m 13s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 20s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 18s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 240m 57s |  hbase-server in the patch failed.  |
   |  |   | 277m 23s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 8f0dee8df4f1 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 63141bf576 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/testReport/ |
   | Max. process+thread count | 2712 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hbase] mymeiyi commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849491288


   I wrote up a case of how this error could happen in the doc: https://docs.google.com/document/d/1_3HXgOSGHsHFqLiOWUsE3m6pHKjebSwrRObcPTqkSTE/edit?usp=sharing, please have a look, thanks @Apache9 @anoopsjohn





[GitHub] [hbase] anoopsjohn commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r644451703



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       What I am saying is that setting this to null is enough to solve the problem you saw.
   You do not have to move the block outside of the for loop, so this single-line change will fix it. You have a repro; will you be able to confirm, please? That way we can avoid a larger change.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849483851


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  14m 33s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 54s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 51s |  master passed  |
   | +1 :green_heart: |  shadedjars  |  11m 35s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 10s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 205m 28s |  hbase-server in the patch failed.  |
   |  |   | 261m 41s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4e42f9079134 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / a22e418cf6 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/testReport/ |
   | Max. process+thread count | 2686 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache9 commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853608676


   Hi, Anoop, after reviewing the code, I do not think the problem is `lastCleanCell`.
   
   The problem here is that, after calling `scanner.next(cells, scannerContext)`, we then write these cells out to the writer. These cells reference HFileBlocks inside the `kvs`, and after calling `kvs.shipped()` all of them become invalid, because we release all the HFileBlocks they reference.
   
   So here we cannot call `kvs.shipped` inside the loop. The `writer.beforeShipped` call only clones the cells which have been written to the writer but have not yet been flushed out by it. But `kvs.shipped` also releases the HFileBlocks backing the cells which have not been written to the writer yet, so calling `writer.beforeShipped` does not help here: in the next loop round we will write an invalid cell to the writer and crash the regionserver.
   
   @mymeiyi Do I understand correctly?
   
   Thanks.
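   
   For illustration, here is a minimal sketch of the reshaped loop, using only the identifiers that appear in the quoted Compactor.performCompaction hunk (scanner, scannerContext, cells, writer, kvs, lastCleanCell, lastCleanCellSeqId, shippedCallSizeLimit). Throughput control, progress tracking, exception handling and the exact size accounting are elided or approximated, so treat this as a sketch of the idea rather than the committed code:
   
   do {
     hasMore = scanner.next(cells, scannerContext);
     for (Cell c : cells) {
       // Cells returned by scanner.next() may still be backed by SHARED
       // HFileBlocks, so nothing is released while this batch is appended.
       writer.append(c);
       // Approximate size accounting (the real code tracks several counters).
       bytesWrittenProgressForShippedCall += PrivateCellUtil.estimatedSerializedSizeOf(c);
     }
     // Only after the whole batch has been appended is it safe to release the
     // backing blocks: beforeShipped() clones whatever the writer still holds,
     // and shipped() then clears HFileScannerImpl#prevBlocks.
     if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
       if (lastCleanCell != null) {
         PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
         lastCleanCell = null;
       }
       ((ShipperListener) writer).beforeShipped();
       kvs.shipped();
       bytesWrittenProgressForShippedCall = 0;
     }
     cells.clear();
   } while (hasMore);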


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] anoopsjohn commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r644451703



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       What I am saying is that setting this to null is enough to solve the problem you saw.
   You do not have to move the block outside of the for loop, so this single-line change will fix it. You have a repro; will you be able to confirm, please? That way we can avoid a larger change.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       Yes.
   Before calling shipped (which will release the backing ByteBuffs) we set back the seqId. The beforeShipped call makes sure that other layers which hold cell refs will copy those cells. Then, after resetting the seqId, we set lastCleanCell = null, so there is no chance of calling setSequenceId again on the same Cell. If you have a repro, will you be able to check, please?

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       I see it now; you have a point. +1 then.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       BTW, excellent investigation work to pinpoint the issue.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853581429


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 40s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  3s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 22s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 15s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 152m 14s |  hbase-server in the patch passed.  |
   |  |   | 183m 48s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 12cbad7d7ee3 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 335305e0cf |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/4/testReport/ |
   | Max. process+thread count | 4636 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/4/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849516463


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 34s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  0s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  master passed  |
   | +1 :green_heart: |  shadedjars  |  10m 29s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 46s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  13m 10s |  hbase-server in the patch failed.  |
   |  |   |  50m 29s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 593c58ccff3a 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 63141bf576 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/testReport/ |
   | Max. process+thread count | 650 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/2/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] mymeiyi commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r640336830



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,25 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }

Review comment:
       Move this out of the for loop, because `kvs.shipped()` will release the prevBlocks held by the HFile reader, but some cells may not yet have been written out by `writer.append(c)` at line 441.
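   
   A compressed sketch of the hazard in the old shape (same identifiers as the hunk above; unrelated lines dropped):
   
   for (Cell c : cells) {
     writer.append(c);                               // cells up to c are written
     if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
       ((ShipperListener) writer).beforeShipped();   // clones only what the writer holds
       kvs.shipped();                                // releases the SHARED blocks backing ALL
                                                     // cells, including those not yet appended
     }
   }
   // next iteration: writer.append() reads freed block memory -> JVM crash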




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] anoopsjohn commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853616762


   Thanks Duo. Yes, I got it. I was looking around that lastCleanCell ref since that was what was referred to initially. @mymeiyi's reply made it very clear. The change looks good.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853780203


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 43s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 15s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 17s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 151m 17s |  hbase-server in the patch passed.  |
   |  |   | 182m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 3f0ca3f7c040 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 426c3c16f3 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/5/testReport/ |
   | Max. process+thread count | 4256 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/5/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] mymeiyi commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r644458410



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       Do you mean
   `for (Cell c : cells) {
      writer.append(c);
      if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
        if (lastCleanCell != null) {
          PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
          lastCleanCell = null;
        }
        ((ShipperListener) writer).beforeShipped();
        kvs.shipped();
      }
    }`?
    I do not think this can fix this issue.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] mymeiyi merged pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi merged pull request #3318:
URL: https://github.com/apache/hbase/pull/3318


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] anoopsjohn commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r644512740



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       I see it now; you have a point. +1 then.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849688976


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 44s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 33s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  master passed  |
   | +1 :green_heart: |  shadedjars  |  10m  4s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 55s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 215m 28s |  hbase-server in the patch failed.  |
   |  |   | 252m  9s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 6a3daed19729 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 63141bf576 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/testReport/ |
   | Max. process+thread count | 2813 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/3/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-849347563


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  7s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 21s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m  7s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 14s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 49s |  Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 49s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 15s |  The patch does not generate ASF License warnings.  |
   |  |   |  53m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/artifact/yetus-general-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux a07eb9aebbed 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / a22e418cf6 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/1/console |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] anoopsjohn commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r644512914



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       BTW, excellent investigation work to pinpoint the issue.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
Apache-HBase commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853826396


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 26s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 39s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  master passed  |
   | +1 :green_heart: |  shadedjars  |  10m 37s |  branch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 42s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 46s |  patch has no errors when building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 228m 42s |  hbase-server in the patch passed.  |
   |  |   | 268m 40s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/3318 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 93428b748912 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 426c3c16f3 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/5/testReport/ |
   | Max. process+thread count | 3006 (vs. ulimit of 30000) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3318/5/console |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] mymeiyi commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-853523470


   > So the latest version change is moving the check for possible shipped() called out of the for loop. The cons here is that when we have wide rows with so many cells in it, this will delay the process of release of blocks and so release from cache. That is why it was kept in for loop.
   
   Yes, the block release may be delayed. There is a configuration named "hbase.hstore.compaction.kv.max" (default value 10) that limits the number of cells per scan batch in compaction. But if a single cell is huge, we cannot avoid the delay.
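   
   For context, a sketch of how that batch bound is wired (an assumption based on the usual ScannerContext setup; the exact constants and wiring may differ between versions):
   
   // Assumed wiring, for illustration only.
   int compactionKVMax = conf.getInt("hbase.hstore.compaction.kv.max", 10);
   ScannerContext scannerContext =
       ScannerContext.newBuilder().setBatchLimit(compactionKVMax).build();
   // scanner.next(cells, scannerContext) then returns at most compactionKVMax
   // cells per batch, which bounds how long kvs.shipped() is deferred once the
   // check sits outside the inner for loop -- unless individual cells are huge.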
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] mymeiyi commented on a change in pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
mymeiyi commented on a change in pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#discussion_r644458410



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       Do you mean
   `for (Cell c : cells) {
      writer.append(c);
      if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
        if (lastCleanCell != null) {
          PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
          lastCleanCell = null;
        }
        ((ShipperListener) writer).beforeShipped();
        kvs.shipped();
      }
    }`?
    I do not think this can fix this issue.

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       Do you mean
   `for (Cell c : cells) {
               writer.append(c);
               if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
                 if (lastCleanCell != null) {
                   PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
                   lastCleanCell = null;
                 }
                 ((ShipperListener) writer).beforeShipped();
                 kvs.shipped();
               }
             }`
    I do not think this can fix this issus. 

##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       Do you mean
   `for (Cell c : cells) {
     writer.append(c);
     if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
       if (lastCleanCell != null) {
         PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
         lastCleanCell = null;
       }
       ((ShipperListener) writer).beforeShipped();
       kvs.shipped();
     }
   }`
    I do not think this can fix this issue.


##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java
##########
@@ -451,25 +451,26 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
             progress.cancel();
             return false;
           }
-          if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
-            if (lastCleanCell != null) {
-              // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
-              // ShipperListener will do a clone of the last cells it refer, so need to set back
-              // sequence id before ShipperListener.beforeShipped
-              PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
-            }
-            // Clone the cells that are in the writer so that they are freed of references,
-            // if they are holding any.
-            ((ShipperListener) writer).beforeShipped();
-            // The SHARED block references, being read for compaction, will be kept in prevBlocks
-            // list(See HFileScannerImpl#prevBlocks). In case of scan flow, after each set of cells
-            // being returned to client, we will call shipped() which can clear this list. Here by
-            // we are doing the similar thing. In between the compaction (after every N cells
-            // written with collective size of 'shippedCallSizeLimit') we will call shipped which
-            // may clear prevBlocks list.
-            kvs.shipped();
-            bytesWrittenProgressForShippedCall = 0;
+        }
+        if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
+          if (lastCleanCell != null) {
+            // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
+            // ShipperListener will do a clone of the last cells it refer, so need to set back
+            // sequence id before ShipperListener.beforeShipped
+            PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
+            lastCleanCell = null;

Review comment:
       With this fix (only setting lastCleanCell = null), the UT still fails.
   
   The main problem is not related to lastCleanCell.
   It is that when `writer.append(c)` is called, the cell c has already been released by the previous `kvs.shipped()`.
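   
   To make that ordering concrete, here is a minimal, self-contained Java sketch of the hazard (it uses no HBase classes; ToyBlock, ToyCell, ToyWriter and ToyShipper are made-up stand-ins that only model the behaviour described above: a SHARED backing block, cells that point into it without copying, and a shipped() call that releases it):
   
   import java.util.Arrays;
   import java.util.List;
   
   public class ShippedMidBatchSketch {
   
     /** Models one SHARED backing block that can be released exactly once. */
     static final class ToyBlock {
       private final byte[] data = "row1/cf:q/value".getBytes();
       private boolean released;
   
       byte[] read() {
         if (released) {
           // In the real code path this is not an exception but a read of already
           // recycled buffer memory, which is what crashes the JVM.
           throw new IllegalStateException("use-after-release: block was already shipped");
         }
         return data;
       }
   
       void release() {
         released = true;
       }
     }
   
     /** A cell that does not copy its bytes; it only points into the shared block. */
     static final class ToyCell {
       private final ToyBlock block;
   
       ToyCell(ToyBlock block) {
         this.block = block;
       }
   
       byte[] materialize() {
         return block.read();
       }
     }
   
     /** Stand-in for the compaction output writer: appending reads the cell's backing bytes. */
     static final class ToyWriter {
       void append(ToyCell c) {
         byte[] copy = Arrays.copyOf(c.materialize(), c.materialize().length);
         System.out.println("appended " + copy.length + " bytes");
       }
     }
   
     /** Stand-in for the scanner acting as shipper: shipped() releases the blocks read so far. */
     static final class ToyShipper {
       private final ToyBlock block;
   
       ToyShipper(ToyBlock block) {
         this.block = block;
       }
   
       void shipped() {
         block.release();
       }
     }
   
     public static void main(String[] args) {
       ToyBlock block = new ToyBlock();
       List<ToyCell> batch = Arrays.asList(new ToyCell(block), new ToyCell(block));
       ToyWriter writer = new ToyWriter();
       ToyShipper kvs = new ToyShipper(block);
   
       // true  -> ship inside the loop (old placement): the second append reads a released block
       // false -> ship only after the whole batch has been appended (the moved placement)
       boolean shipInsideLoop = true;
   
       try {
         for (ToyCell c : batch) {
           writer.append(c);
           if (shipInsideLoop) {
             kvs.shipped(); // releases the block the next cell of this batch still points to
           }
         }
         if (!shipInsideLoop) {
           kvs.shipped(); // safe: every cell of the batch has already been appended
         }
         System.out.println("batch completed");
       } catch (IllegalStateException e) {
         System.out.println("failed mid-batch: " + e.getMessage());
       }
     }
   }
   
   With shipInsideLoop = true the second append fails because the shipped() call in the first iteration released the block that cell still points to; with shipInsideLoop = false every cell of the batch is appended before anything is released. In the real Compactor the released references are the SHARED blocks tracked in HFileScannerImpl#prevBlocks, so the symptom is a JVM crash rather than a Java exception.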




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [hbase] anoopsjohn commented on pull request #3318: HBASE-25929 RegionServer JVM crash when compaction

Posted by GitBox <gi...@apache.org>.
anoopsjohn commented on pull request #3318:
URL: https://github.com/apache/hbase/pull/3318#issuecomment-852002682


   So the latest version of the change moves the check for a possible shipped() call out of the for loop.  The downside is that when we have wide rows with many cells, this delays the release of blocks and hence their eviction from the cache.  That is why the check was kept inside the for loop.
   But the bug is also a big concern.
   Now looking at why and when we use this 'lastCleanCell'.  It is used to reset the seqId on a cell.  We set the seqId to 0 while passing the cell to the writer (the output writer for the new compacted file(s)), but we then try to reset that value on the original cell (see details in HBASE-16931).
   PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId) is done inside the for loop before the shipped call, as well as outside the for loop.  If the call happened only within the for() loop, there would be no chance of the corruption.  But as it stands, the call can happen on the SAME cell object both inside and outside of the loop.
   So if we keep the code as is (not moving the shipped call out of the for loop) and do something like:
   if (kvs != null && bytesWrittenProgressForShippedCall > shippedCallSizeLimit) {
     if (lastCleanCell != null) {
       // HBASE-16931, set back sequence id to avoid affecting scan order unexpectedly.
       // ShipperListener will do a clone of the last cells it refer, so need to set back
       // sequence id before ShipperListener.beforeShipped
       PrivateCellUtil.setSequenceId(lastCleanCell, lastCleanCellSeqId);
       lastCleanCell = null;  // The reset of the seqId for this cell object happened already. Just nullify it
     }
   We won't get into any issue then.  Correct?  Seems that would be the best way?
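   
   A tiny self-contained sketch of just that nullify pattern (plain Java; FakeCell is a made-up stand-in, not an HBase Cell, and it assumes, as the proposal implies, that the second reset site is also guarded by a lastCleanCell != null check): once the reference is nulled right after the first reset, the later guarded reset can no longer touch the same cell object again.
   
   public class NullifyAfterResetSketch {
   
     /** Made-up stand-in for a cell whose sequence id can be set back. */
     static final class FakeCell {
       long seqId;
   
       FakeCell(long seqId) {
         this.seqId = seqId;
       }
     }
   
     public static void main(String[] args) {
       FakeCell cell = new FakeCell(0);   // seqId was cleaned to 0 before the cell went to the writer
       long savedSeqId = 42L;             // the original seqId that was remembered
   
       FakeCell lastCleanCell = cell;
   
       // First reset site (inside the loop, just before beforeShipped()/shipped()):
       if (lastCleanCell != null) {
         lastCleanCell.seqId = savedSeqId;
         lastCleanCell = null;            // the proposed change: nullify right after the reset
       }
   
       // Second reset site (outside the loop, also null-guarded). With the nullify above
       // this becomes a no-op, so the same cell object is never touched again after its
       // backing block may already have been shipped.
       if (lastCleanCell != null) {
         lastCleanCell.seqId = savedSeqId;
       }
   
       System.out.println("seqId after both sites: " + cell.seqId); // 42, set exactly once
     }
   }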


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org