Posted to issues@phoenix.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2020/12/01 01:05:00 UTC

[jira] [Commented] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

    [ https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241171#comment-17241171 ] 

ASF GitHub Bot commented on PHOENIX-5860:
-----------------------------------------

joshelser commented on a change in pull request #874:
URL: https://github.com/apache/phoenix/pull/874#discussion_r533005013



##########
File path: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
##########
@@ -1430,4 +1430,17 @@ public InternalScanner run() throws Exception {
         }
         return s;
     }
+
+    /**
+     * Rolls back after a failed split: sets isRegionClosingOrSplitting back to false
+     * so that the region accepts writes again.
+     * @param ctx
+     * @throws IOException
+     */
+    @Override
+    public void preRollBackSplit(ObserverContext<RegionCoprocessorEnvironment> ctx) throws IOException {

Review comment:
    > HBase 2.x does not have this issue because, when a split fails in 2.x, the SplitTableRegionProcedure rollbackState method assigns and opens the region again, so isRegionClosingOrSplitting is false.
   
    Well, I think it's more accurate to say "HBase 2.x doesn't apply because splits are executed by the master". This coprocessor, which runs inside a RegionServer, would never be invoked.
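    For reference, a minimal sketch of what such an override could look like in
    UngroupedAggregateRegionObserver, assuming the observer tracks split/close state in a
    boolean field named isRegionClosingOrSplitting guarded by a lock object (names taken
    from the JIRA description; the actual body of the method is cut off in this excerpt):

        // Illustrative sketch only -- the field and the lock are assumptions based on
        // the JIRA description, not the patch itself.
        @Override
        public void preRollBackSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
                throws IOException {
            // The split failed and is being rolled back, so the region stays online on
            // this RegionServer; clear the flag so writes from scans are accepted again.
            synchronized (lock) {
                isRegionClosingOrSplitting = false;
            }
        }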




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> Throw exception which region is closing or splitting when delete data
> ---------------------------------------------------------------------
>
>                 Key: PHOENIX-5860
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5860
>             Project: Phoenix
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 4.13.1, 4.x
>            Reporter: Chao Wang
>            Assignee: Chao Wang
>            Priority: Major
>             Fix For: 4.16.0
>
>         Attachments: PHOENIX-5860-4.x.001.patch, PHOENIX-5860-4.x.patch, PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, deleting data runs through the UngroupedAggregateRegionObserver class on the server side. This class checks whether isRegionClosingOrSplitting is true, and if so throws new IOException("Temporarily unable to write from scan because region is closing or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with isRegionClosingOrSplitting set to false. Before a region split, the flag is set to true. But if the split fails, the split is rolled back without resetting isRegionClosingOrSplitting to false, so from then on every write operation keeps throwing "Temporarily unable to write from scan because region is closing or splitting".
> We should therefore set isRegionClosingOrSplitting back to false in the preRollBackSplit hook of UngroupedAggregateRegionObserver.
> A simple test: a data table split fails and is rolled back successfully, but deleting data afterwards always throws the exception (a minimal reproduction sketch follows this report).
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception and is rolled back
>  # split the region with the hbase shell
>  # check the regionserver log to confirm the split failed and the rollback succeeded
>  # delete data with phoenix sqlline.py, which throws the following exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan because region is closing or splitting
>      at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>      at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>      at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>      at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>      at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>      ... 5 more
>
>      at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>      at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>      at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>      at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>      at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>      at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>      at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>      at org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498)
>      at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
>      at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
>      at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>      at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>      at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>      at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>      at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>      at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>      at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>      at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>      at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>      at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>      at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
>      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
>      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>      at org.apache.spark.scheduler.Task.run(Task.scala:99)
>      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
>      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>      at java.lang.Thread.run(Thread.java:748)
>  
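The delete in step 6 can also be reproduced through the Phoenix JDBC driver, which is what the Spark executor frames in the trace above are doing. A minimal sketch, assuming a hypothetical table MY_TABLE and ZooKeeper quorum zk1:2181 (both placeholders, not taken from the report):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DeleteRepro {
        public static void main(String[] args) throws Exception {
            // Placeholder JDBC URL; substitute the real ZooKeeper quorum.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1:2181")) {
                // With auto-commit enabled, Phoenix can push the delete to the server,
                // where it runs through UngroupedAggregateRegionObserver.
                conn.setAutoCommit(true);
                // If a failed split left isRegionClosingOrSplitting set to true on the
                // region, this statement fails with "Temporarily unable to write from
                // scan because region is closing or splitting".
                int deleted = conn.createStatement().executeUpdate(
                        "DELETE FROM MY_TABLE WHERE ID < 1000");
                System.out.println("Deleted rows: " + deleted);
            }
        }
    }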



--
This message was sent by Atlassian Jira
(v8.3.4#803005)