Posted to dev@phoenix.apache.org by "Jeffrey Zhong (JIRA)" <ji...@apache.org> on 2014/05/29 01:58:01 UTC
[jira] [Commented] (PHOENIX-1005) upsert data error after drop index
[ https://issues.apache.org/jira/browse/PHOENIX-1005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14011848#comment-14011848 ]
Jeffrey Zhong commented on PHOENIX-1005:
----------------------------------------
[~futureage] You're right. The cached metadata for the main table still holds a reference to the deleted index table, so the reference is not removed when the index is dropped. Let me create a patch if you don't have one yet.
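For reference, a minimal sequence that should reproduce the report (table, column, and index names here are illustrative, not taken from the original reporter's schema):

```sql
-- Create a data table and a secondary index on it
CREATE TABLE T (ID BIGINT NOT NULL PRIMARY KEY, V VARCHAR);
CREATE INDEX IDXT ON T (V);

-- Drop the index; the physical IDXT table is deleted from HBase
DROP INDEX IDXT ON T;

-- With the stale cache, this upsert still tries to maintain IDXT,
-- triggering TableNotFoundException on the server side
UPSERT INTO T VALUES (1, 'a');
```

Because the server-side index maintenance still sees IDXT in the cached metadata, the write path attempts to open the now-missing HBase table, which is what produces the TableNotFoundException in the log below.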
> upsert data error after drop index
> ----------------------------------
>
> Key: PHOENIX-1005
> URL: https://issues.apache.org/jira/browse/PHOENIX-1005
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: mumu
> Assignee: Jeffrey Zhong
>
> A table (T) has an index table (IDXT). After I drop IDXT and continue to upsert data into T, an error is raised saying the index IDXT cannot be updated, and then Phoenix shuts the region server down.
> The Phoenix client uses a cache to store table metadata; I think the bug is that this cache is not updated after the index table is dropped.
> there is some log:
> 2014-05-23 11:13:48,270 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
> org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: IDXT, row=IDXT,,99999999999999
> at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:151)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1060)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1122)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1002)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:959)
> at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:39)
> at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:251)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:243)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment$HTableWrapper.<init>(CoprocessorHost.java:370)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.getTable(CoprocessorHost.java:696)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.getTable(CoprocessorHost.java:685)
> at org.apache.phoenix.hbase.index.table.CoprocessorHTableFactory.getTable(CoprocessorHTableFactory.java:61)
> at org.apache.phoenix.hbase.index.table.CachingHTableFactory.getTable(CachingHTableFactory.java:99)
> at org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:154)
> at org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:139)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> 2014-05-23 11:13:48,325 ERROR org.apache.phoenix.hbase.index.parallel.BaseTaskRunner: Found a failed task because: org.apache.phoenix.hbase.index.exception.SingleIndexWriteFailureException: IDXT
> ......
> ERROR org.apache.phoenix.hbase.index.write.KillServerOnFailurePolicy: Could not update the index table, killing server region because couldn't write to an index table
--
This message was sent by Atlassian JIRA
(v6.2#6252)