Posted to user@phoenix.apache.org by anil gupta <an...@gmail.com> on 2016/01/07 08:14:43 UTC

Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

Hi All,

I am using Phoenix 4.4 and have created a global secondary index on one
table. I am running a MapReduce job with 20 reducers to load data into
this table (roughly 50 writes/second/reducer). The dataset is only
around 500K rows. My MapReduce job is failing with this exception:
Caused by: org.apache.phoenix.execute.CommitException: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  key=-413539871950113484 region=BI.TABLE,\x80M*\xBFr\xFF\x05\x1DW\x9A`\x00\x19\x0C\xC0\x00X8,1452147216490.83086e8ff78b30f6e6c49e2deba71d6d. Index update failed
    at org.apache.phoenix.execute.MutationState.commit(MutationState.java:444)
    at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459)
    at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456)
    at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:84)
    ... 14 more

It seems like I am hitting
https://issues.apache.org/jira/browse/PHOENIX-1718, but I don't have a
heavy write or read load like wuchengzhi did. I haven't done any tuning
of the Phoenix/HBase configuration yet.

What is the root cause of this error, and what configuration changes
are recommended to address it?
-- 
Thanks & Regards,
Anil Gupta

Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

Posted by anil gupta <an...@gmail.com>.
Hi James,

Thanks for your reply. My problem was resolved by setting
phoenix.coprocessor.maxServerCacheTimeToLiveMs to 3 minutes and
phoenix.upsert.batch.size to 10. I think I can increase
phoenix.upsert.batch.size to a higher value, but I haven't had the
opportunity to try that yet.
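
In configuration terms, that looks something like the sketch below: the
TTL belongs in the region server hbase-site.xml (per James's reply
further down), while the batch size is picked up from the client-side
(MR job) configuration; 3 minutes is 180000 ms.

    <property>
      <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
      <value>180000</value> <!-- 3 minutes, in milliseconds -->
    </property>
    <property>
      <name>phoenix.upsert.batch.size</name>
      <value>10</value>
    </property>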

Thanks,
Anil Gupta


-- 
Thanks & Regards,
Anil Gupta

Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

Posted by James Taylor <ja...@apache.org>.
Hi Anil,
This error occurs when you perform a long-running update on a mutable
table that has a secondary index. Before the update, we make an RPC
that sends index metadata to the region server; the server uses that
cached metadata for the duration of the update to generate the
secondary index rows from the data rows. In your case, the cache entry
is expiring before the update (i.e. your MR job) completes. Try
increasing phoenix.coprocessor.maxServerCacheTimeToLiveMs in the region
server hbase-site.xml. See our Tuning page [1] for more info.
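
For concreteness, the entry in the region server hbase-site.xml looks
something like this (the value is in milliseconds; 180000, i.e. 3
minutes, is just an illustrative choice):

    <property>
      <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
      <value>180000</value>
    </property>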

FWIW, 500K rows would be much faster to insert via our standard UPSERT
statement.
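
For illustration, here is a minimal JDBC sketch of that approach. The
JDBC URL, table name, and columns are hypothetical placeholders, not
the actual schema; with auto-commit off, Phoenix buffers the upserts
client-side and each commit() flushes a batch of mutations (and their
index updates) to the servers.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UpsertLoad {
        public static void main(String[] args) throws Exception {
            // Hypothetical ZK quorum and table; substitute your own.
            try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.setAutoCommit(false); // buffer mutations client-side
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPSERT INTO MY_TABLE (ID, VAL) VALUES (?, ?)")) {
                    for (long i = 0; i < 500000L; i++) {
                        ps.setLong(1, i);
                        ps.setString(2, "val-" + i);
                        ps.executeUpdate();
                        if (i % 1000 == 999) {
                            conn.commit(); // flush a batch, bounding memory
                        }
                    }
                }
                conn.commit(); // flush any remaining rows
            }
        }
    }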

Thanks,
James
[1] https://phoenix.apache.org/tuning.html


Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

Posted by Anil Gupta <an...@gmail.com>.
Bump.. 
Can secondary index committers/experts provide any insight into this? This is one of the features that encouraged us to use Phoenix.
IMO, a global secondary index should be handled as an inverted index table, so I am unable to understand why it is failing on region splits.

Sent from my iPhone
