Posted to user@phoenix.apache.org by Jaroslav Šnajdr <js...@gmail.com> on 2016/02/24 15:46:58 UTC

Cache of region boundaries are out of date - during index creation

Hello everyone,

While creating an index on my Phoenix table:

CREATE LOCAL INDEX idx_media_next_update_at ON media
(next_metadata_update_at);


I'm getting an exception every time the command is run, after it's been
running for a while:


Error: ERROR 1108 (XCL08): Cache of region boundaries are out of date. (state=XCL08,code=1108)

org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 (XCL08): Cache of region boundaries are out of date.
    at org.apache.phoenix.exception.SQLExceptionCode$13.newException(SQLExceptionCode.java:312)
    at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
    at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:131)
    at org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:115)
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:104)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:611)
    at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
    at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
    at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
    at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
    at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
    at org.apache.phoenix.schema.MetaDataClient$2.execute(MetaDataClient.java:1034)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:2132)
    at org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1059)
    at org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1348)
    at org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:322)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:313)
    at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1435)
    at sqlline.Commands.execute(Commands.java:822)


Any suggestions why this is happening? While the command runs, nothing
else is accessing HBase, so there should be no reason for regions to
split. I'm using Phoenix 4.6.0-HBase-1.1.

Creating a global index (i.e., removing the LOCAL keyword) leads to the
same result. In fact, we're getting this error on many other queries that
use scans and run for a longer period of time (i.e., more than a few
seconds).
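For what it's worth, Phoenix's scan iterators are supposed to catch a stale-boundary error internally, refresh the cached region boundaries, and re-plan the scan; when the exception escapes to the client, wrapping the statement in a retry loop is a common client-side workaround. A minimal, generic sketch of that pattern (the `run_scan` callable and the exception class here are placeholders, not Phoenix API):

```python
class StaleRegionBoundaryCacheError(Exception):
    """Placeholder standing in for Phoenix's StaleRegionBoundaryCacheException."""

def run_with_retry(run_scan, max_attempts=3):
    """Re-run a scan callable when the region boundary cache was stale.

    On a real cluster, each retry would re-fetch region boundaries before
    re-issuing the query; here we only model the retry control flow.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return run_scan()
        except StaleRegionBoundaryCacheError:
            if attempt == max_attempts:
                # Give up after the last attempt and surface the error.
                raise
```

This only masks the symptom, of course; if the error recurs on every long scan, the cached metadata itself (e.g. stale guideposts in SYSTEM.STATS) is the thing to investigate.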

Can this be an indication that there is something wrong with the underlying
HBase cluster? Some corruption or misconfiguration? hbck reports the tables
as healthy and there is no other indication that something might be wrong.

Jarda Snajdr

Re: Cache of region boundaries are out of date - during index creation

Posted by Jaroslav Šnajdr <js...@gmail.com>.
Hello Ankit,

Thanks a lot! Cleaning the SYSTEM.STATS table solved the problem instantly.
The exception disappeared.

Jarda


On Thu, Feb 25, 2016 at 9:16 AM, Ankit Singhal <an...@gmail.com>
wrote:

> Can you try after truncating the SYSTEM.STATS table, or deleting records of
> the parent table only from SYSTEM.STATS, like below?
>
> DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME='media';
>
> Regards,
> Ankit Singhal

Re: Cache of region boundaries are out of date - during index creation

Posted by Ankit Singhal <an...@gmail.com>.
Can you try after truncating the SYSTEM.STATS table, or deleting records of
the parent table only from SYSTEM.STATS, like below?

DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME='media';
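If this cleanup needs to be scripted for several tables, a small helper that just renders the statement for a given physical table name could look like this (a hypothetical helper doing plain string building, no Phoenix connection involved; note Phoenix's DELETE grammar takes no `*`):

```python
def stats_cleanup_sql(physical_name: str) -> str:
    """Build the SYSTEM.STATS cleanup statement for one physical table."""
    # Escape embedded single quotes so the literal stays valid SQL.
    escaped = physical_name.replace("'", "''")
    # Phoenix DELETE takes no column list, so there is no '*' after DELETE.
    return f"DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME='{escaped}'"
```

After the stale rows are gone, Phoenix can regenerate fresh guideposts for the table with `UPDATE STATISTICS media`.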

Regards,
Ankit Singhal

