Posted to user@phoenix.apache.org by Mike Prendergast <mi...@iotait.com> on 2017/07/22 00:00:19 UTC

hash cache errors

I am connecting to an EMR 5.6 cluster running Phoenix 4.9 using the Phoenix
JDBC thick client, and getting these errors consistently. Can somebody
point me in the right direction as to what the issue might be?

org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for joinId: i�K��� �. The cache might have expired and have been removed.
at org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:102)
at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getRegionScanner(NonAggregateRegionScannerFactory.java:148)
at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:72)
at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:221)
at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:266)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2633)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2837)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for joinId: i�K��� �. The cache might have expired and have been removed.
at org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:102)
at org.apache.phoenix.iterate.NonAggregateRegionScannerFactory.getRegionScanner(NonAggregateRegionScannerFactory.java:148)
at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:72)
at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:221)
at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:266)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2633)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2837)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:875)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:819)
at org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)


Michael Prendergast
*iota IT*
Vice President / Software Engineer
(cell)     703.594.1053
(office)  571.386.4682
(fax)     571.386.4681

This e-mail and any attachments to it are intended only for the identified
recipient(s). It may contain proprietary or otherwise legally protected
information of Iota IT, Inc. Any unauthorized distribution, use or
disclosure of this communication is strictly prohibited. If you have
received this communication in error, please notify the sender and delete
or otherwise destroy the e-mail and all attachments immediately.

Re: hash cache errors

Posted by Sergey Soldatov <se...@gmail.com>.
Well, PHOENIX-4010 should not happen often. If your tables have more
regions than the number of region servers, you can use HBase's per-table
load balancing. In that case every region server will hold some regions of
each table, so there is no chance a region is moved to a region server that
lacks the hash cache. To confirm the issue, check the HBase master log for
the time when the problem happened and see whether any region was moved.
Also check the query's execution time to rule out reason (1) from my earlier
reply (cache expiry).

Thanks,
Sergey
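[For reference, the per-table balancing Sergey describes is controlled by a setting on the HBase master. A sketch; the property name assumes HBase's default SimpleLoadBalancer and its exact behavior may vary by HBase version:]

```xml
<!-- hbase-site.xml on the HBase master (restart required).
     With the default SimpleLoadBalancer, balances each table's regions
     independently, so every region server holds some regions of each table. -->
<property>
  <name>hbase.master.loadbalance.bytable</name>
  <value>true</value>
</property>
```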


Re: hash cache errors

Posted by Mike Prendergast <mi...@iotait.com>.
I think https://issues.apache.org/jira/browse/PHOENIX-4010 may be the issue
for us. Is there a way I can confirm that is the case? As a workaround, can
I force a region server to update its join cache in some way?



Re: hash cache errors

Posted by Sergey Soldatov <se...@gmail.com>.
Hi Mike,

There are a couple of reasons why this may happen:
1. The server-side cache expired. Its time to live can be changed via
phoenix.coprocessor.maxServerCacheTimeToLiveMs.
2. The region has been moved to another region server where the join cache
is missing. See https://issues.apache.org/jira/browse/PHOENIX-4010

Thanks,
Sergey
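[For reference, the TTL property from reason (1) is a server-side setting in hbase-site.xml. A sketch; the value below is only an illustration, not a recommendation, and the 30000 ms default should be verified against your Phoenix version's tuning docs:]

```xml
<!-- hbase-site.xml on each region server (restart required).
     Raises the server-side hash join cache time-to-live from the
     default (30000 ms) to 5 minutes, as an example. -->
<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>300000</value>
</property>
```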
