Posted to user@phoenix.apache.org by Don Brinn <Do...@D2L.com> on 2015/11/05 21:08:48 UTC

Error "could not find hash cache for joinId" when doing Inner Join with any table or view with Multi_Tenant=true

Hi,

I am seeing the following error when doing an INNER JOIN between a view that has MULTI_TENANT=true and any other table or view:
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for joinId: Ys�0��%�. The cache might have expired and have been removed.
        at org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
        at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
        at java.lang.Thread.run(Thread.java:745)

        at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
        at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
        at sqlline.SqlLine.print(SqlLine.java:1653)
        at sqlline.Commands.execute(Commands.java:833)
        at sqlline.Commands.sql(Commands.java:732)
        at sqlline.SqlLine.dispatch(SqlLine.java:808)
        at sqlline.SqlLine.begin(SqlLine.java:681)
        at sqlline.SqlLine.start(SqlLine.java:398)
        at sqlline.SqlLine.main(SqlLine.java:292)

This is with Phoenix version 4.6.0 and HBase version 0.98.4.2.2.6.0-2800-hadoop2.

This seems very strongly related to the MULTI_TENANT=true property on a view or table.  I see the error whenever the view has MULTI_TENANT=true and I have a tenant-specific connection to Phoenix.  I do not see the problem if the MULTI_TENANT=true property is not set on the view or if I do not have a tenant-specific connection to Phoenix.
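
For reference, the tenant-specific connection is just a regular Phoenix JDBC connection with the TenantId property set. A minimal sketch is below; the ZooKeeper quorum, tenant id and class name are placeholders rather than our real values:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class TenantConnectionExample {
    public static void main(String[] args) throws Exception {
        // Setting the TenantId property makes this a tenant-specific connection;
        // queries on it are automatically scoped to that tenant's rows in
        // MULTI_TENANT tables and the views built on them.
        Properties props = new Properties();
        props.setProperty("TenantId", "tenant1");  // placeholder tenant id
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM INVENTORY LIMIT 1")) {
            while (rs.next()) {
                System.out.println(rs.getString("ID"));
            }
        }
    }
}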

Here is an example SQL statement that fails with this error when the view INVENTORY has the MULTI_TENANT=true property and I have a tenant-specific connection, but that succeeds in other cases. (The view PRODUCT_IDS is not multi-tenant.)
SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
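
For completeness, the schema involved looks roughly like the sketch below. The BASE.INVENTORY name and the TENANT_ID/QUANTITY columns are simplified illustrations rather than our exact DDL, but the multi-tenant shape is the same:

-- Base table declared multi-tenant; the leading primary-key column holds the tenant id.
CREATE TABLE BASE.INVENTORY (
    TENANT_ID  VARCHAR NOT NULL,
    ID         VARCHAR NOT NULL,
    QUANTITY   INTEGER
    CONSTRAINT PK PRIMARY KEY (TENANT_ID, ID)
) MULTI_TENANT=true;

-- Tenant view, created from a tenant-specific connection, so TENANT_ID is implicit.
CREATE VIEW INVENTORY AS SELECT * FROM BASE.INVENTORY;

-- PRODUCT_IDS is an ordinary (non-multi-tenant) view exposing a PRODUCT_ID column.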

Note:  “INNER JOIN” fails under these conditions, as does “LEFT OUTER JOIN”.  However, “RIGHT OUTER JOIN” and “FULL OUTER JOIN” do work.  Also, if I tell Phoenix to use a sort-merge join instead of a hash join for the INNER JOIN or LEFT OUTER JOIN then it works, e.g. SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID) succeeds.
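
(If it helps in diagnosing this, the chosen plan for each variant can be inspected with EXPLAIN, e.g. on the hinted query below; I would expect the working variant to show a sort-merge-join plan rather than the hash-join plan used by the failing variants.)
EXPLAIN SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID);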

This seems to be the same problem that was discussed previously in this mailing list:  https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3CCAOtKWX5XFbwkJF--0k-zj91TfdqWFq6RmuQW0R_LoJCNj1aMhg@mail.gmail.com%3E

Is this a known issue?  Is there a fix or work-around, or something I should be doing differently to avoid the problem?


Thanks,

Don Brinn



Re: Error "could not find hash cache for joinId" when doing Inner Join with any table or view with Multi_Tenant=true

Posted by Maryann Xue <ma...@gmail.com>.
Thanks a lot, Don! I have assigned the issue to myself and will look into
it shortly.


RE: Error "could not find hash cache for joinId" when doing Inner Join with any table or view with Multi_Tenant=true

Posted by Don Brinn <Do...@D2L.com>.
Hi Maryann,

I have created this defect report in JIRA: https://issues.apache.org/jira/browse/PHOENIX-2381

Thanks,

Don Brinn




Re: Error "could not find hash cache for joinId" when doing Inner Join with any table or view with Multi_Tenant=true

Posted by Maryann Xue <ma...@gmail.com>.
Hi Don,

Thank you very much for finding the issue. Would you mind filing a Phoenix JIRA?


Thanks,
Maryann
