Posted to dev@phoenix.apache.org by "sunnychen (JIRA)" <ji...@apache.org> on 2015/01/23 03:05:34 UTC

[jira] [Commented] (PHOENIX-1179) Support many-to-many joins

    [ https://issues.apache.org/jira/browse/PHOENIX-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288596#comment-14288596 ] 

sunnychen commented on PHOENIX-1179:
------------------------------------

Dear J,
I ran this SQL:
select BIG.id from MAX_CT_STANDARD_TEST_TABLE1 as BIG JOIN CT_4 AS SMALL ON BIG.ID=SMALL.ID;

As you can see above, the table MAX_CT_STANDARD_TEST_TABLE1 has 60 million rows (about 120 GB, with 20 fields), and the table CT_4 has 1 million rows, all of whose ids also appear in MAX_CT_STANDARD_TEST_TABLE1.
I have 2 region servers sharing the same configuration files. The heap size is set to 10 GB, with phoenix.query.maxGlobalMemoryPercentage at 40%.
When the query runs, it randomly kills one of my region server processes from time to time.
I am wondering whether Phoenix also cannot handle an LHS table that exceeds memory, because if the LHS table's size is reduced to 10 million rows, the correct results do come out after a long wait.
The problem is that I need to join the big table, or two big tables, together.
What should I do? Could you please give me some advice? Thank you for your help!
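For context, here is a back-of-envelope estimate of the server-side cache budget implied by that configuration. This is only a sketch with assumed numbers, not Phoenix internals; the real ceiling also involves settings such as phoenix.query.maxServerCacheBytes and per-request accounting.

```java
// Back-of-envelope sketch (not Phoenix internals): estimates the global
// memory budget a region server grants to server-side hash-join caches,
// given its heap size and phoenix.query.maxGlobalMemoryPercentage.
public class HashCacheBudget {
    public static long globalBudgetBytes(long heapBytes, int maxGlobalMemoryPercentage) {
        return heapBytes * maxGlobalMemoryPercentage / 100;
    }

    public static void main(String[] args) {
        long heap = 10L * 1024 * 1024 * 1024;      // 10 GB heap per region server
        long budget = globalBudgetBytes(heap, 40); // 40% as configured
        System.out.println(budget / (1024 * 1024)); // prints 4096 (MB)
    }
}
```

With roughly 4 GB available per region server, a hash cache built from a table much larger than that cannot fit, which is consistent with the failure below.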

--java.sql.SQLException: Encountered exception in hash plan [0] execution.
--      at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
--      at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:185)
--      at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:164)
--      at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
--      at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:164)
--      at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:153)
--      at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
--      at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
--      at org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
--      at org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
--      at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
--Caused by: java.sql.SQLException: java.util.concurrent.ExecutionException: java.lang.reflect.UndeclaredThrowableException
--      at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
--      at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
--      at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
--      at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
--      at java.util.concurrent.FutureTask.run(FutureTask.java:262)
--      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
--      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
--      at java.lang.Thread.run(Thread.java:745)
--Caused by: java.util.concurrent.ExecutionException: java.lang.reflect.UndeclaredThrowableException
--      at java.util.concurrent.FutureTask.report(FutureTask.java:122)
--      at java.util.concurrent.FutureTask.get(FutureTask.java:202)
--      at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
--      ... 7 more
--Caused by: java.lang.reflect.UndeclaredThrowableException
--      at com.sun.proxy.$Proxy10.addServerCache(Unknown Source)
--      at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
--      at org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
--      ... 4 more
--Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=2, exceptions:
--Wed Jan 21 15:58:28 CST 2015, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@4eb2d325, java.io.IOException: Call to nobida122/10.60.1.122:60020 failed on local exception: java.io.EOFException
--Wed Jan 21 15:58:28 CST 2015, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@4eb2d325, java.net.ConnectException: Connection refused
--
--      at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
--      at org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)

> Support many-to-many joins
> --------------------------
>
>                 Key: PHOENIX-1179
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1179
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: James Taylor
>            Assignee: Maryann Xue
>             Fix For: 4.3, 3.3
>
>         Attachments: 1179.patch
>
>
> Enhance our join capabilities to support many-to-many joins where the size of both sides of the join are too big to fit into memory (and thus cannot use our hash join mechanism). One technique would be to order both sides of the join by their join key and merge sort the results on the client.
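The client-side merge-sort technique described in the issue can be sketched as follows. This is a minimal illustration of the algorithm over plain in-memory lists of join keys, not Phoenix's actual implementation: both inputs are assumed already sorted ascending by the join key, so a single forward pass over each side suffices, and duplicate keys on both sides yield the full cross product that a many-to-many join requires.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of a client-side sort-merge equi-join.
public class SortMergeJoinSketch {
    // Joins two ascending-sorted lists of keys, emitting matching pairs.
    public static List<long[]> join(List<Long> lhs, List<Long> rhs) {
        List<long[]> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < lhs.size() && j < rhs.size()) {
            long a = lhs.get(i), b = rhs.get(j);
            if (a < b) {
                i++;           // advance the side with the smaller key
            } else if (a > b) {
                j++;
            } else {
                // Equal keys: emit the cross product of the duplicate
                // run on each side (many-to-many semantics).
                int jStart = j;
                while (i < lhs.size() && lhs.get(i) == a) {
                    for (j = jStart; j < rhs.size() && rhs.get(j) == a; j++) {
                        out.add(new long[]{lhs.get(i), rhs.get(j)});
                    }
                    i++;
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<long[]> rows = join(Arrays.asList(1L, 2L, 2L, 5L),
                                 Arrays.asList(2L, 2L, 3L, 5L));
        System.out.println(rows.size()); // prints 5: a 2x2 match on key 2, plus (5, 5)
    }
}
```

Because each side is consumed as a sorted stream, neither side ever has to fit in memory at once, which is what makes this approach viable where the hash-cache approach fails.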



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)