Posted to dev@phoenix.apache.org by "Gabriel Reid (JIRA)" <ji...@apache.org> on 2014/03/16 08:19:19 UTC

[jira] [Resolved] (PHOENIX-753) Join memory issue

     [ https://issues.apache.org/jira/browse/PHOENIX-753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gabriel Reid resolved PHOENIX-753.
----------------------------------

    Resolution: Fixed

Bulk resolve of closed issues imported from GitHub. This status was reached by first re-opening all closed imported issues and then resolving them in bulk.

> Join memory issue
> -----------------
>
>                 Key: PHOENIX-753
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-753
>             Project: Phoenix
>          Issue Type: Task
>    Affects Versions: 3.0-Release
>            Reporter: mujtaba
>            Assignee: Maryann Xue
>              Labels: bug
>
> A join over 1M/100K rows works fine the first time, but then I get the following exception. At this point only the 1M/10K join query works; even the 1M/50K join fails. If I wait a few minutes (a couple of GC cycles), I can again successfully run the join query over the 1M/100K tables.
> com.salesforce.phoenix.memory.InsufficientMemoryException: Requested memory of 22298853 bytes could not be allocated from remaining memory of 52261345 bytes from global pool of 73703424 bytes after waiting for 10000ms.
> 	at com.salesforce.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:91)
> 	at com.salesforce.phoenix.memory.GlobalMemoryManager.access$300(GlobalMemoryManager.java:42)
> 	at com.salesforce.phoenix.memory.GlobalMemoryManager$GlobalMemoryChunk.resize(GlobalMemoryManager.java:152)
> 	at com.salesforce.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:101)
> 	at com.salesforce.phoenix.join.HashCacheFactory$HashCacheImpl.<init>(HashCacheFactory.java:78)
> 	at com.salesforce.phoenix.join.HashCacheFactory.newCache(HashCacheFactory.java:71)
> 	at com.salesforce.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:95)
> 	at com.salesforce.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:55)
> 	at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java:5634)
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java:3924)
> 	at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:323)
> 	at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
> DDL: CREATE TABLE T (mypk CHAR(10) NOT NULL PRIMARY KEY,CF.column1 char(10),CF.column2 char(10),CF.column3 char(10));
> Query: select count(*) from table1M JOIN table100K on table100K.mypk = table1M.column1;
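The failure mode in the stack trace (allocateBytes waits up to 10000ms, then throws once the global pool can't satisfy the request, and recovers only after GC frees earlier cache chunks) can be illustrated with a minimal sketch. This is an assumed, simplified model for illustration only, not the actual Phoenix GlobalMemoryManager code; the class and method names here are hypothetical.

```java
// Minimal sketch (hypothetical names, not Phoenix's implementation) of a
// global memory pool that blocks up to a timeout before failing, mirroring
// the behavior seen in the stack trace: a hash-cache allocation waits for
// other consumers to release memory, then throws if it still doesn't fit.
public class GlobalPoolSketch {
    private final long maxBytes;   // total size of the global pool
    private long usedBytes = 0;    // bytes currently reserved

    public GlobalPoolSketch(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Try to reserve `bytes`, waiting up to `waitMs` for other queries to
    // release memory; throw if the request still cannot be satisfied.
    public synchronized void allocate(long bytes, long waitMs) {
        long deadline = System.currentTimeMillis() + waitMs;
        while (usedBytes + bytes > maxBytes) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                // Mirrors the InsufficientMemoryException message format above.
                throw new IllegalStateException(
                    "Requested memory of " + bytes
                    + " bytes could not be allocated from remaining memory of "
                    + (maxBytes - usedBytes) + " bytes from global pool of "
                    + maxBytes + " bytes after waiting for " + waitMs + "ms");
            }
            try {
                wait(remaining); // woken by free(); loop re-checks capacity
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while waiting", e);
            }
        }
        usedBytes += bytes;
    }

    // Release a reservation and wake any waiting allocators. If releases
    // only happen when stale caches are garbage-collected, allocations can
    // fail for minutes and then succeed again -- the symptom reported here.
    public synchronized void free(long bytes) {
        usedBytes -= bytes;
        notifyAll();
    }

    public synchronized long used() {
        return usedBytes;
    }
}
```

Under this model, the reported behavior follows directly: while stale server-side hash caches still hold their reservations, a second large join cannot fit in the pool and times out after 10 seconds; once GC (or expiry) triggers the release path, the same query fits again.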



--
This message was sent by Atlassian JIRA
(v6.2#6252)