Posted to user@phoenix.apache.org by "G.S.Vijay Raajaa" <gs...@gmail.com> on 2014/09/23 07:43:15 UTC

Getting InsufficientMemoryException

Hi,

    I am trying to do a join of three tables using the following query:

*select c.c_first_name, ca.ca_city, cd.cd_education_status from
CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
c.c_first_name;*

*The size of CUSTOMER_30000 is 4.1 GB with 30 million records.*

*I get the following error:*

./psql.py 10.10.5.55 test.sql
java.sql.SQLException: Encountered exception in hash plan [0] execution.
at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
at
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
at
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
at
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
at
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
at
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
at
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
at
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
at
org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
at
org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
Caused by: java.sql.SQLException: java.util.concurrent.ExecutionException:
java.lang.reflect.UndeclaredThrowableException
at
org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
at
org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException:
java.lang.reflect.UndeclaredThrowableException
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at
org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
... 8 more
Caused by: java.lang.reflect.UndeclaredThrowableException
at $Proxy10.addServerCache(Unknown Source)
at
org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
at
org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
... 5 more
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed
after attempts=14, exceptions:
Tue Sep 23 00:25:53 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:26:02 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:26:18 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:26:43 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:27:01 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:27:10 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:27:24 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:28:16 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:28:35 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:29:09 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:30:16 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:31:22 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:32:29 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.
Tue Sep 23 00:33:35 CDT 2014,
org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398, java.io.IOException:
java.io.IOException: org.apache.phoenix.memory.InsufficientMemoryException:
Requested memory of 446623727 bytes is larger than global pool of 319507660
bytes.

at
org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
at org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
... 8 more

Trials:

I tried increasing the Region Server heap space and modified
phoenix.query.maxGlobalMemoryPercentage as well.

I am still not able to increase the global memory pool.

Regards,
Vijay Raajaa
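
Each Phoenix region server sizes its global memory pool as roughly
heapSize * phoenix.query.maxGlobalMemoryPercentage / 100. The default
percentage is 15, so the "global pool of 319507660 bytes" in the error is
consistent with a region server heap of roughly 2 GB. A minimal
hbase-site.xml sketch for raising the pool on the region servers, with an
illustrative value rather than a recommendation:

<property>
      <name>phoenix.query.maxGlobalMemoryPercentage</name>
      <value>30</value>
</property>

Because this is a server-side setting, it has to go into hbase-site.xml on
every region server, and the region servers have to be restarted, before
the larger pool takes effect.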

Re: Getting InsufficientMemoryException

Posted by "G.S.Vijay Raajaa" <gs...@gmail.com>.
Hi Maryann,

                 After increasing the heap space on the region servers and
executing the same query again, I get a strange error:

./psql.py 10.10.5.55 test.sql
14/10/07 02:59:52 WARN execute.HashJoinPlan: Hash plan [0] execution seems
too slow. Earlier hash cache(s) might have expired on servers.
org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache
for joinId: {/�PY�L�. The cache might have expired and have been removed.
at
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:97)
at
org.apache.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:279)
at
org.apache.phoenix.iterate.MergeSortResultIterator.getIterators(MergeSortResultIterator.java:48)
at
org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:63)
at
org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:90)
at
org.apache.phoenix.iterate.GroupedAggregatingResultIterator.next(GroupedAggregatingResultIterator.java:68)
at
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:40)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:732)
at
org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:223)
at
org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
Caused by: java.util.concurrent.ExecutionException:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache
for joinId: {/�PY�L�. The cache might have expired and have been removed.
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
at java.util.concurrent.FutureTask.get(FutureTask.java:91)
at
org.apache.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:275)
... 9 more
Caused by: org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache
for joinId: {/�PY�L�. The cache might have expired and have been removed.
at
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:97)
at
org.apache.phoenix.iterate.TableResultIterator.<init>(TableResultIterator.java:57)
at
org.apache.phoenix.iterate.ParallelIterators$3.call(ParallelIterators.java:351)
at
org.apache.phoenix.iterate.ParallelIterators$3.call(ParallelIterators.java:346)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache
for joinId: {/�PY�L�. The cache might have expired and have been removed.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
at
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
at
org.apache.hadoop.hbase.client.ServerCallable.translateException(ServerCallable.java:256)
at
org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:166)
at
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:211)
at
org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:126)
at
org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:121)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:702)
at
org.apache.phoenix.iterate.TableResultIterator.<init>(TableResultIterator.java:54)
... 7 more
Caused by: org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache
for joinId: {/�PY�L�. The cache might have expired and have been removed.
at
org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:90)
at
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:121)
at
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:89)
at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2429)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
at
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1428)

at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1012)
at
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:87)
at $Proxy6.openScanner(Unknown Source)
at
org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:224)
at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:126)
at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:42)
at
org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:164)

Kindly direct me.

Regards,
Vijay Raajaa G S

On Sun, Oct 5, 2014 at 2:10 AM, Maryann Xue <ma...@gmail.com> wrote:

> Hi Ashish,
>
> The "phoenix.query.maxServerCacheBytes" is a client parameter while the
> other two are server parameters. But it looks like the configuration change
> did not take effect at your client side. Could you please make sure that
> this is the only configuration that goes to the CLASSPATH of your phoenix
> client execution environment?
>
> Another thing is the exception you got was a different problem from
> Vijay's. It happened in an even earlier stage. Could you please also share
> your query? We could probably re-write it so that it can better fit the
> hash-join scheme. (Since table stats are not used in joins yet, we
> currently have to do it manually.)
>
>
> Thanks,
> Maryann
>
> On Tue, Sep 30, 2014 at 1:22 PM, ashish tapdiya <as...@gmail.com>
> wrote:
>
>> Here it is,
>>
>> java.sql.SQLException: Encountered exception in hash plan [1] execution.
>>         at
>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>         at
>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>         at
>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>         at
>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>         at
>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>         at
>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:158)
>>         at Query.sel_Cust_Order_OrderLine_Tables(Query.java:135)
>>         at Query.main(Query.java:25)
>> Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException:
>> Size of hash cache (104857684 bytes) exceeds the maximum allowed size
>> (104857600 bytes)
>>         at
>> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:106)
>>         at
>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:77)
>>         at
>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>         at
>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>         at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:744)
>>
>> I am setting hbase heap to 4 GB and phoenix properties are set as below
>>
>> <property>
>>       <name>phoenix.query.maxServerCacheBytes</name>
>>       <value>2004857600</value>
>> </property>
>> <property>
>>       <name>phoenix.query.maxGlobalMemoryPercentage</name>
>>       <value>40</value>
>> </property>
>> <property>
>>       <name>phoenix.query.maxGlobalMemorySize</name>
>>       <value>1504857600</value>
>> </property>
>>
>> Thanks,
>> ~Ashish
>>
>> On Tue, Sep 30, 2014 at 12:13 PM, Maryann Xue <ma...@gmail.com>
>> wrote:
>>
>>> Hi Ashish,
>>>
>>> Could you please let us see your error message?
>>>
>>>
>>> Thanks,
>>> Maryann
>>>
>>> On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya <
>>> ashishtapdiya@gmail.com> wrote:
>>>
>>>> Hey Maryann,
>>>>
>>>> Thanks for your input. I tried both the properties but no luck.
>>>>
>>>> ~Ashish
>>>>
>>>> On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue <ma...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Ashish,
>>>>>
>>>>> The global cache size is set to either "
>>>>> *phoenix.query.maxGlobalMemorySize*" or "phoenix.query.maxGlobalMemoryPercentage
>>>>> * heapSize" (Sorry about the mistake I made earlier). The "
>>>>> phoenix.query.maxServerCacheBytes" is a client parameter and is most
>>>>> likely NOT the thing you should worry about. So you can try adjusting "
>>>>> phoenix.query.maxGlobalMemoryPercentage" and the heap size in region
>>>>> server configurations and see how it works.
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>> On Fri, Sep 26, 2014 at 10:48 PM, ashish tapdiya <
>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>
>>>>>> I have tried that as well...but "phoenix.query.maxServerCacheBytes"
>>>>>> remains the default value of 100 MB. I get to see it when join fails.
>>>>>>
>>>>>> Thanks,
>>>>>> ~Ashish
>>>>>>
>>>>>> On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue <ma...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Ashish,
>>>>>>>
>>>>>>> The global cache size is set to either "phoenix.query.maxServerCacheBytes"
>>>>>>> or "phoenix.query.maxGlobalMemoryPercentage * heapSize", whichever
>>>>>>> is *smaller*. You can try setting "phoenix.query.
>>>>>>> maxGlobalMemoryPercentage" instead, which is recommended, and see
>>>>>>> how it goes.
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>> On Fri, Sep 26, 2014 at 5:37 PM, ashish tapdiya <
>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Maryann,
>>>>>>>>
>>>>>>>> I am having the same issue where star join is failing with MaxServerCacheSizeExceededException.
>>>>>>>> I set phoenix.query.maxServerCacheBytes to 1 GB both in client and
>>>>>>>> server hbase-site.xml's. However, it does not take effect.
>>>>>>>>
>>>>>>>> Phoenix 3.1
>>>>>>>> HBase .94
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> ~Ashish
>>>>>>>>
>>>>>>>> On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue <maryann.xue@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Yes, you should make your modification on each region server,
>>>>>>>>> since this is a server-side configuration.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <
>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Xue,
>>>>>>>>>>
>>>>>>>>>>           Thanks for replying. I did modify the hbase-site.xml by
>>>>>>>>>> increasing the default value of phoenix.query.maxGlobalMemoryPercentage.
>>>>>>>>>> I also increased the Region server heap space. The
>>>>>>>>>> change didn't get reflected and I still get the error with an indication
>>>>>>>>>> that "global pool of 319507660 bytes" is present. Should I
>>>>>>>>>> modify the hbase-site.xml in every region server or just the file present
>>>>>>>>>> in the class path of Phoenix client?
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Vijay Raajaa G S
>>>>>>>>>>
>>>>>>>>>> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <
>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>
>>>>>>>>>>> I think here the query plan is scanning table *CUSTOMER_30000 *while
>>>>>>>>>>> joining the other two tables at the same time, which means the region
>>>>>>>>>>> server memory for Phoenix should be large enough to hold 2 tables together
>>>>>>>>>>> and you also need to expect some memory expansion for java objects.
>>>>>>>>>>>
>>>>>>>>>>> Do you mean that after you had modified the parameters you
>>>>>>>>>>> mentioned, you were still getting the same error message with exactly the
>>>>>>>>>>> same numbers as "global pool of 319507660 bytes"? Did you make
>>>>>>>>>>> sure that the parameters actually took effect after modification?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Maryann
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>>>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>>     I am trying to do a join of three tables using the following
>>>>>>>>>>>> query:
>>>>>>>>>>>>
>>>>>>>>>>>> *select c.c_first_name, ca.ca_city, cd.cd_education_status from
>>>>>>>>>>>> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
>>>>>>>>>>>> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
>>>>>>>>>>>> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
>>>>>>>>>>>> c.c_first_name;*
>>>>>>>>>>>>
>>>>>>>>>>>> *The size of CUSTOMER_30000 is 4.1 GB with 30 million records.*
>>>>>>>>>>>>
>>>>>>>>>>>> *I get the following error:*
>>>>>>>>>>>>
>>>>>>>>>>>> ./psql.py 10.10.5.55 test.sql
>>>>>>>>>>>> java.sql.SQLException: Encountered exception in hash plan [0]
>>>>>>>>>>>> execution.
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>>>>>>>>>>> Caused by: java.sql.SQLException:
>>>>>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>>>>>>> at
>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>>>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>>>>>>>>> at
>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>>>>>>> at
>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>>>>>>> at java.lang.Thread.run(Thread.java:662)
>>>>>>>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>> at
>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>>>>>>>>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>> at $Proxy10.addServerCache(Unknown Source)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>>>>>>>>>>> ... 5 more
>>>>>>>>>>>> Caused by:
>>>>>>>>>>>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
>>>>>>>>>>>> attempts=14, exceptions:
>>>>>>>>>>>> Tue Sep 23 00:25:53 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:26:02 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:26:18 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:26:43 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:27:01 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:27:10 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:27:24 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:28:16 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:28:35 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:29:09 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:30:16 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:31:22 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:32:29 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:33:35 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>>
>>>>>>>>>>>> Trials:
>>>>>>>>>>>>
>>>>>>>>>>>> I tried increasing the Region Server heap space and
>>>>>>>>>>>> modified phoenix.query.maxGlobalMemoryPercentage as well.
>>>>>>>>>>>>
>>>>>>>>>>>> I am still not able to increase the global memory pool.
>>>>>>>>>>>>
>>>>>>>>>>>> Regards,
>>>>>>>>>>>> Vijay Raajaa
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Maryann
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Thanks,
>>>>>>>>> Maryann
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Maryann
>>>
>>
>>
>
>
> --
> Thanks,
> Maryann
>
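
The "Could not find hash cache for joinId" failure above happens when the
hash cache built from the smaller join tables expires before the scan over
the large table has finished probing it; the "Hash plan [0] execution seems
too slow" warning is the client-side symptom of the same thing. As Vijay
reports later in the thread, raising the server cache time-to-live resolved
it. A sketch of the region server hbase-site.xml change, assuming the
default of 30,000 ms quoted below and using an illustrative ten-minute
value:

<property>
      <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
      <value>600000</value>
</property>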

Re: Getting InsufficientMemoryException

Posted by Maryann Xue <ma...@gmail.com>.
You are welcome, Vijay!


On Thu, Oct 9, 2014 at 12:48 AM, G.S.Vijay Raajaa <gs...@gmail.com>
wrote:

> Modifying the phoenix.coprocessor.maxServerCacheTimeToLiveMs parameter, which
> defaults to *30,000*, solved the problem.
>
> *Thanks !!*
>
> On Wed, Oct 8, 2014 at 10:25 AM, G.S.Vijay Raajaa <gsvijayraajaa@gmail.com
> > wrote:
>
>> Hi Maryann,
>>
>>                  It's the same query:
>>
>> *select c.c_first_name, ca.ca_city, cd.cd_education_status from
>> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
>> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
>> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
>> c.c_first_name;*
>>
>> *The size of CUSTOMER_30000 is 4.1 GB with 30 million records. The
>> CUSTOMER_DEMOGRAPHICS contains 2M records and CUSTOMER_ADDRESS contains
>> 50000 records.*
>>
>> *Regards,*
>> *Vijay Raajaa G S*
>>
>> On Tue, Oct 7, 2014 at 9:39 PM, Maryann Xue <ma...@gmail.com>
>> wrote:
>>
>>> Hi Ashish,
>>>
>>> The warning you got was exactly showing the reason why you finally got
>>> that error: one of the join table queries had taken too long so that the
>>> cache for other join tables expired and got invalidated. Again, could you
>>> please share your query and the size of the tables used in your query?
>>> Instead of changing the parameters to get around the problems, it might be
>>> much more efficient just to adjust the query itself. And if doable, most
>>> likely the query is gonna run faster as well.
>>>
>>> Besides, you might find this document helpful :
>>> http://phoenix.apache.org/joins.html
>>>
>>>
>>> Thanks,
>>> Maryann
>>>
>>>
>>> On Tue, Oct 7, 2014 at 11:01 AM, ashish tapdiya <ashishtapdiya@gmail.com
>>> > wrote:
>>>
>>>> Maryann,
>>>>
>>>> hbase-site.xml was not on CLASSPATH and that was the issue. Thanks for
>>>> the help. I appreciate it.
>>>>
>>>> ~Ashish
>>>>
>>>>
>>>>
>>>> On Sat, Oct 4, 2014 at 3:40 PM, Maryann Xue <ma...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Ashish,
>>>>>
>>>>> The "phoenix.query.maxServerCacheBytes" is a client parameter while
>>>>> the other two are server parameters. But it looks like the configuration
>>>>> change did not take effect at your client side. Could you please make sure
>>>>> that this is the only configuration that goes to the CLASSPATH of your
>>>>> phoenix client execution environment?
>>>>>
>>>>> Another thing is the exception you got was a different problem from
>>>>> Vijay's. It happened in an even earlier stage. Could you please also share
>>>>> your query? We could probably re-write it so that it can better fit the
>>>>> hash-join scheme. (Since table stats are not used in joins yet, we
>>>>> currently have to do it manually.)
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>> On Tue, Sep 30, 2014 at 1:22 PM, ashish tapdiya <
>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>
>>>>>> Here it is,
>>>>>>
>>>>>> java.sql.SQLException: Encountered exception in hash plan [1]
>>>>>> execution.
>>>>>>         at
>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>         at
>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>         at
>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>         at
>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>         at
>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>         at
>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:158)
>>>>>>         at Query.sel_Cust_Order_OrderLine_Tables(Query.java:135)
>>>>>>         at Query.main(Query.java:25)
>>>>>> Caused by:
>>>>>> org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size of hash
>>>>>> cache (104857684 bytes) exceeds the maximum allowed size (104857600 bytes)
>>>>>>         at
>>>>>> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:106)
>>>>>>         at
>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:77)
>>>>>>         at
>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>         at
>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>>         at java.lang.Thread.run(Thread.java:744)
>>>>>>
>>>>>> I am setting hbase heap to 4 GB and phoenix properties are set as
>>>>>> below
>>>>>>
>>>>>> <property>
>>>>>>       <name>phoenix.query.maxServerCacheBytes</name>
>>>>>>       <value>2004857600</value>
>>>>>> </property>
>>>>>> <property>
>>>>>>       <name>phoenix.query.maxGlobalMemoryPercentage</name>
>>>>>>       <value>40</value>
>>>>>> </property>
>>>>>> <property>
>>>>>>       <name>phoenix.query.maxGlobalMemorySize</name>
>>>>>>       <value>1504857600</value>
>>>>>> </property>
>>>>>>
>>>>>> Thanks,
>>>>>> ~Ashish
>>>>>>
>>>>>> On Tue, Sep 30, 2014 at 12:13 PM, Maryann Xue <ma...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Ashish,
>>>>>>>
>>>>>>> Could you please let us see your error message?
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>> On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya <
>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hey Maryann,
>>>>>>>>
>>>>>>>> Thanks for your input. I tried both the properties but no luck.
>>>>>>>>
>>>>>>>> ~Ashish
>>>>>>>>
>>>>>>>> On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue <maryann.xue@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Hi Ashish,
>>>>>>>>>
>>>>>>>>> The global cache size is set to either "
>>>>>>>>> *phoenix.query.maxGlobalMemorySize*" or "phoenix.query.maxGlobalMemoryPercentage
>>>>>>>>> * heapSize" (Sorry about the mistake I made earlier). The "
>>>>>>>>> phoenix.query.maxServerCacheBytes" is a client parameter and is
>>>>>>>>> most likely NOT the thing you should worry about. So you can try adjusting "
>>>>>>>>> phoenix.query.maxGlobalMemoryPercentage" and the heap size in
>>>>>>>>> region server configurations and see how it works.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Maryann
>>>>>>>>>
>>>>>>>>> On Fri, Sep 26, 2014 at 10:48 PM, ashish tapdiya <
>>>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> I have tried that as well...but "phoenix.query.maxServerCacheBytes"
>>>>>>>>>> remains the default value of 100 MB. I get to see it when join fails.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> ~Ashish
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue <
>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Ashish,
>>>>>>>>>>>
>>>>>>>>>>> The global cache size is set to either "phoenix.query.maxServerCacheBytes"
>>>>>>>>>>> or "phoenix.query.maxGlobalMemoryPercentage * heapSize",
>>>>>>>>>>> whichever is *smaller*. You can try setting "phoenix.query.
>>>>>>>>>>> maxGlobalMemoryPercentage" instead, which is recommended, and
>>>>>>>>>>> see how it goes.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Maryann
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 26, 2014 at 5:37 PM, ashish tapdiya <
>>>>>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Maryann,
>>>>>>>>>>>>
>>>>>>>>>>>> I am having the same issue where star join is failing with MaxServerCacheSizeExceededException.
>>>>>>>>>>>> I set phoenix.query.maxServerCacheBytes to 1 GB both in client
>>>>>>>>>>>> and server hbase-site.xml's. However, it does not take effect.
>>>>>>>>>>>>
>>>>>>>>>>>> Phoenix 3.1
>>>>>>>>>>>> HBase .94
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> ~Ashish
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue <
>>>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Yes, you should make your modification on each region server,
>>>>>>>>>>>>> since this is a server-side configuration.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <
>>>>>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Xue,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>           Thanks for replying. I did modify the
>>>>>>>>>>>>>> hbase-site.xml by increasing the default value of
>>>>>>>>>>>>>> phoenix.query.maxGlobalMemoryPercentage. I also increased the
>>>>>>>>>>>>>> Region server heap space. The change didn't get reflected and I
>>>>>>>>>>>>>> still get the error with an indication that "global pool of
>>>>>>>>>>>>>> 319507660 bytes" is present. Should I modify the hbase-site.xml in every
>>>>>>>>>>>>>> region server or just the file present in the class path of Phoenix client?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>> Vijay Raajaa G S
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <
>>>>>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I think here the query plan is scanning table *CUSTOMER_30000
>>>>>>>>>>>>>>> *while joining the other two tables at the same time, which
>>>>>>>>>>>>>>> means the region server memory for Phoenix should be large enough to hold 2
>>>>>>>>>>>>>>> tables together and you also need to expect some memory expansion for java
>>>>>>>>>>>>>>> objects.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Do you mean that after you had modified the parameters you
>>>>>>>>>>>>>>> mentioned, you were still getting the same error message with exactly the
>>>>>>>>>>>>>>> same numbers as "global pool of 319507660 bytes"? Did you
>>>>>>>>>>>>>>> make sure that the parameters actually took effect after modification?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Maryann
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>>>>>>>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>     I am trying to do a join of three tables using the
>>>>>>>>>>>>>>>> following query:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> *select c.c_first_name, ca.ca_city, cd.cd_education_status
>>>>>>>>>>>>>>>> from CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on
>>>>>>>>>>>>>>>> c.c_current_cdemo_sk = cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on
>>>>>>>>>>>>>>>> c.c_current_addr_sk = ca.ca_address_sk group by ca.ca_city,
>>>>>>>>>>>>>>>> cd.cd_education_status, c.c_first_name;*
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> *The size of CUSTOMER_30000 is 4.1 GB with 30 million
>>>>>>>>>>>>>>>> records.*
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> *I get the following error:*
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ./psql.py 10.10.5.55 test.sql
>>>>>>>>>>>>>>>> java.sql.SQLException: Encountered exception in hash plan
>>>>>>>>>>>>>>>> [0] execution.
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>>>>>>>>>>>>>>> Caused by: java.sql.SQLException:
>>>>>>>>>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>>>>>>>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>>>>>>>>>>> at java.lang.Thread.run(Thread.java:662)
>>>>>>>>>>>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>>>>>>>>>>>>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>>>>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>>>>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>>>>>> at $Proxy10.addServerCache(Unknown Source)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>>>>>>>>>>>>>>> ... 5 more
>>>>>>>>>>>>>>>> Caused by:
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
>>>>>>>>>>>>>>>> attempts=14, exceptions:
>>>>>>>>>>>>>>>> Tue Sep 23 00:25:53 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:26:02 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:26:18 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:26:43 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:27:01 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:27:10 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:27:24 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:28:16 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:28:35 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:29:09 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:30:16 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:31:22 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:32:29 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>> Tue Sep 23 00:33:35 CDT 2014,
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>>>>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Trials:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I tried increasing the Region Server heap space and
>>>>>>>>>>>>>>>> modified phoenix.query.maxGlobalMemoryPercentage as well.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I am still not able to increase the global memory pool.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>> Vijay Raajaa
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Maryann
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Maryann
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Maryann
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Thanks,
>>>>>>>>> Maryann
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Maryann
>>>
>>
>>
>


-- 
Thanks,
Maryann
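
Where rewriting the query is an option, the joins page linked above
describes a NO_STAR_JOIN hint for queries whose right-hand tables cannot
all be broadcast to the region servers at once; it tells the optimizer to
process the joins as a chain instead of a star join. A sketch against the
query from this thread, assuming the hint is supported by the Phoenix
version in use (EXPLAIN will confirm the plan change):

select /*+ NO_STAR_JOIN */ c.c_first_name, ca.ca_city, cd.cd_education_status
from CUSTOMER_30000 c
join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk = cd.cd_demo_sk
join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk = ca.ca_address_sk
group by ca.ca_city, cd.cd_education_status, c.c_first_name;

The chained plan trades one broadcast of everything for an extra join
stage, so it can be slower when all the hash tables would have fit in
memory comfortably.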

Re: Getting InsufficientMemoryException

Posted by "G.S.Vijay Raajaa" <gs...@gmail.com>.
Modifying the phoenix.coprocessor.maxServerCacheTimeToLiveMs parameter, which
defaults to *30,000*, solved the problem.

*Thanks !!*

On Wed, Oct 8, 2014 at 10:25 AM, G.S.Vijay Raajaa <gs...@gmail.com>
wrote:

> Hi Maryann,
>
>                  It's the same query:
>
> *select c.c_first_name, ca.ca_city, cd.cd_education_status from
> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
> c.c_first_name;*
>
> *The size of CUSTOMER_30000 is 4.1 GB with 30 million records. The
> CUSTOMER_DEMOGRAPHICS contains 2M records and CUSTOMER_ADDRESS contains
> 50000 records.*
>
> *Regards,*
> *Vijay Raajaa G S*
>
> On Tue, Oct 7, 2014 at 9:39 PM, Maryann Xue <ma...@gmail.com> wrote:
>
>> Hi Ashish,
>>
>> The warning you got was exactly showing the reason why you finally got
>> that error: one of the join table queries had taken too long so that the
>> cache for other join tables expired and got invalidated. Again, could you
>> please share your query and the size of the tables used in your query?
>> Instead of changing the parameters to get around the problems, it might be
>> much more efficient just to adjust the query itself. And if doable, most
>> likely the query is gonna run faster as well.
>>
>> Besides, you might find this document helpful :
>> http://phoenix.apache.org/joins.html
>>
>>
>> Thanks,
>> Maryann
>>
>>
>> On Tue, Oct 7, 2014 at 11:01 AM, ashish tapdiya <as...@gmail.com>
>> wrote:
>>
>>> Maryann,
>>>
>>> hbase-site.xml was not on CLASSPATH and that was the issue. Thanks for
>>> the help. I appreciate it.
>>>
>>> ~Ashish
>>>
>>>
>>>
>>> On Sat, Oct 4, 2014 at 3:40 PM, Maryann Xue <ma...@gmail.com>
>>> wrote:
>>>
>>>> Hi Ashish,
>>>>
>>>> The "phoenix.query.maxServerCacheBytes" is a client parameter while the
>>>> other two are server parameters. But it looks like the configuration change
>>>> did not take effect at your client side. Could you please make sure that
>>>> this is the only configuration that goes to the CLASSPATH of your phoenix
>>>> client execution environment?
>>>>
>>>> Another thing is the exception you got was a different problem from
>>>> Vijay's. It happened in an even earlier stage. Could you please also share
>>>> your query? We could probably re-write it so that it can better fit the
>>>> hash-join scheme. (Since table stats are not used in joins yet, we
>>>> currently have to do it manually.)
>>>>
>>>>
>>>> Thanks,
>>>> Maryann
>>>>
>>>> On Tue, Sep 30, 2014 at 1:22 PM, ashish tapdiya <
>>>> ashishtapdiya@gmail.com> wrote:
>>>>
>>>>> Here it is,
>>>>>
>>>>> java.sql.SQLException: Encountered exception in hash plan [1]
>>>>> execution.
>>>>>         at
>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>         at
>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>         at
>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>         at
>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>         at
>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>         at
>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:158)
>>>>>         at Query.sel_Cust_Order_OrderLine_Tables(Query.java:135)
>>>>>         at Query.main(Query.java:25)
>>>>> Caused by:
>>>>> org.apache.phoenix.join.MaxServerCacheSizeExceededException: Size of hash
>>>>> cache (104857684 bytes) exceeds the maximum allowed size (104857600 bytes)
>>>>>         at
>>>>> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:106)
>>>>>         at
>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:77)
>>>>>         at
>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>         at
>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>         at java.lang.Thread.run(Thread.java:744)
>>>>>
>>>>> I am setting hbase heap to 4 GB and phoenix properties are set as below
>>>>>
>>>>> <property>
>>>>>       <name>phoenix.query.maxServerCacheBytes</name>
>>>>>       <value>2004857600</value>
>>>>> </property>
>>>>> <property>
>>>>>       <name>phoenix.query.maxGlobalMemoryPercentage</name>
>>>>>       <value>40</value>
>>>>> </property>
>>>>> <property>
>>>>>       <name>phoenix.query.maxGlobalMemorySize</name>
>>>>>       <value>1504857600</value>
>>>>> </property>
>>>>>
>>>>> Thanks,
>>>>> ~Ashish
>>>>>
>>>>> On Tue, Sep 30, 2014 at 12:13 PM, Maryann Xue <ma...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Ashish,
>>>>>>
>>>>>> Could you please let us see your error message?
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Maryann
>>>>>>
>>>>>> On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya <
>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>
>>>>>>> Hey Maryann,
>>>>>>>
>>>>>>> Thanks for your input. I tried both the properties but no luck.
>>>>>>>
>>>>>>> ~Ashish
>>>>>>>
>>>>>>> On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue <ma...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Ashish,
>>>>>>>>
>>>>>>>> The global cache size is set to either "
>>>>>>>> *phoenix.query.maxGlobalMemorySize*" or "phoenix.query.maxGlobalMemoryPercentage
>>>>>>>> * heapSize" (Sorry about the mistake I made earlier). The "
>>>>>>>> phoenix.query.maxServerCacheBytes" is a client parameter and is
>>>>>>>> most likely NOT the thing you should worry about. So you can try adjusting "
>>>>>>>> phoenix.query.maxGlobalMemoryPercentage" and the heap size in
>>>>>>>> region server configurations and see how it works.
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Maryann
>>>>>>>>
>>>>>>>> On Fri, Sep 26, 2014 at 10:48 PM, ashish tapdiya <
>>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> I have tried that as well...but "phoenix.query.maxServerCacheBytes"
>>>>>>>>> remains the default value of 100 MB. I get to see it when join fails.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> ~Ashish
>>>>>>>>>
>>>>>>>>> On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue <
>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Ashish,
>>>>>>>>>>
>>>>>>>>>> The global cache size is set to either "phoenix.query.maxServerCacheBytes"
>>>>>>>>>> or "phoenix.query.maxGlobalMemoryPercentage * heapSize",
>>>>>>>>>> whichever is *smaller*. You can try setting "phoenix.query.
>>>>>>>>>> maxGlobalMemoryPercentage" instead, which is recommended, and
>>>>>>>>>> see how it goes.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Maryann
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 26, 2014 at 5:37 PM, ashish tapdiya <
>>>>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Maryann,
>>>>>>>>>>>
>>>>>>>>>>> I am having the same issue where star join is failing with MaxServerCacheSizeExceededException.
>>>>>>>>>>> I set phoenix.query.maxServerCacheBytes to 1 GB both in client
>>>>>>>>>>> and server hbase-site.xml's. However, it does not take effect.
>>>>>>>>>>>
>>>>>>>>>>> Phoenix 3.1
>>>>>>>>>>> HBase .94
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> ~Ashish
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue <
>>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Yes, you should make your modification on each region server,
>>>>>>>>>>>> since this is a server-side configuration.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <
>>>>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Xue,
>>>>>>>>>>>>>
>>>>>>>>>>>>>           Thanks for replying. I did modify the hbase-site.xml
>>>>>>>>>>>>> by increasing the default value of phoenix.query.maxGlobalMemoryPercentage
>>>>>>>>>>>>> . Also increased the Region server heap space memory . The
>>>>>>>>>>>>> change didn't get reflected and I still get the error with an indication
>>>>>>>>>>>>> that "global pool of 319507660 bytes" is present. Should I
>>>>>>>>>>>>> modify the hbase-site.xml in every region server or just the file present
>>>>>>>>>>>>> in the class path of Phoenix client?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>> Vijay Raajaa G S
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <
>>>>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I think here the query plan is scanning table *CUSTOMER_30000
>>>>>>>>>>>>>> *while joining the other two tables at the same time, which
>>>>>>>>>>>>>> means the region server memory for Phoenix should be large enough to hold 2
>>>>>>>>>>>>>> tables together and you also need to expect some memory expansion for java
>>>>>>>>>>>>>> objects.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Do you mean that after you had modified the parameters you
>>>>>>>>>>>>>> mentioned, you were still getting the same error message with exactly the
>>>>>>>>>>>>>> same numbers as "global pool of 319507660 bytes"? Did you
>>>>>>>>>>>>>> make sure that the parameters actually took effect after modification?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Maryann
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>>>>>>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>     I am trying to do a join of three tables usng the
>>>>>>>>>>>>>>> following query:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> *select c.c_first_name, ca.ca_city, cd.cd_education_status
>>>>>>>>>>>>>>> from CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on
>>>>>>>>>>>>>>> c.c_current_cdemo_sk = cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on
>>>>>>>>>>>>>>> c.c_current_addr_sk = ca.ca_address_sk group by ca.ca_city,
>>>>>>>>>>>>>>> cd.cd_education_status, c.c_first_name;*
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> *The size of CUSTOMER_30000 is 4.1 GB with 30million
>>>>>>>>>>>>>>> records.*
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> *I get the following error:*
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ./psql.py 10.10.5.55 test.sql
>>>>>>>>>>>>>>> java.sql.SQLException: Encountered exception in hash plan
>>>>>>>>>>>>>>> [0] execution.
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>>>>>>>>>>>>>> Caused by: java.sql.SQLException:
>>>>>>>>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>>>>>>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>>>>>>>>>> at java.lang.Thread.run(Thread.java:662)
>>>>>>>>>>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>>>>>>>>>>>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>>>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>>>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>>>>> at $Proxy10.addServerCache(Unknown Source)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>>>>>>>>>>>>>> ... 5 more
>>>>>>>>>>>>>>> Caused by:
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
>>>>>>>>>>>>>>> attempts=14, exceptions:
>>>>>>>>>>>>>>> Tue Sep 23 00:25:53 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:26:02 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:26:18 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:26:43 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:27:01 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:27:10 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:27:24 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:28:16 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:28:35 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:29:09 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:30:16 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:31:22 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:32:29 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>> Tue Sep 23 00:33:35 CDT 2014,
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>>>>>>>>>>>>>> at
>>>>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>>>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Trials:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I tried to increase the Region Server Heap space ,
>>>>>>>>>>>>>>> modified phoenix.query.maxGlobalMemoryPercentage as well.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I am not able to increase the global memory .
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>> Vijay Raajaa
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Maryann
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Maryann
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Thanks,
>>>>>>>>>> Maryann
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Thanks,
>>>>>>>> Maryann
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Thanks,
>>>>>> Maryann
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Thanks,
>>>> Maryann
>>>>
>>>
>>>
>>
>>
>> --
>> Thanks,
>> Maryann
>>
>
>

Re: Getting InsufficientMemoryException

Posted by "G.S.Vijay Raajaa" <gs...@gmail.com>.
Hi Maryann,

                 It's the same query:

*select c.c_first_name, ca.ca_city, cd.cd_education_status from
CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
c.c_first_name;*

*The size of CUSTOMER_30000 is 4.1 GB with 30 million records.
CUSTOMER_DEMOGRAPHICS contains 2M records and CUSTOMER_ADDRESS contains
50,000 records.*
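
Would a rewrite along the lines you suggested help? I have not verified
this against our Phoenix version, and it assumes derived tables are
supported in the FROM clause, but nesting the first join in a subquery
should mean the region servers hold only one hash cache at a time instead
of two:

select x.c_first_name, ca.ca_city, x.cd_education_status from
  (select c.c_first_name, c.c_current_addr_sk, cd.cd_education_status
   from CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd
   on c.c_current_cdemo_sk = cd.cd_demo_sk) x
join CUSTOMER_ADDRESS_1 ca on x.c_current_addr_sk = ca.ca_address_sk
group by ca.ca_city, x.cd_education_status, x.c_first_name;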

*Regards,*
*Vijay Raajaa G S*

On Tue, Oct 7, 2014 at 9:39 PM, Maryann Xue <ma...@gmail.com> wrote:

> Hi Ashish,
>
> The warning you got shows exactly why you finally got that error: one of
> the join-table queries took so long that the caches for the other join
> tables expired and were invalidated. Again, could you please share your
> query and the sizes of the tables it uses? Instead of changing the
> parameters to get around the problem, it might be much more efficient
> just to adjust the query itself. And if doable, most likely the query
> will run faster as well.
>
> Besides, you might find this document helpful:
> http://phoenix.apache.org/joins.html
>
>
> Thanks,
> Maryann

Re: Getting InsufficientMemoryException

Posted by Maryann Xue <ma...@gmail.com>.
Hi Ashish,

The warning you got shows exactly why you finally got that error: one of
the join-table queries took so long that the caches for the other join
tables expired and were invalidated. Again, could you please share your
query and the sizes of the tables it uses? Instead of changing the
parameters to get around the problem, it might be much more efficient just
to adjust the query itself. And if doable, most likely the query will run
faster as well.
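
For reference, the lifetime of those server-side join caches is controlled
by "phoenix.coprocessor.maxServerCacheTimeToLiveMs" (default 30000 ms). If
the slow join-table query cannot be avoided, a stopgap is to raise that
value in the region servers' hbase-site.xml, roughly like the sketch below
(120000 is only an example value, not a recommendation):

<property>
      <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
      <value>120000</value>
</property>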

Besides, you might find this document helpful:
http://phoenix.apache.org/joins.html


Thanks,
Maryann


On Tue, Oct 7, 2014 at 11:01 AM, ashish tapdiya <as...@gmail.com>
wrote:

> Maryann,
>
> hbase-site.xml was not on the CLASSPATH and that was the issue. Thanks
> for the help. I appreciate it.
>
> ~Ashish


-- 
Thanks,
Maryann

Re: Getting InsufficientMemoryException

Posted by ashish tapdiya <as...@gmail.com>.
Maryann,

hbase-site.xml was not on the CLASSPATH and that was the issue. Thanks for
the help. I appreciate it.
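
In case anyone else runs into this: the client reads these properties from
whichever hbase-site.xml is on its classpath, so the HBase conf directory
has to be included when launching the client, roughly like the sketch
below (the conf path and client jar name are just examples from my setup):

java -cp /etc/hbase/conf:phoenix-3.1.0-client.jar:. Query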

~Ashish



On Sat, Oct 4, 2014 at 3:40 PM, Maryann Xue <ma...@gmail.com> wrote:

> Hi Ashish,
>
> The "phoenix.query.maxServerCacheBytes" is a client parameter while the
> other two are server parameters. But it looks like the configuration change
> did not take effect at your client side. Could you please make sure that
> this is the only configuration that goes to the CLASSPATH of your phoenix
> client execution environment?
>
> Another thing is the exception you got was a different problem from
> Vijay's. It happened in an even earlier stage. Could you please also share
> your query? We could probably re-write it so that it can better fit the
> hash-join scheme. (Since table stats are not used in joins yet, we
> currently have to do it manually.)
>
>
> Thanks,
> Maryann
>
> On Tue, Sep 30, 2014 at 1:22 PM, ashish tapdiya <as...@gmail.com>
> wrote:
>
>> Here it is,
>>
>> java.sql.SQLException: Encountered exception in hash plan [1] execution.
>>         at
>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>         at
>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>         at
>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>         at
>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>         at
>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>         at
>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:158)
>>         at Query.sel_Cust_Order_OrderLine_Tables(Query.java:135)
>>         at Query.main(Query.java:25)
>> Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException:
>> Size of hash cache (104857684 bytes) exceeds the maximum allowed size
>> (104857600 bytes)
>>         at
>> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:106)
>>         at
>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:77)
>>         at
>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>         at
>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>         at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:744)
>>
>> I am setting hbase heap to 4 GB and phoenix properties are set as below
>>
>> <property>
>>       <name>phoenix.query.maxServerCacheBytes</name>
>>       <value>2004857600</value>
>> </property>
>> <property>
>>       <name>phoenix.query.maxGlobalMemoryPercentage</name>
>>       <value>40</value>
>> </property>
>> <property>
>>       <name>phoenix.query.maxGlobalMemorySize</name>
>>       <value>1504857600</value>
>> </property>
>>
>> Thanks,
>> ~Ashish
>>
>> On Tue, Sep 30, 2014 at 12:13 PM, Maryann Xue <ma...@gmail.com>
>> wrote:
>>
>>> Hi Ashish,
>>>
>>> Could you please let us see your error message?
>>>
>>>
>>> Thanks,
>>> Maryann
>>>
>>> On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya <
>>> ashishtapdiya@gmail.com> wrote:
>>>
>>>> Hey Maryann,
>>>>
>>>> Thanks for your input. I tried both the properties but no luck.
>>>>
>>>> ~Ashish
>>>>
>>>> On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue <ma...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Ashish,
>>>>>
>>>>> The global cache size is set to either "
>>>>> *phoenix.query.maxGlobalMemorySize*" or "phoenix.query.maxGlobalMemoryPercentage
>>>>> * heapSize" (Sorry about the mistake I made earlier). The "
>>>>> phoenix.query.maxServerCacheBytes" is a client parameter and is most
>>>>> likely NOT the thing you should worry about. So you can try adjusting "
>>>>> phoenix.query.maxGlobalMemoryPercentage" and the heap size in region
>>>>> server configurations and see how it works.
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>> On Fri, Sep 26, 2014 at 10:48 PM, ashish tapdiya <
>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>
>>>>>> I have tried that as well...but "phoenix.query.maxServerCacheBytes"
>>>>>> remains the default value of 100 MB. I get to see it when join fails.
>>>>>>
>>>>>> Thanks,
>>>>>> ~Ashish
>>>>>>
>>>>>> On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue <ma...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Ashish,
>>>>>>>
>>>>>>> The global cache size is set to either "phoenix.query.maxServerCacheBytes"
>>>>>>> or "phoenix.query.maxGlobalMemoryPercentage * heapSize", whichever
>>>>>>> is *smaller*. You can try setting "phoenix.query.
>>>>>>> maxGlobalMemoryPercentage" instead, which is recommended, and see
>>>>>>> how it goes.
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>> On Fri, Sep 26, 2014 at 5:37 PM, ashish tapdiya <
>>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Maryann,
>>>>>>>>
>>>>>>>> I am having the same issue where star join is failing with MaxServerCacheSizeExceededException.
>>>>>>>> I set phoenix.query.maxServerCacheBytes to 1 GB both in client and
>>>>>>>> server hbase-site.xml's. However, it does not take effect.
>>>>>>>>
>>>>>>>> Phoenix 3.1
>>>>>>>> HBase .94
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> ~Ashish
>>>>>>>>
>>>>>>>> On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue <maryann.xue@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Yes, you should make your modification on each region server,
>>>>>>>>> since this is a server-side configuration.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <
>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Xue,
>>>>>>>>>>
>>>>>>>>>>           Thanks for replying. I did modify the hbase-site.xml by
>>>>>>>>>> increasing the default value of phoenix.query.maxGlobalMemoryPercentage.
>>>>>>>>>> Also increased the Region server heap space memory. The
>>>>>>>>>> change didn't get reflected and I still get the error with an indication
>>>>>>>>>> that "global pool of 319507660 bytes" is present. Should I
>>>>>>>>>> modify the hbase-site.xml in every region server or just the file present
>>>>>>>>>> in the class path of Phoenix client?
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Vijay Raajaa G S
>>>>>>>>>>
>>>>>>>>>> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <
>>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>
>>>>>>>>>>> I think here the query plan is scanning table *CUSTOMER_30000 *while
>>>>>>>>>>> joining the other two tables at the same time, which means the region
>>>>>>>>>>> server memory for Phoenix should be large enough to hold 2 tables together
>>>>>>>>>>> and you also need to expect some memory expansion for java objects.
>>>>>>>>>>>
>>>>>>>>>>> Do you mean that after you had modified the parameters you
>>>>>>>>>>> mentioned, you were still getting the same error message with exactly the
>>>>>>>>>>> same numbers as "global pool of 319507660 bytes"? Did you make
>>>>>>>>>>> sure that the parameters actually took effect after modification?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Maryann
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>>>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>>     I am trying to do a join of three tables using the following
>>>>>>>>>>>> query:
>>>>>>>>>>>>
>>>>>>>>>>>> *select c.c_first_name, ca.ca_city, cd.cd_education_status from
>>>>>>>>>>>> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
>>>>>>>>>>>> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
>>>>>>>>>>>> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
>>>>>>>>>>>> c.c_first_name;*
>>>>>>>>>>>>
>>>>>>>>>>>> *The size of CUSTOMER_30000 is 4.1 GB with 30 million records.*
>>>>>>>>>>>>
>>>>>>>>>>>> *I get the following error:*
>>>>>>>>>>>>
>>>>>>>>>>>> ./psql.py 10.10.5.55 test.sql
>>>>>>>>>>>> java.sql.SQLException: Encountered exception in hash plan [0]
>>>>>>>>>>>> execution.
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>>>>>>>>>>> Caused by: java.sql.SQLException:
>>>>>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>>>>>>> at
>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>>>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>>>>>>>>> at
>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>>>>>>> at
>>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>>>>>>> at java.lang.Thread.run(Thread.java:662)
>>>>>>>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>> at
>>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>>>>>>>>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>>> at $Proxy10.addServerCache(Unknown Source)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>>>>>>>>>>> ... 5 more
>>>>>>>>>>>> Caused by:
>>>>>>>>>>>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
>>>>>>>>>>>> attempts=14, exceptions:
>>>>>>>>>>>> Tue Sep 23 00:25:53 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:26:02 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:26:18 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:26:43 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:27:01 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:27:10 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:27:24 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:28:16 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:28:35 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:29:09 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:30:16 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:31:22 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:32:29 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>> Tue Sep 23 00:33:35 CDT 2014,
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>>
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>>>>>>>>>>> at
>>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>>>>>>>>>>> ... 8 more
>>>>>>>>>>>>
>>>>>>>>>>>> Trials:
>>>>>>>>>>>>
>>>>>>>>>>>> I tried to increase the Region Server Heap space,
>>>>>>>>>>>> modified phoenix.query.maxGlobalMemoryPercentage as well.
>>>>>>>>>>>>
>>>>>>>>>>>> I am not able to increase the global memory.
>>>>>>>>>>>>
>>>>>>>>>>>> Regards,
>>>>>>>>>>>> Vijay Raajaa
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Maryann
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Thanks,
>>>>>>>>> Maryann
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Maryann
>>>
>>
>>
>
>
> --
> Thanks,
> Maryann
>

Re: Getting InsufficientMemoryException

Posted by Maryann Xue <ma...@gmail.com>.
Hi Ashish,

The "phoenix.query.maxServerCacheBytes" is a client parameter while the
other two are server parameters. But it looks like the configuration change
did not take effect at your client side. Could you please make sure that
this is the only configuration that goes to the CLASSPATH of your phoenix
client execution environment?
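
A minimal way to check which configuration the client actually picks up
(a sketch; HBaseConfiguration is the standard HBase client API the
Phoenix client reads its settings through, and the property name is the
one from this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CheckClientConfig {
    public static void main(String[] args) {
        // Loads hbase-default.xml and hbase-site.xml from the CLASSPATH,
        // the same lookup the Phoenix client configuration goes through.
        Configuration conf = HBaseConfiguration.create();
        // Prints null if the setting never reached the client CLASSPATH.
        System.out.println(conf.get("phoenix.query.maxServerCacheBytes"));
    }
}

Alternatively, since Phoenix applies JDBC connection properties on top of
the classpath configuration, the client parameter can be passed directly
at connection time, which sidesteps the CLASSPATH question entirely
(again a sketch; the host and the 1 GB value are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ConnectWithCacheLimit {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Client-side cap on the serialized hash-join cache, in bytes.
        props.setProperty("phoenix.query.maxServerCacheBytes", "1073741824");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:10.10.5.55", props)) {
            // run the join query here
        }
    }
}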

Another thing: the exception you got is a different problem from
Vijay's. It happened at an even earlier stage. Could you please also share
your query? We could probably rewrite it so that it better fits the
hash-join scheme. (Since table stats are not used in joins yet, we
currently have to do this manually.)
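
(Illustration only, not a confirmed fix for this query: when the
star-join optimization builds hash caches for several right-hand tables
at once, one manual rewrite is to disable it with the NO_STAR_JOIN hint
so the hash joins execute as a chain, e.g. select /*+ NO_STAR_JOIN */
... from CUSTOMER_30000 c join ...; another is to reorder the join so
the smaller table is the one serialized into the cache. Whether either
applies here depends on the query being asked for above.)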


Thanks,
Maryann

On Tue, Sep 30, 2014 at 1:22 PM, ashish tapdiya <as...@gmail.com>
wrote:

> Here it is,
>
> java.sql.SQLException: Encountered exception in hash plan [1] execution.
>         at
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>         at
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>         at
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>         at
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>         at
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>         at
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:158)
>         at Query.sel_Cust_Order_OrderLine_Tables(Query.java:135)
>         at Query.main(Query.java:25)
> Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException:
> Size of hash cache (104857684 bytes) exceeds the maximum allowed size
> (104857600 bytes)
>         at
> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:106)
>         at
> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:77)
>         at
> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>         at
> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
> I am setting hbase heap to 4 GB and phoenix properties are set as below
>
> <property>
>       <name>phoenix.query.maxServerCacheBytes</name>
>       <value>2004857600</value>
> </property>
> <property>
>       <name>phoenix.query.maxGlobalMemoryPercentage</name>
>       <value>40</value>
> </property>
> <property>
>       <name>phoenix.query.maxGlobalMemorySize</name>
>       <value>1504857600</value>
> </property>
>
> Thanks,
> ~Ashish
>
> On Tue, Sep 30, 2014 at 12:13 PM, Maryann Xue <ma...@gmail.com>
> wrote:
>
>> Hi Ashish,
>>
>> Could you please let us see your error message?
>>
>>
>> Thanks,
>> Maryann
>>
>> On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya <ashishtapdiya@gmail.com
>> > wrote:
>>
>>> Hey Maryann,
>>>
>>> Thanks for your input. I tried both the properties but no luck.
>>>
>>> ~Ashish
>>>
>>> On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue <ma...@gmail.com>
>>> wrote:
>>>
>>>> Hi Ashish,
>>>>
>>>> The global cache size is set to either "
>>>> *phoenix.query.maxGlobalMemorySize*" or "phoenix.query.maxGlobalMemoryPercentage
>>>> * heapSize" (Sorry about the mistake I made earlier). The ""
>>>> phoenix.query.maxServerCacheBytes" is a client parameter and is most
>>>> likely NOT the thing you should worry about. So you can try adjusting "
>>>> phoenix.query.maxGlobalMemoryPercentage" and the heap size in region
>>>> server configurations and see how it works.
>>>>
>>>>
>>>> Thanks,
>>>> Maryann
>>>>
>>>> On Fri, Sep 26, 2014 at 10:48 PM, ashish tapdiya <
>>>> ashishtapdiya@gmail.com> wrote:
>>>>
>>>>> I have tried that as well...but "phoenix.query.maxServerCacheBytes"
>>>>> remains at the default value of 100 MB. I see it when the join fails.
>>>>>
>>>>> Thanks,
>>>>> ~Ashish
>>>>>
>>>>> On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue <ma...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Ashish,
>>>>>>
>>>>>> The global cache size is set to either "phoenix.query.maxServerCacheBytes"
>>>>>> or "phoenix.query.maxGlobalMemoryPercentage * heapSize", whichever
>>>>>> is *smaller*. You can try setting "phoenix.query.
>>>>>> maxGlobalMemoryPercentage" instead, which is recommended, and see
>>>>>> how it goes.
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Maryann
>>>>>>
>>>>>> On Fri, Sep 26, 2014 at 5:37 PM, ashish tapdiya <
>>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Maryann,
>>>>>>>
>>>>>>> I am having the same issue where star join is failing with MaxServerCacheSizeExceededException.
>>>>>>> I set phoenix.query.maxServerCacheBytes to 1 GB both in client and
>>>>>>> server hbase-site.xml's. However, it does not take effect.
>>>>>>>
>>>>>>> Phoenix 3.1
>>>>>>> HBase .94
>>>>>>>
>>>>>>> Thanks,
>>>>>>> ~Ashish
>>>>>>>
>>>>>>> On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue <ma...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Yes, you should make your modification on each region server, since
>>>>>>>> this is a server-side configuration.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <
>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Xue,
>>>>>>>>>
>>>>>>>>>           Thanks for replying. I did modify the hbase-site.xml by
>>>>>>>>> increasing the default value of phoenix.query.maxGlobalMemoryPercentage.
>>>>>>>>> Also increased the Region server heap space memory. The
>>>>>>>>> change didn't get reflected and I still get the error with an indication
>>>>>>>>> that "global pool of 319507660 bytes" is present. Should I modify
>>>>>>>>> the hbase-site.xml in every region server or just the file present in
>>>>>>>>> the class path of Phoenix client?
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Vijay Raajaa G S
>>>>>>>>>
>>>>>>>>> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <
>>>>>>>>> maryann.xue@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Vijay,
>>>>>>>>>>
>>>>>>>>>> I think here the query plan is scanning table *CUSTOMER_30000 *while
>>>>>>>>>> joining the other two tables at the same time, which means the region
>>>>>>>>>> server memory for Phoenix should be large enough to hold 2 tables together
>>>>>>>>>> and you also need to expect some memory expansion for java objects.
>>>>>>>>>>
>>>>>>>>>> Do you mean that after you had modified the parameters you
>>>>>>>>>> mentioned, you were still getting the same error message with exactly the
>>>>>>>>>> same numbers as "global pool of 319507660 bytes"? Did you make
>>>>>>>>>> sure that the parameters actually took effect after modification?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Maryann
>>>>>>>>>>
>>>>>>>>>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>>     I am trying to do a join of three tables using the following
>>>>>>>>>>> query:
>>>>>>>>>>>
>>>>>>>>>>> *select c.c_first_name, ca.ca_city, cd.cd_education_status from
>>>>>>>>>>> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
>>>>>>>>>>> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
>>>>>>>>>>> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
>>>>>>>>>>> c.c_first_name;*
>>>>>>>>>>>
>>>>>>>>>>> *The size of CUSTOMER_30000 is 4.1 GB with 30 million records.*
>>>>>>>>>>>
>>>>>>>>>>> *I get the following error:*
>>>>>>>>>>>
>>>>>>>>>>> ./psql.py 10.10.5.55 test.sql
>>>>>>>>>>> java.sql.SQLException: Encountered exception in hash plan [0]
>>>>>>>>>>> execution.
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>>>>>>>>>> Caused by: java.sql.SQLException:
>>>>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>>>>>> at
>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>>>>>>>> at
>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>>>>>> at
>>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>>>>>> at java.lang.Thread.run(Thread.java:662)
>>>>>>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>> at
>>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>>>>>>>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>>>>>>>>>> ... 8 more
>>>>>>>>>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>>> at $Proxy10.addServerCache(Unknown Source)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>>>>>>>>>> ... 5 more
>>>>>>>>>>> Caused by:
>>>>>>>>>>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
>>>>>>>>>>> attempts=14, exceptions:
>>>>>>>>>>> Tue Sep 23 00:25:53 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:26:02 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:26:18 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:26:43 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:27:01 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:27:10 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:27:24 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:28:16 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:28:35 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:29:09 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:30:16 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:31:22 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:32:29 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>> Tue Sep 23 00:33:35 CDT 2014,
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>>
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>>>>>>>>>> at
>>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>>>>>>>>>> ... 8 more
>>>>>>>>>>>
>>>>>>>>>>> Trials:
>>>>>>>>>>>
>>>>>>>>>>> I tried to increase the Region Server Heap space,
>>>>>>>>>>> modified phoenix.query.maxGlobalMemoryPercentage as well.
>>>>>>>>>>>
>>>>>>>>>>> I am not able to increase the global memory.
>>>>>>>>>>>
>>>>>>>>>>> Regards,
>>>>>>>>>>> Vijay Raajaa
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Thanks,
>>>>>>>>>> Maryann
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Thanks,
>>>>>>>> Maryann
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Thanks,
>>>>>> Maryann
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Thanks,
>>>> Maryann
>>>>
>>>
>>>
>>
>>
>> --
>> Thanks,
>> Maryann
>>
>
>


-- 
Thanks,
Maryann

Re: Getting InsufficientMemoryException

Posted by ashish tapdiya <as...@gmail.com>.
Here it is,

java.sql.SQLException: Encountered exception in hash plan [1] execution.
        at
org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
        at
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
        at
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
        at
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
        at
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
        at
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:158)
        at Query.sel_Cust_Order_OrderLine_Tables(Query.java:135)
        at Query.main(Query.java:25)
Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException:
Size of hash cache (104857684 bytes) exceeds the maximum allowed size
(104857600 bytes)
        at
org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:106)
        at
org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:77)
        at
org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
        at
org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

I am setting hbase heap to 4 GB and phoenix properties are set as below

<property>
      <name>phoenix.query.maxServerCacheBytes</name>
      <value>2004857600</value>
</property>
<property>
      <name>phoenix.query.maxGlobalMemoryPercentage</name>
      <value>40</value>
</property>
<property>
      <name>phoenix.query.maxGlobalMemorySize</name>
      <value>1504857600</value>
</property>
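
A quick arithmetic check on these numbers (a sketch in Java; the
"whichever is smaller" rule follows Maryann's description quoted later
in this message, and every constant is taken from the settings and
stack trace above):

public class PoolMath {
    public static void main(String[] args) {
        long heap = 4L * 1024 * 1024 * 1024;   // 4 GB region server heap
        long pctPool = heap * 40 / 100;        // 40% => 1717986918 bytes
        long sizePool = 1504857600L;           // phoenix.query.maxGlobalMemorySize
        // Effective server-side global pool under the min() rule: ~1.5 GB.
        System.out.println(Math.min(pctPool, sizePool));
        // The cap actually enforced in the stack trace above:
        System.out.println(104857600 == 100 * 1024 * 1024); // true
    }
}

The second check is the telling one: 104857600 bytes is exactly 100 MB,
the default for phoenix.query.maxServerCacheBytes, so the 2004857600
value configured above evidently never reached the client.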

Thanks,
~Ashish

On Tue, Sep 30, 2014 at 12:13 PM, Maryann Xue <ma...@gmail.com> wrote:

> Hi Ashish,
>
> Could you please let us see your error message?
>
>
> Thanks,
> Maryann
>
> On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya <as...@gmail.com>
> wrote:
>
>> Hey Maryann,
>>
>> Thanks for your input. I tried both the properties but no luck.
>>
>> ~Ashish
>>
>> On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue <ma...@gmail.com>
>> wrote:
>>
>>> Hi Ashish,
>>>
>>> The global cache size is set to either "
>>> *phoenix.query.maxGlobalMemorySize*" or "phoenix.query.maxGlobalMemoryPercentage
>>> * heapSize" (Sorry about the mistake I made earlier). The ""
>>> phoenix.query.maxServerCacheBytes" is a client parameter and is most
>>> likely NOT the thing you should worry about. So you can try adjusting "
>>> phoenix.query.maxGlobalMemoryPercentage" and the heap size in region
>>> server configurations and see how it works.
>>>
>>>
>>> Thanks,
>>> Maryann
>>>
>>> On Fri, Sep 26, 2014 at 10:48 PM, ashish tapdiya <
>>> ashishtapdiya@gmail.com> wrote:
>>>
>>>> I have tried that as well...but "phoenix.query.maxServerCacheBytes"
>>>> remains at the default value of 100 MB. I see it when the join fails.
>>>>
>>>> Thanks,
>>>> ~Ashish
>>>>
>>>> On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue <ma...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Ashish,
>>>>>
>>>>> The global cache size is set to either "phoenix.query.maxServerCacheBytes"
>>>>> or "phoenix.query.maxGlobalMemoryPercentage * heapSize", whichever is
>>>>> *smaller*. You can try setting "phoenix.query.
>>>>> maxGlobalMemoryPercentage" instead, which is recommended, and see how
>>>>> it goes.
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>> On Fri, Sep 26, 2014 at 5:37 PM, ashish tapdiya <
>>>>> ashishtapdiya@gmail.com> wrote:
>>>>>
>>>>>> Hi Maryann,
>>>>>>
>>>>>> I am having the same issue where star join is failing with MaxServerCacheSizeExceededException.
>>>>>> I set phoenix.query.maxServerCacheBytes to 1 GB both in client and
>>>>>> server hbase-site.xml's. However, it does not take effect.
>>>>>>
>>>>>> Phoenix 3.1
>>>>>> HBase .94
>>>>>>
>>>>>> Thanks,
>>>>>> ~Ashish
>>>>>>
>>>>>> On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue <ma...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Yes, you should make your modification on each region server, since
>>>>>>> this is a server-side configuration.
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <
>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Xue,
>>>>>>>>
>>>>>>>>           Thanks for replying. I did modify the hbase-site.xml by
>>>>>>>> increasing the default value of phoenix.query.maxGlobalMemoryPercentage.
>>>>>>>> Also increased the Region server heap space memory. The
>>>>>>>> change didn't get reflected and I still get the error with an indication
>>>>>>>> that "global pool of 319507660 bytes" is present. Should I modify
>>>>>>>> the hbase-site.xml in every region server or just the file present in
>>>>>>>> the class path of Phoenix client?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Vijay Raajaa G S
>>>>>>>>
>>>>>>>> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <maryann.xue@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Hi Vijay,
>>>>>>>>>
>>>>>>>>> I think here the query plan is scanning table *CUSTOMER_30000 *while
>>>>>>>>> joining the other two tables at the same time, which means the region
>>>>>>>>> server memory for Phoenix should be large enough to hold 2 tables together
>>>>>>>>> and you also need to expect some memory expansion for java objects.
>>>>>>>>>
>>>>>>>>> Do you mean that after you had modified the parameters you
>>>>>>>>> mentioned, you were still getting the same error message with exactly the
>>>>>>>>> same numbers as "global pool of 319507660 bytes"? Did you make
>>>>>>>>> sure that the parameters actually took effect after modification?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Maryann
>>>>>>>>>
>>>>>>>>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>>     I am trying to do a join of three tables using the following
>>>>>>>>>> query:
>>>>>>>>>>
>>>>>>>>>> *select c.c_first_name, ca.ca_city, cd.cd_education_status from
>>>>>>>>>> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
>>>>>>>>>> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
>>>>>>>>>> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
>>>>>>>>>> c.c_first_name;*
>>>>>>>>>>
>>>>>>>>>> *The size of CUSTOMER_30000 is 4.1 GB with 30 million records.*
>>>>>>>>>>
>>>>>>>>>> *I get the following error:*
>>>>>>>>>>
>>>>>>>>>> ./psql.py 10.10.5.55 test.sql
>>>>>>>>>> java.sql.SQLException: Encountered exception in hash plan [0]
>>>>>>>>>> execution.
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>>>>>>>>> Caused by: java.sql.SQLException:
>>>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>>>>> at
>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>>>>>>> at
>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>>>>> at
>>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>>>>> at java.lang.Thread.run(Thread.java:662)
>>>>>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>> at
>>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>>>>>>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>>>>>>>>> ... 8 more
>>>>>>>>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>>>>>>>>> at $Proxy10.addServerCache(Unknown Source)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>>>>>>>>> at
>>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>>>>>>>>> ... 5 more
>>>>>>>>>> Caused by:
>>>>>>>>>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
>>>>>>>>>> attempts=14, exceptions:
>>>>>>>>>> Tue Sep 23 00:25:53 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:26:02 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:26:18 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:26:43 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:27:01 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:27:10 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:27:24 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:28:16 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:28:35 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:29:09 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:30:16 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:31:22 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:32:29 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>> Tue Sep 23 00:33:35 CDT 2014,
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>>
>>>>>>>>>> at
>>>>>>>>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>>>>>>>>> at
>>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>>>>>>>>> ... 8 more
>>>>>>>>>>
>>>>>>>>>> Trials:
>>>>>>>>>>
>>>>>>>>>> I tried to increase the Region Server Heap space,
>>>>>>>>>> modified phoenix.query.maxGlobalMemoryPercentage as well.
>>>>>>>>>>
>>>>>>>>>> I am not able to increase the global memory.
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Vijay Raajaa
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Thanks,
>>>>>>>>> Maryann
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Maryann
>>>
>>
>>
>
>
> --
> Thanks,
> Maryann
>

Re: Getting InsufficientMemoryException

Posted by Maryann Xue <ma...@gmail.com>.
Hi Ashish,

Could you please let us see your error message?


Thanks,
Maryann

On Tue, Sep 30, 2014 at 12:58 PM, ashish tapdiya <as...@gmail.com>
wrote:

> Hey Maryann,
>
> Thanks for your input. I tried both the properties but no luck.
>
> ~Ashish
>
> On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue <ma...@gmail.com>
> wrote:
>
>> Hi Ashish,
>>
>> The global cache size is set to either "
>> *phoenix.query.maxGlobalMemorySize*" or "phoenix.query.maxGlobalMemoryPercentage
>> * heapSize" (Sorry about the mistake I made earlier). The ""
>> phoenix.query.maxServerCacheBytes" is a client parameter and is most
>> likely NOT the thing you should worry about. So you can try adjusting "
>> phoenix.query.maxGlobalMemoryPercentage" and the heap size in region
>> server configurations and see how it works.
>>
>>
>> Thanks,
>> Maryann
>>
>> On Fri, Sep 26, 2014 at 10:48 PM, ashish tapdiya <ashishtapdiya@gmail.com
>> > wrote:
>>
>>> I have tried that as well...but "phoenix.query.maxServerCacheBytes"
>>> remains at the default value of 100 MB. I see it when the join fails.
>>>
>>> Thanks,
>>> ~Ashish
>>>
>>> On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue <ma...@gmail.com>
>>> wrote:
>>>
>>>> Hi Ashish,
>>>>
>>>> The global cache size is set to either "phoenix.query.maxServerCacheBytes"
>>>> or "phoenix.query.maxGlobalMemoryPercentage * heapSize", whichever is
>>>> *smaller*. You can try setting "phoenix.query.maxGlobalMemoryPercentage"
>>>> instead, which is recommended, and see how it goes.
>>>>
>>>>
>>>> Thanks,
>>>> Maryann
>>>>
>>>> On Fri, Sep 26, 2014 at 5:37 PM, ashish tapdiya <
>>>> ashishtapdiya@gmail.com> wrote:
>>>>
>>>>> Hi Maryann,
>>>>>
>>>>> I am having the same issue where star join is failing with MaxServerCacheSizeExceededException.
>>>>> I set phoenix.query.maxServerCacheBytes to 1 GB both in client and
>>>>> server hbase-site.xml's. However, it does not take effect.
>>>>>
>>>>> Phoenix 3.1
>>>>> HBase .94
>>>>>
>>>>> Thanks,
>>>>> ~Ashish
>>>>>
>>>>> On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue <ma...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Yes, you should make your modification on each region server, since
>>>>>> this is a server-side configuration.
>>>>>>
>>>>>>
>>>>>> On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <
>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Xue,
>>>>>>>
>>>>>>>           Thanks for replying. I did modify the hbase-site.xml by
>>>>>>> increasing the default value of phoenix.query.maxGlobalMemoryPercentage.
>>>>>>> Also increased the Region server heap space memory. The
>>>>>>> change didn't get reflected and I still get the error with an indication
>>>>>>> that "global pool of 319507660 bytes" is present. Should I modify
>>>>>>> the hbase-site.xml in every region server or just the file present in
>>>>>>> the class path of Phoenix client?
>>>>>>>
>>>>>>> Regards,
>>>>>>> Vijay Raajaa G S
>>>>>>>
>>>>>>> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <ma...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Vijay,
>>>>>>>>
>>>>>>>> I think here the query plan is scanning table *CUSTOMER_30000 *while
>>>>>>>> joining the other two tables at the same time, which means the region
>>>>>>>> server memory for Phoenix should be large enough to hold 2 tables together
>>>>>>>> and you also need to expect some memory expansion for java objects.
>>>>>>>>
>>>>>>>> Do you mean that after you had modified the parameters you
>>>>>>>> mentioned, you were still getting the same error message with exactly the
>>>>>>>> same numbers as "global pool of 319507660 bytes"? Did you make
>>>>>>>> sure that the parameters actually took effect after modification?
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Maryann
>>>>>>>>
>>>>>>>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>>     I am trying to do a join of three tables using the following
>>>>>>>>> query:
>>>>>>>>>
>>>>>>>>> *select c.c_first_name, ca.ca_city, cd.cd_education_status from
>>>>>>>>> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
>>>>>>>>> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
>>>>>>>>> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
>>>>>>>>> c.c_first_name;*
>>>>>>>>>
>>>>>>>>> *The size of CUSTOMER_30000 is 4.1 GB with 30 million records.*
>>>>>>>>>
>>>>>>>>> *I get the following error:*
>>>>>>>>>
>>>>>>>>> ./psql.py 10.10.5.55 test.sql
>>>>>>>>> java.sql.SQLException: Encountered exception in hash plan [0]
>>>>>>>>> execution.
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>>>>>>>> Caused by: java.sql.SQLException:
>>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>>>> at
>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>>>>>> at
>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>>>> at
>>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>>>> at java.lang.Thread.run(Thread.java:662)
>>>>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>>> at
>>>>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>>>>>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>>>>>>>> ... 8 more
>>>>>>>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>>>>>>>> at $Proxy10.addServerCache(Unknown Source)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>>>>>>>> at
>>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>>>>>>>> ... 5 more
>>>>>>>>> Caused by:
>>>>>>>>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
>>>>>>>>> attempts=14, exceptions:
>>>>>>>>> Tue Sep 23 00:25:53 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:26:02 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:26:18 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:26:43 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:27:01 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:27:10 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:27:24 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:28:16 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:28:35 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:29:09 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:30:16 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:31:22 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:32:29 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>> Tue Sep 23 00:33:35 CDT 2014,
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>>
>>>>>>>>> at
>>>>>>>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>>>>>>>> at
>>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>>>>>>>> ... 8 more
>>>>>>>>>
>>>>>>>>> Trials:
>>>>>>>>>
>>>>>>>>> I tried to increase the Region Server Heap space,
>>>>>>>>> modified phoenix.query.maxGlobalMemoryPercentage as well.
>>>>>>>>>
>>>>>>>>> I am not able to increase the global memory.
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Vijay Raajaa
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Thanks,
>>>>>>>> Maryann
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Thanks,
>>>>>> Maryann
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Thanks,
>>>> Maryann
>>>>
>>>
>>>
>>
>>
>> --
>> Thanks,
>> Maryann
>>
>
>


-- 
Thanks,
Maryann

Re: Getting InsufficientMemoryException

Posted by ashish tapdiya <as...@gmail.com>.
Hey Maryann,

Thanks for your input. I tried both the properties but no luck.

~Ashish

On Sun, Sep 28, 2014 at 8:31 PM, Maryann Xue <ma...@gmail.com> wrote:

> Hi Ashish,
>
> The global cache size is set to either "
> *phoenix.query.maxGlobalMemorySize*" or "phoenix.query.maxGlobalMemoryPercentage
> * heapSize" (Sorry about the mistake I made earlier). The ""phoenix.query.maxServerCacheBytes"
> is a client parameter and is most likely NOT the thing you should worry
> about. So you can try adjusting "phoenix.query.maxGlobalMemoryPercentage"
> and the heap size in region server configurations and see how it works.
>
>
> Thanks,
> Maryann
>
> On Fri, Sep 26, 2014 at 10:48 PM, ashish tapdiya <as...@gmail.com>
> wrote:
>
>> I have tried that as well...but "phoenix.query.maxServerCacheBytes"
>> remains at the default value of 100 MB. I see it when the join fails.
>>
>> Thanks,
>> ~Ashish
>>
>> On Fri, Sep 26, 2014 at 8:02 PM, Maryann Xue <ma...@gmail.com>
>> wrote:
>>
>>> Hi Ashish,
>>>
>>> The global cache size is set to either "phoenix.query.maxServerCacheBytes"
>>> or "phoenix.query.maxGlobalMemoryPercentage * heapSize", whichever is
>>> *smaller*. You can try setting "phoenix.query.maxGlobalMemoryPercentage"
>>> instead, which is recommended, and see how it goes.
>>>
>>>
>>> Thanks,
>>> Maryann
>>>
>>> On Fri, Sep 26, 2014 at 5:37 PM, ashish tapdiya <ashishtapdiya@gmail.com
>>> > wrote:
>>>
>>>> Hi Maryann,
>>>>
>>>> I am having the same issue where star join is failing with MaxServerCacheSizeExceededException.
>>>> I set phoenix.query.maxServerCacheBytes to 1 GB both in client and
>>>> server hbase-site.xml's. However, it does not take effect.
>>>>
>>>> Phoenix 3.1
>>>> HBase .94
>>>>
>>>> Thanks,
>>>> ~Ashish
>>>>
>>>> On Fri, Sep 26, 2014 at 2:56 PM, Maryann Xue <ma...@gmail.com>
>>>> wrote:
>>>>
>>>>> Yes, you should make your modification on each region server, since
>>>>> this is a server-side configuration.
>>>>>
>>>>>
>>>>> On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <
>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>
>>>>>> Hi Xue,
>>>>>>
>>>>>>           Thanks for replying. I did modify the hbase-site.xml by
>>>>>> increasing the default value of phoenix.query.maxGlobalMemoryPercentage
>>>>>> . Also increased the Region server heap space memory . The
>>>>>> change didn't get reflected and I still get the error with an indication
>>>>>> that "global pool of 319507660 bytes" is present. Should I modify
>>>>>> the hbase-site.xml in every region server or just the file present in
>>>>>> the class path of Phoenix client?
>>>>>>
>>>>>> Regards,
>>>>>> Vijay Raajaa G S
>>>>>>
>>>>>> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <ma...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Vijay,
>>>>>>>
>>>>>>> I think here the query plan is scanning table *CUSTOMER_30000 *while
>>>>>>> joining the other two tables at the same time, which means the region
>>>>>>> server memory for Phoenix should be large enough to hold 2 tables together
>>>>>>> and you also need to expect some memory expansion for java objects.
>>>>>>>
>>>>>>> Do you mean that after you had modified the parameters you
>>>>>>> mentioned, you were still getting the same error message with exactly the
>>>>>>> same numbers as "global pool of 319507660 bytes"? Did you make sure
>>>>>>> that the parameters actually took effect after modification?
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>>>>>>> gsvijayraajaa@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>>     I am trying to do a join of three tables usng the following
>>>>>>>> query:
>>>>>>>>
>>>>>>>> *select c.c_first_name, ca.ca_city, cd.cd_education_status from
>>>>>>>> CUSTOMER_30000 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
>>>>>>>> cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
>>>>>>>> ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
>>>>>>>> c.c_first_name;*
>>>>>>>>
>>>>>>>> *The size of CUSTOMER_30000 is 4.1 GB with 30million records.*
>>>>>>>>
>>>>>>>> *I get the following error:*
>>>>>>>>
>>>>>>>> ./psql.py 10.10.5.55 test.sql
>>>>>>>> java.sql.SQLException: Encountered exception in hash plan [0]
>>>>>>>> execution.
>>>>>>>> at
>>>>>>>> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
>>>>>>>> Caused by: java.sql.SQLException:
>>>>>>>> java.util.concurrent.ExecutionException:
>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>> at
>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
>>>>>>>> at
>>>>>>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>>>>>> at
>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>>> at
>>>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>>> at java.lang.Thread.run(Thread.java:662)
>>>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>>>> java.lang.reflect.UndeclaredThrowableException
>>>>>>>> at
>>>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
>>>>>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
>>>>>>>> ... 8 more
>>>>>>>> Caused by: java.lang.reflect.UndeclaredThrowableException
>>>>>>>> at $Proxy10.addServerCache(Unknown Source)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
>>>>>>>> at
>>>>>>>> org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
>>>>>>>> ... 5 more
>>>>>>>> Caused by:
>>>>>>>> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
>>>>>>>> attempts=14, exceptions:
>>>>>>>> Tue Sep 23 00:25:53 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:26:02 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:26:18 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:26:43 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:27:01 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:27:10 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:27:24 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:28:16 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:28:35 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:29:09 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:30:16 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:31:22 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:32:29 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>> Tue Sep 23 00:33:35 CDT 2014,
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
>>>>>>>> java.io.IOException: java.io.IOException:
>>>>>>>> org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
>>>>>>>> 446623727 bytes is larger than global pool of 319507660 bytes.
>>>>>>>>
>>>>>>>> at
>>>>>>>> org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:187)
>>>>>>>> at
>>>>>>>> org.apache.hadoop.hbase.ipc.ExecRPCInvoker.invoke(ExecRPCInvoker.java:79)
>>>>>>>> ... 8 more
>>>>>>>>
>>>>>>>> Trials:
>>>>>>>>
>>>>>>>> I tried to increase the Region Server Heap space ,
>>>>>>>> modified phoenix.query.maxGlobalMemoryPercentage as well.
>>>>>>>>
>>>>>>>> I am not able to increase the global memory .
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Vijay Raajaa
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Thanks,
>>>>>>> Maryann
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Maryann
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Maryann
>>>
>>
>>
>
>
> --
> Thanks,
> Maryann
>

Re: Getting InsufficientMemoryException

Posted by Maryann Xue <ma...@gmail.com>.
Hi Ashish,

The global cache size is set to either "*phoenix.query.maxGlobalMemorySize*"
or "phoenix.query.maxGlobalMemoryPercentage * heapSize" (sorry about the
mistake I made earlier). "phoenix.query.maxServerCacheBytes" is a client
parameter and is most likely NOT the thing you should worry about. So you
can try adjusting "phoenix.query.maxGlobalMemoryPercentage" and the heap
size in the region server configuration and see how it works.
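
For example, the settings would look something like the sketch below in
each region server's hbase-site.xml (the 35 percent and 4 GB figures here
are illustrative values only, not recommendations):

  <property>
    <name>phoenix.query.maxGlobalMemoryPercentage</name>
    <value>35</value>
  </property>

together with a larger heap in hbase-env.sh (the value is in MB):

  export HBASE_HEAPSIZE=4096

With those two settings the global pool works out to roughly 0.35 * 4 GB,
about 1.4 GB, comfortably above the 446623727-byte request in the stack
trace.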


Thanks,
Maryann


Re: Getting InsufficientMemoryException

Posted by ashish tapdiya <as...@gmail.com>.
I have tried that as well, but "phoenix.query.maxServerCacheBytes" remains
at the default value of 100 MB; that is the figure I see reported when the
join fails.

Thanks,
~Ashish


Re: Getting InsufficientMemoryException

Posted by Maryann Xue <ma...@gmail.com>.
Hi Ashish,

The global cache size is set to either "phoenix.query.maxServerCacheBytes"
or "phoenix.query.maxGlobalMemoryPercentage * heapSize", whichever is
*smaller*. You can try setting "phoenix.query.maxGlobalMemoryPercentage"
instead, which is recommended, and see how it goes.
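
As a quick sanity check against the numbers in the original stack trace:
assuming the default phoenix.query.maxGlobalMemoryPercentage of 15, a
global pool of 319507660 bytes implies a region server heap of roughly
319507660 / 0.15, i.e. about 2 GB. To admit the 446623727-byte request, the
pool has to exceed roughly 447 MB, which means either a heap of about 3 GB
or more at 15 percent, or the same 2 GB heap with the percentage raised to
around 25.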


Thanks,
Maryann


Re: Getting InsufficientMemoryException

Posted by ashish tapdiya <as...@gmail.com>.
Hi Maryann,

I am having the same issue: a star join is failing with
MaxServerCacheSizeExceededException. I set phoenix.query.maxServerCacheBytes
to 1 GB in both the client and server hbase-site.xml files; however, it does
not take effect.
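
The entry I added to both files looks like this (with 1 GB written out in
bytes):

  <property>
    <name>phoenix.query.maxServerCacheBytes</name>
    <value>1073741824</value> <!-- 1 GB -->
  </property>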

Phoenix 3.1
HBase 0.94

Thanks,
~Ashish


Re: Getting InsufficientMemoryException

Posted by Maryann Xue <ma...@gmail.com>.
Yes, you should make your modification on each region server, since this is
a server-side configuration.
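
Note that hbase-site.xml is only read at startup, so the region servers
also have to be restarted after the change, for example with the rolling
restart script that ships with HBase (a sketch, assuming a default install
layout):

  $HBASE_HOME/bin/rolling-restart.sh --rs-only

Otherwise the old pool size stays in effect, which would explain still
seeing the same 319507660-byte figure after editing the file.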


On Thu, Sep 25, 2014 at 4:15 AM, G.S.Vijay Raajaa <gs...@gmail.com>
wrote:

> Hi Xue,
>
>           Thanks for replying. I did modify the hbase-site.xml by
> increasing the default value of phoenix.query.maxGlobalMemoryPercentage,
> and I also increased the region server heap size. The change didn't take
> effect, and I still get the error reporting a "global pool of 319507660
> bytes". Should I modify the hbase-site.xml on every region server, or just
> the file on the class path of the Phoenix client?
>
> Regards,
> Vijay Raajaa G S
>
> On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <ma...@gmail.com>
> wrote:
>
>> Hi Vijay,
>>
>> I think here the query plan is scanning table *CUSTOMER_30000* while
>> joining the other two tables at the same time, which means the region
>> server memory for Phoenix should be large enough to hold the two joined
>> tables together, and you also need to expect some memory expansion for
>> Java objects.
>>
>> Do you mean that after you had modified the parameters you mentioned, you
>> were still getting the same error message with exactly the same numbers as "global
>> pool of 319507660 bytes"? Did you make sure that the parameters actually
>> took effect after modification?
>>
>>
>> Thanks,
>> Maryann
>>
>> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <
>> gsvijayraajaa@gmail.com> wrote:
>>
>>> [quoted original message and stack trace trimmed; the reported error was
>>> InsufficientMemoryException: Requested memory of 446623727 bytes is
>>> larger than global pool of 319507660 bytes]


-- 
Thanks,
Maryann

Re: Getting InsufficientMemoryException

Posted by "G.S.Vijay Raajaa" <gs...@gmail.com>.
Hi Xue,

          Thanks for replying. I did modify hbase-site.xml by
increasing the default value of phoenix.query.maxGlobalMemoryPercentage,
and I also increased the region server heap. The change didn't take
effect, and I still get the error reporting the same "global pool of
319507660 bytes". Should I modify hbase-site.xml on every region server,
or just the file on the class path of the Phoenix client?

Regards,
Vijay Raajaa G S

On Thu, Sep 25, 2014 at 1:47 AM, Maryann Xue <ma...@gmail.com> wrote:

> Hi Vijay,
>
> I think the query plan here scans table *CUSTOMER_30000* while
> joining the other two tables at the same time, which means the region
> server memory for Phoenix should be large enough to hold the two joined
> tables at once, and you also need to allow for some memory expansion from
> Java object overhead.
>
> Do you mean that after you modified the parameters, you were still
> getting the same error message with exactly the same numbers, "global
> pool of 319507660 bytes"? Did you make sure the parameters actually
> took effect after the modification?
>
>
> Thanks,
> Maryann
>
> On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <gsvijayraajaa@gmail.com> wrote:
>
>> [quoted original message and stack trace trimmed; the reported error was
>> InsufficientMemoryException: Requested memory of 446623727 bytes is
>> larger than global pool of 319507660 bytes]

Re: Getting InsufficientMemoryException

Posted by Maryann Xue <ma...@gmail.com>.
Hi Vijay,

I think the query plan here scans table *CUSTOMER_30000* while
joining the other two tables at the same time, which means the region
server memory for Phoenix should be large enough to hold the two joined
tables at once, and you also need to allow for some memory expansion from
Java object overhead.

Do you mean that after you modified the parameters, you were still
getting the same error message with exactly the same numbers, "global
pool of 319507660 bytes"? Did you make sure the parameters actually
took effect after the modification?
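
As a rough sanity check on those numbers (assuming
phoenix.query.maxGlobalMemoryPercentage is still at its default of 15,
which is taken as a percentage of the region server heap): 319507660 /
0.15 is about 2.1 GB, consistent with a region server heap of roughly
2 GB. Fitting the requested 446623727 bytes at the default percentage
would take about 446623727 / 0.15, i.e. roughly 3 GB of heap, plus
headroom for the scan running alongside it.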


Thanks,
Maryann

On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa <gs...@gmail.com>
wrote:

> [quoted original message and stack trace trimmed; the reported error was
> InsufficientMemoryException: Requested memory of 446623727 bytes is
> larger than global pool of 319507660 bytes]



-- 
Thanks,
Maryann