Posted to user@phoenix.apache.org by Nanda <tn...@gmail.com> on 2016/02/10 13:37:02 UTC

org.apache.phoenix.join.MaxServerCacheSizeExceededException

Hi ,

I am using HDP 2.3.0 with Phoenix 4.4 and I quite often get the below
exception:

Caused by: java.sql.SQLException: Encountered exception in sub plan [0]
execution.
        at
org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:156)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:251)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:241)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:240)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1223)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at
com.brocade.nva.dataaccess.AbstractDAO.getResultSet(AbstractDAO.java:388)
~[nvp-data-access-1.0-SNAPSHOT.jar:na]
        at
com.brocade.nva.dataaccess.HistoryDAO.getSummaryTOP10ReportDetails(HistoryDAO.java:306)
~[nvp-data-access-1.0-SNAPSHOT.jar:na]
        ... 75 common frames omitted
Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException:
Size of hash cache (104857651 bytes) exceeds the maximum allowed size
(104857600 bytes)
        at
org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:109)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at
org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:82)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at
org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:338)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at
org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:135)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
~[na:1.8.0_40]
        at
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_40]


Below are the params I am using:

server-side properties:
phoenix.coprocessor.maxServerCacheTimeToLiveMs=180000
phoenix.groupby.maxCacheSize=1572864000
phoenix.query.maxGlobalMemoryPercentage=60
phoenix.query.maxGlobalMemorySize=4096000000
phoenix.stats.guidepost.width=524288000


client-side properties:
hbase.client.scanner.timeout.period=180000
phoenix.query.spoolThresholdBytes=1048576000
phoenix.query.timeoutMs=180000
phoenix.query.threadPoolSize=240
phoenix.query.maxGlobalMemoryPercentage=60
phoenix.query.maxServerCacheBytes=1048576810


and my HBase heap is set to 4 GB.
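
For reference, a minimal sketch of how these client-side keys can be
passed directly on the JDBC connection (the quorum host is a
placeholder, and only a few of the keys above are shown; Phoenix merges
the connection Properties into the client configuration when the
connection services for that URL are first created):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class PhoenixClientProps {
        public static void main(String[] args) throws Exception {
            // Client-side Phoenix settings can be supplied per connection
            // instead of relying on whichever hbase-site.xml is on the
            // classpath.
            Properties props = new Properties();
            props.setProperty("phoenix.query.maxServerCacheBytes", "1048576810");
            props.setProperty("phoenix.query.timeoutMs", "180000");
            props.setProperty("hbase.client.scanner.timeout.period", "180000");

            // "zk-host" is a placeholder for the ZooKeeper quorum.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:zk-host:2181", props)) {
                // joins issued on this connection use the limits above
            }
        }
    }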

Is there some property I need to set explicitly for this?

Thanks,
Nanda

Re: org.apache.phoenix.join.MaxServerCacheSizeExceededException

Posted by Nanda <tn...@gmail.com>.
I have already overridden the property, but it still takes the default
value, because of which none of my joins are working.
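
A minimal way to check which value the client JVM actually resolved
(assuming the Phoenix 4.4 thick driver, where PhoenixConnection and
QueryServices.getProps() are public API; "zk-host" is a placeholder):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.apache.phoenix.jdbc.PhoenixConnection;

    public class CheckCacheLimit {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:zk-host:2181")) {
                PhoenixConnection pconn = conn.unwrap(PhoenixConnection.class);
                // null means the key was never loaded, in which case the
                // 100 MB default applies
                String resolved = pconn.getQueryServices().getProps()
                        .get("phoenix.query.maxServerCacheBytes");
                System.out.println("maxServerCacheBytes = " + resolved);
            }
        }
    }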

TIA
NANDA
On Feb 10, 2016 8:52 PM, "rafa" <ra...@gmail.com> wrote:

> Hi Nanda,
>
> It seems your client is still picking up the default value for
> phoenix.query.maxServerCacheBytes (the limit is enforced client-side,
> in HashCacheClient.serialize)
>
> https://phoenix.apache.org/tuning.html
>
> phoenix.query.maxServerCacheBytes
>
>    - Maximum size (in bytes) of the raw results of a relation before
>    being compressed and sent over to the region servers.
>    - Attempting to serialize the raw results of a relation with a size
>    bigger than this setting will result in a
>    MaxServerCacheSizeExceededException.
>    - *Default: 104,857,600*
>
> Regards,
>
> rafa
>

Re: org.apache.phoenix.join.MaxServerCacheSizeExceededException

Posted by rafa <ra...@gmail.com>.
Hi Nanda,

It seems your client is still picking up the default value for
phoenix.query.maxServerCacheBytes (the limit is enforced client-side,
in HashCacheClient.serialize, as your stack trace shows)

https://phoenix.apache.org/tuning.html

phoenix.query.maxServerCacheBytes

   - Maximum size (in bytes) of the raw results of a relation before being
   compressed and sent over to the region servers.
   - Attempting to serialize the raw results of a relation with a size
   bigger than this setting will result in a
   MaxServerCacheSizeExceededException (a workaround sketch follows this
   list).
   - *Default: 104,857,600*
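
Note that the failing cache (104,857,651 bytes) is only 51 bytes over
that default (104,857,600 = 100 x 1024 x 1024, i.e. 100 MB), so a modest
client-side increase should be enough. Alternatively, a sort-merge join
avoids building the server hash cache entirely. A minimal sketch,
assuming your Phoenix build supports the documented USE_SORT_MERGE_JOIN
hint (the table and column names are made up for illustration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SortMergeJoinExample {
        public static void main(String[] args) throws Exception {
            // The hint asks Phoenix to stream-merge both sides of the join
            // instead of shipping a hash cache of the right-hand side to
            // every region server.
            String sql = "SELECT /*+ USE_SORT_MERGE_JOIN */ a.id, b.val "
                       + "FROM big_table a JOIN other_big_table b ON a.id = b.id";
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " " + rs.getString(2));
                }
            }
        }
    }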

Regards,

rafa

