Posted to dev@spark.apache.org by "angers.zhu" <an...@gmail.com> on 2020/03/23 11:21:30 UTC

Spark Thrift Server java vm problem need help

Hi developers,

  

These days I have been hitting a strange problem and I can't figure out why.

  

When I start a Spark Thrift Server with spark.driver.memory=64g and then use
jdk8/bin/jinfo <pid> to inspect the VM flags, I get the information below.

With a 64g heap, UseCompressedOops should be off by default, so why does the
Spark Thrift Server still show -XX:+UseCompressedOops?

    Non-default VM flags: -XX:CICompilerCount=15 -XX:-CMSClassUnloadingEnabled -XX:CMSFullGCsBeforeCompaction=0 -XX:CMSInitiatingOccupancyFraction=75 -XX:+CMSParallelRemarkEnabled -XX:-ClassUnloading -XX:+DisableExplicitGC -XX:ErrorFile=null -XX:-ExplicitGCInvokesConcurrentAndUnloadsClasses -XX:InitialHeapSize=2116026368 -XX:+ManagementServer -XX:MaxDirectMemorySize=8589934592 -XX:MaxHeapSize=6442450944 -XX:MaxNewSize=2147483648 -XX:MaxTenuringThreshold=6 -XX:MinHeapDeltaBytes=196608 -XX:NewSize=705298432 -XX:OldPLABSize=16 -XX:OldSize=1410727936 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:-TraceClassUnloading -XX:+UseCMSCompactAtFullCollection -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseFastUnorderedTimeStamps -XX:+UseParNewGC
    Command line:  -Xmx6g -Djava.library.path=/home/hadoop/hadoop/lib/native -Djavax.security.auth.useSubjectCredsOnly=false -Dcom.sun.management.jmxremote.port=9021 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -XX:MaxPermSize=1024m -XX:PermSize=256m -XX:MaxDirectMemorySize=8192m -XX:-TraceClassUnloading -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC -XX:+PrintTenuringDistribution -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 -Xnoclassgc -XX:+PrintGCDetails -XX:+PrintGCDateStamps 
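
For anyone who wants to double-check what jinfo reports, here is a minimal
sketch (assuming a HotSpot JDK; OopsCheck is a hypothetical class name, not
part of Spark) that asks the running JVM for the effective flag value and the
actual max heap:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    // Hypothetical helper, not Spark code: prints the effective
    // UseCompressedOops value and the real max heap of the current JVM.
    public class OopsCheck {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            System.out.println("UseCompressedOops = "
                + hotspot.getVMOption("UseCompressedOops").getValue());
            System.out.println("Max heap bytes    = "
                + Runtime.getRuntime().maxMemory());
        }
    }

Note that the flags above already show MaxHeapSize=6442450944 (6g), not 64g,
which matches the -Xmx6g on the command line.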

  

Since I am not an expert in the JVM, I hope someone can help.

  

  


  


Re: Spark Thrift Server java vm problem need help

Posted by Sean Owen <sr...@gmail.com>.
No, as I say, it seems to just generate a warning. Compressed oops can't be
used with a >= 32GB heap, so it just isn't used. That's why I am asking what
the problem is. Spark doesn't set this value as far as I can tell; maybe your
environment does. This is in any event not a Spark issue per se.

On Mon, Mar 23, 2020 at 9:40 AM angers.zhu <an...@gmail.com> wrote:

> If -Xmx is bigger than 32g, the VM will not use UseCompressedOops by
> default.
> Consider a case: if we set spark.driver.memory to 64g, set
> -XX:+UseCompressedOops in spark.driver.extraJavaOptions, and set
> SPARK_DAEMON_MEMORY=6g, then with the current code the VM gets a command
> line with -Xmx6g and -XX:+UseCompressedOops, and so it runs with compressed
> oops enabled.
>
> But since we set spark.driver.memory=64g, our JVM's max heap size should be
> 64g, yet we end up with a 6g heap and compressed oops. Wouldn't that be a
> problem?
>

Re: Spark Thrift Server java vm problem need help

Posted by "angers.zhu" <an...@gmail.com>.
If -Xmx is bigger than 32g, the VM will not use UseCompressedOops by default.

Consider a case: if we set spark.driver.memory to 64g, set
-XX:+UseCompressedOops in spark.driver.extraJavaOptions, and set
SPARK_DAEMON_MEMORY=6g, then with the current code the VM gets a command line
with -Xmx6g and -XX:+UseCompressedOops, and so it runs with compressed oops
enabled.

But since we set spark.driver.memory=64g, our JVM's max heap size should be
64g, yet we end up with a 6g heap and compressed oops. Wouldn't that be a
problem?

  


  

On 03/23/2020 22:32, Sean Owen <srowen@gmail.com> wrote:

> I'm still not sure if you are trying to enable it or disable it, and what
> the issue is?
>
> There is no logic in Spark that sets or disables this flag that I can see.



Re: Spark Thrift Server java vm problem need help

Posted by Sean Owen <sr...@gmail.com>.
I'm still not sure if you are trying to enable it or disable it, and what
the issue is?
There is no logic in Spark that sets or disables this flag that I can see.

On Mon, Mar 23, 2020 at 9:27 AM angers.zhu <an...@gmail.com> wrote:

> Hi Sean,
>
> Yea, I set -XX:+UseCompressedOops in the driver (you can see it in the
> command line), and these days we have more users, so I set
> spark.driver.memory to 64g. In the Non-default VM flags it should then show
> -XX:-UseCompressedOops, but it is still
> -XX:+UseCompressedOops.
>
> I have found the reason: SparkSubmitCommandBuilder.buildSparkSubmitCommand
> has logic like the below:
>
> if (isClientMode) {
>   // Figuring out where the memory value come from is a little tricky due to precedence.
>   // Precedence is observed in the following order:
>   // - explicit configuration (setConf()), which also covers --driver-memory cli argument.
>   // - properties file.
>   // - SPARK_DRIVER_MEMORY env variable
>   // - SPARK_MEM env variable
>   // - default value (1g)
>   // Take Thrift Server as daemon
>   String tsMemory =
>     isThriftServer(mainClass) ? System.getenv("SPARK_DAEMON_MEMORY") : null;
>   String memory = firstNonEmpty(tsMemory, config.get(SparkLauncher.DRIVER_MEMORY),
>     System.getenv("SPARK_DRIVER_MEMORY"), System.getenv("SPARK_MEM"), DEFAULT_MEM);
>   cmd.add("-Xmx" + memory);
>   addOptionString(cmd, driverDefaultJavaOptions);
>   addOptionString(cmd, driverExtraJavaOptions);
>   mergeEnvPathList(env, getLibPathEnvName(),
>     config.get(SparkLauncher.DRIVER_EXTRA_LIBRARY_PATH));
> }
>
>
> For the Spark Thrift Server, SPARK_DAEMON_MEMORY is used first, which is
> reasonable, but I am confused: if spark.driver.memory is bigger than 32g
> and SPARK_DAEMON_MEMORY is less than 32g, UseCompressedOops will still be
> enabled. Is that right?
>
> Do we need to modify this logic for the >32g case?
>
>
> By the way, I hit a problem like
> https://issues.apache.org/jira/browse/SPARK-27097, caused by this strange
> case.
>
> Thanks
>
>
> On 03/23/2020 21:43, Sean Owen <srowen@gmail.com> wrote:
>
> I don't think Spark sets UseCompressedOops in any defaults; are you
> setting it?
> It can't be used with heaps >= 32GB. It doesn't seem to cause an error if
> you set it with large heaps, just a warning.
> What's the problem?
>
> On Mon, Mar 23, 2020 at 6:21 AM angers.zhu <an...@gmail.com> wrote:
>
>> Hi developers,
>>
>> These days I have been hitting a strange problem and I can't figure out why.
>>
>> When I start a Spark Thrift Server with spark.driver.memory=64g and then
>> use jdk8/bin/jinfo <pid> to inspect the VM flags, I get the information
>> below. With a 64g heap, UseCompressedOops should be off by default, so why
>> does the Spark Thrift Server still show -XX:+UseCompressedOops?
>>
>> Non-default VM flags: -XX:CICompilerCount=15 -XX:-CMSClassUnloadingEnabled -XX:CMSFullGCsBeforeCompaction=0 -XX:CMSInitiatingOccupancyFraction=75 -XX:+CMSParallelRemarkEnabled -XX:-ClassUnloading -XX:+DisableExplicitGC -XX:ErrorFile=null -XX:-ExplicitGCInvokesConcurrentAndUnloadsClasses -XX:InitialHeapSize=2116026368 -XX:+ManagementServer -XX:MaxDirectMemorySize=8589934592 -XX:MaxHeapSize=6442450944 -XX:MaxNewSize=2147483648 -XX:MaxTenuringThreshold=6 -XX:MinHeapDeltaBytes=196608 -XX:NewSize=705298432 -XX:OldPLABSize=16 -XX:OldSize=1410727936 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:-TraceClassUnloading -XX:+UseCMSCompactAtFullCollection -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseFastUnorderedTimeStamps -XX:+UseParNewGC
>> Command line: -Xmx6g -Djava.library.path=/home/hadoop/hadoop/lib/native -Djavax.security.auth.useSubjectCredsOnly=false -Dcom.sun.management.jmxremote.port=9021 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -XX:MaxPermSize=1024m -XX:PermSize=256m -XX:MaxDirectMemorySize=8192m -XX:-TraceClassUnloading -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC -XX:+PrintTenuringDistribution -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 -Xnoclassgc -XX:+PrintGCDetails -XX:+PrintGCDateStamps
>>
>>
>> Since I am not an expert in the JVM, I hope someone can help.

Re: Spark Thrift Server java vm problem need help

Posted by "angers.zhu" <an...@gmail.com>.
Hi Sean,

  

Yea, I set -XX:+UseCompressedOops in the driver (you can see it in the command
line), and these days we have more users, so I set spark.driver.memory to 64g.
In the Non-default VM flags it should then show -XX:-UseCompressedOops, but it
is still -XX:+UseCompressedOops.

  

I have found the reason: SparkSubmitCommandBuilder.buildSparkSubmitCommand has
logic like the below:

    if (isClientMode) {  
      // Figuring out where the memory value come from is a little tricky due to precedence.  
      // Precedence is observed in the following order:  
      // - explicit configuration (setConf()), which also covers --driver-memory cli argument.  
      // - properties file.  
      // - SPARK_DRIVER_MEMORY env variable  
      // - SPARK_MEM env variable  
      // - default value (1g)  
      // Take Thrift Server as daemon  
      String tsMemory =  
        isThriftServer(mainClass) ? System.getenv("SPARK_DAEMON_MEMORY") : null;  
      String memory = firstNonEmpty(tsMemory, config.get(SparkLauncher.DRIVER_MEMORY),  
        System.getenv("SPARK_DRIVER_MEMORY"), System.getenv("SPARK_MEM"), DEFAULT_MEM);  
      cmd.add("-Xmx" + memory);  
      addOptionString(cmd, driverDefaultJavaOptions);  
      addOptionString(cmd, driverExtraJavaOptions);  
      mergeEnvPathList(env, getLibPathEnvName(),  
        config.get(SparkLauncher.DRIVER_EXTRA_LIBRARY_PATH));  
    }

  

For the Spark Thrift Server, SPARK_DAEMON_MEMORY is used first, which is
reasonable, but I am confused: if spark.driver.memory is bigger than 32g and
SPARK_DAEMON_MEMORY is less than 32g, UseCompressedOops will still be enabled.
Is that right?
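
To make the precedence concrete, here is a minimal standalone sketch
(hypothetical code, not Spark source; firstNonEmpty just mirrors the launcher
helper of the same name) showing that SPARK_DAEMON_MEMORY=6g wins over
spark.driver.memory=64g for the Thrift Server driver:

    // Hypothetical sketch of the precedence above; not Spark source code.
    public class MemoryPrecedence {
        // Mirrors the launcher's firstNonEmpty: first non-empty candidate wins.
        static String firstNonEmpty(String... candidates) {
            for (String c : candidates) {
                if (c != null && !c.isEmpty()) {
                    return c;
                }
            }
            return "1g"; // DEFAULT_MEM
        }

        public static void main(String[] args) {
            String tsMemory = "6g";      // SPARK_DAEMON_MEMORY (Thrift Server only)
            String driverMemory = "64g"; // spark.driver.memory / --driver-memory
            // Daemon memory is consulted first, so the driver gets -Xmx6g,
            // and a 6g heap keeps -XX:+UseCompressedOops legal.
            System.out.println("-Xmx" + firstNonEmpty(tsMemory, driverMemory));
        }
    }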

  

Do we need to modify this logic for the >32g case?

  

  

By the way, I hit a problem like
https://issues.apache.org/jira/browse/SPARK-27097, caused by this strange
case.

  

Thanks

  

  


  

On 03/23/2020 21:43, Sean Owen <srowen@gmail.com> wrote:

> I don't think Spark sets UseCompressedOops in any defaults; are you
> setting it?
> It can't be used with heaps >= 32GB. It doesn't seem to cause an error if
> you set it with large heaps, just a warning.
> What's the problem?


Re: Spark Thrift Server java vm problem need help

Posted by Sean Owen <sr...@gmail.com>.
I don't think Spark sets UseCompressedOops in any defaults; are you setting
it?
It can't be used with heaps >= 32GB. It doesn't seem to cause an error if
you set it with large heaps, just a warning.
What's the problem?
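
For context on why 32GB is the cutoff: compressed oops store references as
32-bit values scaled by the default 8-byte object alignment, so the largest
heap they can address is 2^32 * 8 bytes = 32 GiB. A quick sanity check of
that arithmetic (a hypothetical standalone snippet, not Spark code):

    // Back-of-the-envelope check of the ~32 GiB compressed-oops ceiling:
    // a 32-bit oop scaled by 8-byte object alignment addresses 2^35 bytes.
    public class OopsLimit {
        public static void main(String[] args) {
            long addressable = (1L << 32) * 8;                     // 34359738368 bytes
            System.out.println(addressable / (1L << 30) + " GiB"); // prints "32 GiB"
        }
    }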

On Mon, Mar 23, 2020 at 6:21 AM angers.zhu <an...@gmail.com> wrote:

> Hi developers,
>
> These days I have been hitting a strange problem and I can't figure out why.
>
> When I start a Spark Thrift Server with spark.driver.memory=64g and then
> use jdk8/bin/jinfo <pid> to inspect the VM flags, I get the information
> below. With a 64g heap, UseCompressedOops should be off by default, so why
> does the Spark Thrift Server still show -XX:+UseCompressedOops?
>
> Non-default VM flags: -XX:CICompilerCount=15 -XX:-CMSClassUnloadingEnabled -XX:CMSFullGCsBeforeCompaction=0 -XX:CMSInitiatingOccupancyFraction=75 -XX:+CMSParallelRemarkEnabled -XX:-ClassUnloading -XX:+DisableExplicitGC -XX:ErrorFile=null -XX:-ExplicitGCInvokesConcurrentAndUnloadsClasses -XX:InitialHeapSize=2116026368 -XX:+ManagementServer -XX:MaxDirectMemorySize=8589934592 -XX:MaxHeapSize=6442450944 -XX:MaxNewSize=2147483648 -XX:MaxTenuringThreshold=6 -XX:MinHeapDeltaBytes=196608 -XX:NewSize=705298432 -XX:OldPLABSize=16 -XX:OldSize=1410727936 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:-TraceClassUnloading -XX:+UseCMSCompactAtFullCollection -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseFastUnorderedTimeStamps -XX:+UseParNewGC
> Command line: -Xmx6g -Djava.library.path=/home/hadoop/hadoop/lib/native -Djavax.security.auth.useSubjectCredsOnly=false -Dcom.sun.management.jmxremote.port=9021 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -XX:MaxPermSize=1024m -XX:PermSize=256m -XX:MaxDirectMemorySize=8192m -XX:-TraceClassUnloading -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC -XX:+PrintTenuringDistribution -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 -Xnoclassgc -XX:+PrintGCDetails -XX:+PrintGCDateStamps
>
>
> Since I am not an expert in the JVM, I hope someone can help.
>