Posted to user@spark.apache.org by condor join <sp...@outlook.com> on 2016/05/30 02:17:11 UTC

Re: G1 GC takes too much time

The following are the parameters:
-XX:+UseG1GC
-XX:+UnlockDiagnosticVMOptions
-XX:+G1SummarizeConcMark
-XX:InitiatingHeapOccupancyPercent=35
spark.executor.memory=4G
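
For what it's worth, JVM flags like these only reach the executors if they are wired through spark.executor.extraJavaOptions; a minimal sketch of that wiring (assuming spark-defaults.conf) would be:

    spark.executor.memory            4g
    spark.executor.extraJavaOptions  -XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark -XX:InitiatingHeapOccupancyPercent=35

The same value can also be passed with --conf spark.executor.extraJavaOptions=... on spark-submit.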

________________________________
From: Ted Yu <yu...@gmail.com>
Sent: May 30, 2016 9:47:05
To: condor join
Cc: user@spark.apache.org
Subject: Re: G1 GC takes too much time

bq. It happens mostly during the Reduce part.

Did the above refer to the reduce operation?

Can you share your G1GC parameters (and heap size for workers)?

Thanks

On Sun, May 29, 2016 at 6:15 PM, condor join <sp...@outlook.com> wrote:
Hi,
my Spark application failed because it spent too much time in GC. Looking at the logs I found these things:
1. Young GC takes too much time, and no Full GC was observed;
2. Most of the time is spent in the object copy phase;
3. It happened more easily when there were not enough resources;
4. It happens mostly during the Reduce part.
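
In case it helps others reproduce this, per-phase timings such as the object copy time appear in G1 logs when the executors run with flags along these lines (a JDK 8 sketch; the log path is only illustrative):

    -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/executor_gc.log

appended to spark.executor.extraJavaOptions.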

Has anyone met the same problem?
thanks



---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: Re: G1 GC takes too much time

Posted by Ted Yu <yu...@gmail.com>.
Please consider reading G1GC tuning guide(s).
Here is an example:

http://product.hubspot.com/blog/g1gc-tuning-your-hbase-cluster
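
The knobs such guides usually suggest experimenting with look like the following (values here are placeholders, not recommendations; the right settings depend on heap size and workload):

    -XX:MaxGCPauseMillis=200
    -XX:ParallelGCThreads=8
    -XX:ConcGCThreads=2
    -XX:G1HeapRegionSize=16m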

On Sun, May 29, 2016 at 7:17 PM, condor join <sp...@outlook.com>
wrote:

> The following are the parameters:
> -XX:+UseG1GC
> -XX:+UnlockDiagnosticVMOptions
> -XX:+G1SummarizeConcMark
> -XX:InitiatingHeapOccupancyPercent=35
> spark.executor.memory=4G
>
> ------------------------------
> *From:* Ted Yu <yu...@gmail.com>
> *Sent:* May 30, 2016 9:47:05
> *To:* condor join
> *Cc:* user@spark.apache.org
> *Subject:* Re: G1 GC takes too much time
>
> bq. It happens mostly during the Reduce part.
>
> Did the above refer to the reduce operation?
>
> Can you share your G1GC parameters (and heap size for workers)?
>
> Thanks
>
> On Sun, May 29, 2016 at 6:15 PM, condor join <sp...@outlook.com>
> wrote:
>
>> Hi,
>> my Spark application failed because it spent too much time in GC. Looking
>> at the logs I found these things:
>> 1. Young GC takes too much time, and no Full GC was observed;
>> 2. Most of the time is spent in the object copy phase;
>> 3. It happened more easily when there were not enough resources;
>> 4. It happens mostly during the Reduce part.
>>
>> Has anyone met the same problem?
>> thanks
>>
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
>> For additional commands, e-mail: user-help@spark.apache.org
>>
>
>

Re: Re: G1 GC takes too much time

Posted by Sea <26...@qq.com>.
Yes, it seems that CMS is better. I tried G1 as Databricks' blog recommended, but it was too slow.
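
For comparison, a minimal CMS configuration for the executors (a sketch only; the occupancy fraction is a placeholder that would need tuning per workload) would be something like:

    spark.executor.extraJavaOptions  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly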




------------------ Original Message ------------------
From: "condor join" <sp...@outlook.com>
Sent: Monday, May 30, 2016 10:17 AM
To: "Ted Yu" <yu...@gmail.com>
Cc: "user@spark.apache.org" <us...@spark.apache.org>
Subject: Re: G1 GC takes too much time



The following are the parameters:
-XX:+UseG1GC
-XX:+UnlockDiagnosticVMOptions
-XX:+G1SummarizeConcMark
-XX:InitiatingHeapOccupancyPercent=35
spark.executor.memory=4G
 
 
 
 
From: Ted Yu <yu...@gmail.com>
Sent: May 30, 2016 9:47:05
To: condor join
Cc: user@spark.apache.org
Subject: Re: G1 GC takes too much time
 
bq. It happens mostly during the Reduce part.
 
Did the above refer to the reduce operation?
 
 
Can you share your G1GC parameters (and heap size for workers)?
 
 
 Thanks
 
 
On Sun, May 29, 2016 at 6:15 PM, condor join <sp...@outlook.com> wrote:
Hi, my Spark application failed because it spent too much time in GC. Looking at the logs I found these things:
1. Young GC takes too much time, and no Full GC was observed;
2. Most of the time is spent in the object copy phase;
3. It happened more easily when there were not enough resources;
4. It happens mostly during the Reduce part.
 
 
Has anyone met the same problem?
 thanks
 
 
 
 
 
 
 ---------------------------------------------------------------------
 To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
 For additional commands, e-mail: user-help@spark.apache.org