Posted to users@tomcat.apache.org by Techienote com <te...@gmail.com> on 2013/04/16 07:34:50 UTC

ParNew promotion failed in verbose GC logs

Hi Team,

Recently I have been seeing concurrent mode failure errors in my verbose GC logs.
To address this I have set NewSize to 512MB, but I am still seeing concurrent mode
failures in the verbose GC logs:

62230.611: [ParNew (promotion failed)
Desired survivor size 32768 bytes, new threshold 0 (max 0)
: 28481K->28481K(28608K), 0.0483728 secs]62230.659: [CMS (concurrent mode
failure)

Also Garbage collection is causing some large pauses. The largest pause was
121239 ms.

: 1255376K->215461K(2068480K), 121.1880176 secs]
1283830K->215461K(2097088K)Heap after gc invocations=12320:
 par new generation   total 28608K, used 0K [0x68800000, 0x6a400000,
0x6a400000)
  eden space 28544K,   0% used [0x68800000, 0x68800000, 0x6a3e0000)
  from space 64K,   0% used [0x6a3e0000, 0x6a3e0000, 0x6a3f0000)
  to   space 64K,   0% used [0x6a3f0000, 0x6a3f0000, 0x6a400000)
 concurrent mark-sweep generation total 2068480K, used 215461K [0x6a400000,
0xe8800000, 0xe8800000)
 concurrent-mark-sweep perm gen total 88496K, used 55091K [0xe8800000,
0xede6c000, 0xf8800000)
}
, 121.2390524 secs]

Following are my JVM arguments:
-server -verbose:gc -Xmx2048m -Xms2048m -XX:NewSize=512m
-XX:MaxNewSize=512m -XX:MaxPermSize=256M -XX:+UseConcMarkSweepGC
-XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled
-Dsun.rmi.dgc.client.gcInterval=3600000
-Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+DisableExplicitGC
-XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnCtrlBreak
-XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintClassHistogram
-XX:+PrintGCDetails -XX:+PrintTenuringDistribution
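
For reference, a minimal sketch of how these could be placed in bin/setenv.sh,
assuming Tomcat is started via catalina.sh (an illustrative example only, not
our exact startup script):

  # bin/setenv.sh -- sketch only; catalina.sh sources this file if it exists
  CATALINA_OPTS="-server -Xms2048m -Xmx2048m \
    -XX:NewSize=512m -XX:MaxNewSize=512m -XX:MaxPermSize=256M \
    -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled \
    -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails \
    -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution"
  export CATALINA_OPTS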

Tomcat Version
6.0.36

JDK Version
Sun HotSpot 1.5.0.22

CPU
Number of Physical processor 1
Number of Virtual processor 7

RAM
6144MB

OS
SunOS 5.10 Generic_147440-09 sun4v sparc sun4v

Do you have any idea how to tune it further?



Regards,

Vidyadhar

Re: ParNew promotion failed in verbose GC logs

Posted by Pïd stèr <pi...@pidster.com>.
On 16 Apr 2013, at 09:23, Techienote com <te...@gmail.com> wrote:

> On Tue, Apr 16, 2013 at 12:33 PM, Pïd stèr <pi...@pidster.com> wrote:
>
>> On 16 Apr 2013, at 06:35, Techienote com <te...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> Recently I am seeing concurrent mode failure errors in my Verbose GC
>> logs.
>>> For the same I have set NewSize to 512MB, still I am seeing concurrent
>> mode
>>> failure in the Verbose GC logs
>>
>> Where have you set these JVM attributes and how are you starting Tomcat?
>>
>> Is the log below from after you set NewSize to 512 or before? It
>> doesn't look like it is set to me.
> After changing the NewSize to 512. Can you please confirm how you come to
> know that the size is not set.

The incomplete log below says "eden space 28544K", which is somewhat
less than I'd expect for a NewSize of 512M.
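
With -XX:NewSize=512m the young generation should report roughly 524288K in
total rather than 28608K, so it looks as if the flag never reached the JVM. A
quick way to check what the running process actually got (the pid below is
hypothetical; pargs is Solaris-specific, and jmap -heap should exist on Sun
JDK 5, but verify on your build):

  ps -ef | grep org.apache.catalina.startup.Bootstrap   # find the Tomcat pid
  pargs 12345    # full argument list: is -XX:NewSize=512m really being passed?
  jmap -heap 12345   # prints the heap configuration the JVM is actually using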



>> What does your app do?
> This is document uploading and authorizing applicaton.
>
>
> When did this start happening, was it after a specific app update?
> I am seeing this after changing the default policy to CMS
>
>>
>> Have you observed load increasing due to user activity?
> Concurrent # of users are 12
>
> Regards,
> Vidyadhar
>
>>
>>
>> p
>
>
>
>>
>>
>>> 62230.611: [ParNew (promotion failed)
>>> Desired survivor size 32768 bytes, new threshold 0 (max 0)
>>> : 28481K->28481K(28608K), 0.0483728 secs]62230.659: [CMS (concurrent mode
>>> failure)
>>>
>>> Also Garbage collection is causing some large pauses. The largest pause
>> was
>>> 121239 ms.
>>>
>>> : 1255376K->215461K(2068480K), 121.1880176 secs]
>>> 1283830K->215461K(2097088K)Heap after gc invocations=12320:
>>> par new generation   total 28608K, used 0K [0x68800000, 0x6a400000,
>>> 0x6a400000)
>>> eden space 28544K,   0% used [0x68800000, 0x68800000, 0x6a3e0000)
>>> from space 64K,   0% used [0x6a3e0000, 0x6a3e0000, 0x6a3f0000)
>>> to   space 64K,   0% used [0x6a3f0000, 0x6a3f0000, 0x6a400000)
>>> concurrent mark-sweep generation total 2068480K, used 215461K
>> [0x6a400000,
>>> 0xe8800000, 0xe8800000)
>>> concurrent-mark-sweep perm gen total 88496K, used 55091K [0xe8800000,
>>> 0xede6c000, 0xf8800000)
>>> }
>>> , 121.2390524 secs]
>>>
>>> Following is my JVM argument
>>> -server -verbose:gc -Xmx2048m -Xms2048m -XX:NewSize=512m
>>> -XX:MaxNewSize=512m -XX:MaxPermSize=256M -XX:+UseConcMarkSweepGC
>>> -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled
>>> -Dsun.rmi.dgc.client.gcInterval=3600000
>>> -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+DisableExplicitGC
>>> -XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnCtrlBreak
>>> -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintClassHistogram
>>> -XX:+PrintGCDetails -XX:+PrintTenuringDistribution
>>>
>>> Tomcat Version
>>> 6.0.36
>>>
>>> JDK Version
>>> Sun HotSpot 1.5.0.22
>>>
>>> CPU
>>> Number of Physical processor 1
>>> Number of Virtual processor 7
>>>
>>> RAM
>>> 6144MB
>>>
>>> OS
>>> SunOS 5.10 Generic_147440-09 sun4v sparc sun4v
>>>
>>> Do you have any idea how to tune it further?
>>>
>>>
>>>
>>> Regards,
>>>
>>> Vidyadhar
>>


Re: ParNew promotion failed in verbose GC logs

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Vidyadhar,

On 4/17/13 2:22 PM, Techienote com wrote:
> We are planning to upgrade Tomcat along with the JVM version. It
> is in process, but before that we need to stabilize it on Tomcat 6

Tomcat is definitely not the problem, here. You can run Tomcat 5.0 on
Java 6 or 7 just fine.

-chris



Re: ParNew promotion failed in verbose GC logs

Posted by Techienote com <te...@gmail.com>.
Chris,
First of all, thanks for the info.
On Wed, Apr 17, 2013 at 11:31 PM, Christopher Schultz <
chris@christopherschultz.net> wrote:

>
> Vidyadhar,
>
> On 4/17/13 10:56 AM, Techienote com wrote:
> > Chris,
> >
> > On Wed, Apr 17, 2013 at 1:11 AM, Christopher Schultz <
> > chris@christopherschultz.net> wrote:
> >
> > Vidyadhar,
> >
> > On 4/16/13 1:14 PM, Techienote com wrote:
> >>>> With default setting we were getting frequent OOM errors.
> >>>> After analyzing the heap dump we found
> >>>> org.apache.poi.hssf.usermodel.HSSFSheet is accumulating more
> >>>> heap memory. As per the application development this is the
> >>>> normal behavior and have suggested to increase the maximum
> >>>> heap size to 2048MB
> >
> > So, you keep lots of spreadsheets in memory for some reason? I
> > can't imagine that loading a Microsoft Excel document into memory
> > and keeping it in what POI calls "horrible spreadsheet format" is
> > the best way to keep that information around. I suppose only /you/
> > know you requirements.
> >
> > Just how many spreadsheets do you need to keep in memory?
> >
> >> Out of 2048 MB, 1536 MB is getting used by HSSFSheet. I am saying
> >> it after seeing the heap dump. I am not a developer and I do have
> >> only basic knowledge about Java.
>
> ...but you should know how many spreadsheets you intend to have
> loaded. Is that a single spreadsheet taking-up 1.5GiB? I'm trying to
> find out why that object is in memory /at all/.
>

Every spreadsheet is about 64MB, so around 24 spreadsheets at a time (24 x 64MB = 1536MB, which matches what the heap dump shows).

>
> >>>> After increasing the max heap size we were seeing some large
> >>>> GC pauses for the same we tried to change the JVM policy to
> >>>> CMS and added following parameters
> >>>>
> >>>> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
> >>>> -XX:+CMSParallelRemarkEnabled
> >
> > Did you enable verbose GC logging before/after you enabled those
> > options? Did it help anything? Do you have any idea *why* your GCs
> > were taking so long, or did you just Google for "java gc is taking
> > a long time" and enable those options because they were
> > "recommended" by someone?
> >
> >> I have enabled GC logging before doing any changes. After
> >> observing the GC i have changed the GC policy to CMS as it is
> >> best for throughput. Also after changing it to CMS pauses has
> >> been reduced.
>
> You might try reading about CMS "goals" if pause-time is a problem.
> Given that the stop-the-world pauses should be short, I'm curious as
> to why you are experiencing pauses at all. A GC run, even if it takes
> 2 minutes to complete, shouldn't stop-the-world for 2 whole minutes.
>

Agreed on this point, but as per my observation the pauses have been reduced
after implementing CMS.


>
> >>>> Since then the long pauses reduced from 112 seconds to 90
> >>>> seconds.
> >
> > Without seeing your data, I would guess that it's only a
> > coincidence that your pauses have decreased in duration: you have
> > likely not had any improvement by changing the GC configuration.
> >
> >
> >> Earlier user is complaining about Application slowness. After
> >> changing the algorithm we have not observed any slowness issues.
> >> Earlier at the time of slowness issue we have observed GC pauses.
> >> I am saying this because at the time of issue there were many
> >> Minor GC call have been observed in GC logs. Note I am not expert
> >> in this I am just saying it after seeing the data.
>
> Minor GCs should happen all the time: it's completely normal. They
> should also be very fast because they are only dealing with the young
> generation of objects. The tenured generations do take longer to
> collect as a) they are usually bigger and b) they usually have a
> greater percentage of objects surviving the GC operation.
>

Agree

>
> >>>> Also we have seeing regular permanent generation concurrent
> >>>> mark failure which got reduced after changing NewSize to
> >>>> 512MB.
> >
> > Well, the NewSize shouldn't have any bearing on anything happening
> > in PermGen, other than maybe allowing OutOfMemoryErrors to occur if
> > you overfill PermGen. But that's not happening, here.
> >
> >>>> -Dsun.rmi.dgc.client.gcInterval=3600000
> >>>> -Dsun.rmi.dgc.server.gcInterval=3600000
> >>>> -XX:+DisableExplicitGC
> >
> > If I understand correctly (and I don't claim to be a GC ergonomics
> > expert), those options are mutually-exclusive. Disabling explicit
> > GC should disable the RMI's use of .. explicit garbage-collection.
> > So, if you really are using RMI, disabling explicit
> > garbage-collection can ruin everything. [See
> > http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html#1.1.Other%20Considerations|outline ].
> >
> > Have you tried with a supported version of a Java VM?
> >
> >
> >> Will check the same and update accordingly.
>
> Before you spend a lot of time debugging GC on an old version of Java,
> I recommend that you test your app against a newer version. It may be
> that you are exercising a bug in the JVM and that Java 7, for
> instance, runs in an out-of-the-box configuration with none of the ill
> effects you describe above.
>

We are planning to upgrade Tomcat along with the JVM version. It is in
process, but before that we need to stabilize it on Tomcat 6

>
> -chris
>
Vidyadhar

Re: ParNew promotion failed in verbose GC logs

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Vidyadhar,

On 4/17/13 10:56 AM, Techienote com wrote:
> Chris,
> 
> On Wed, Apr 17, 2013 at 1:11 AM, Christopher Schultz < 
> chris@christopherschultz.net> wrote:
> 
> Vidyadhar,
> 
> On 4/16/13 1:14 PM, Techienote com wrote:
>>>> With default setting we were getting frequent OOM errors.
>>>> After analyzing the heap dump we found 
>>>> org.apache.poi.hssf.usermodel.HSSFSheet is accumulating more
>>>> heap memory. As per the application development this is the
>>>> normal behavior and have suggested to increase the maximum
>>>> heap size to 2048MB
> 
> So, you keep lots of spreadsheets in memory for some reason? I
> can't imagine that loading a Microsoft Excel document into memory
> and keeping it in what POI calls "horrible spreadsheet format" is
> the best way to keep that information around. I suppose only /you/
> know you requirements.
> 
> Just how many spreadsheets do you need to keep in memory?
> 
>> Out of 2048 MB, 1536 MB is getting used by HSSFSheet. I am saying
>> it after seeing the heap dump. I am not a developer and I do have
>> only basic knowledge about Java.

...but you should know how many spreadsheets you intend to have
loaded. Is that a single spreadsheet taking-up 1.5GiB? I'm trying to
find out why that object is in memory /at all/.

>>>> After increasing the max heap size we were seeing some large
>>>> GC pauses for the same we tried to change the JVM policy to
>>>> CMS and added following parameters
>>>> 
>>>> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
>>>> -XX:+CMSParallelRemarkEnabled
> 
> Did you enable verbose GC logging before/after you enabled those 
> options? Did it help anything? Do you have any idea *why* your GCs 
> were taking so long, or did you just Google for "java gc is taking
> a long time" and enable those options because they were
> "recommended" by someone?
> 
>> I have enabled GC logging before doing any changes. After
>> observing the GC i have changed the GC policy to CMS as it is
>> best for throughput. Also after changing it to CMS pauses has
>> been reduced.

You might try reading about CMS "goals" if pause-time is a problem.
Given that the stop-the-world pauses should be short, I'm curious as
to why you are experiencing pauses at all. A GC run, even if it takes
2 minutes to complete, shouldn't stop-the-world for 2 whole minutes.
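
If the long pauses really do line up with the "concurrent mode failure"
entries, the usual (hedged) suggestion is to start the CMS cycle earlier so
the old generation never fills up before the concurrent collection finishes.
Something along these lines is commonly quoted; I have not checked which of
these flags exist on a 1.5.0_22 VM, so verify before relying on them:

  # start the CMS cycle at ~70% old-gen occupancy instead of the JVM's own heuristic
  -XX:CMSInitiatingOccupancyFraction=70
  -XX:+UseCMSInitiatingOccupancyOnly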

>>>> Since then the long pauses reduced from 112 seconds to 90
>>>> seconds.
> 
> Without seeing your data, I would guess that it's only a
> coincidence that your pauses have decreased in duration: you have
> likely not had any improvement by changing the GC configuration.
> 
> 
>> Earlier user is complaining about Application slowness. After
>> changing the algorithm we have not observed any slowness issues.
>> Earlier at the time of slowness issue we have observed GC pauses.
>> I am saying this because at the time of issue there were many
>> Minor GC call have been observed in GC logs. Note I am not expert
>> in this I am just saying it after seeing the data.

Minor GCs should happen all the time: it's completely normal. They
should also be very fast because they are only dealing with the young
generation of objects. The tenured generations do take longer to
collect as a) they are usually bigger and b) they usually have a
greater percentage of objects surviving the GC operation.

>>>> Also we have seeing regular permanent generation concurrent
>>>> mark failure which got reduced after changing NewSize to
>>>> 512MB.
> 
> Well, the NewSize shouldn't have any bearing on anything happening
> in PermGen, other than maybe allowing OutOfMemoryErrors to occur if
> you overfill PermGen. But that's not happening, here.
> 
>>>> -Dsun.rmi.dgc.client.gcInterval=3600000 
>>>> -Dsun.rmi.dgc.server.gcInterval=3600000
>>>> -XX:+DisableExplicitGC
> 
> If I understand correctly (and I don't claim to be a GC ergonomics 
> expert), those options are mutually-exclusive. Disabling explicit
> GC should disable the RMI's use of .. explicit garbage-collection.
> So, if you really are using RMI, disabling explicit
> garbage-collection can ruin everything. [See
> http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html#1.1.Other%20Considerations|outline ].
> 
> Have you tried with a supported version of a Java VM?
> 
> 
>> Will check the same and update accordingly.

Before you spend a lot of time debugging GC on an old version of Java,
I recommend that you test your app against a newer version. It may be
that you are exercising a bug in the JVM and that Java 7, for
instance, runs in an out-of-the-box configuration with none of the ill
effects you describe above.

-chris



Re: ParNew promotion failed in verbose GC logs

Posted by Techienote com <te...@gmail.com>.
Chris,

On Wed, Apr 17, 2013 at 1:11 AM, Christopher Schultz <
chris@christopherschultz.net> wrote:

>
> Vidyadhar,
>
> On 4/16/13 1:14 PM, Techienote com wrote:
> > With default setting we were getting frequent OOM errors. After
> > analyzing the heap dump we found
> > org.apache.poi.hssf.usermodel.HSSFSheet is accumulating more heap
> > memory. As per the application development this is the normal
> > behavior and have suggested to increase the maximum heap size to
> > 2048MB
>
> So, you keep lots of spreadsheets in memory for some reason? I can't
> imagine that loading a Microsoft Excel document into memory and
> keeping it in what POI calls "horrible spreadsheet format" is the best
> way to keep that information around. I suppose only /you/ know you
> requirements.
>
> Just how many spreadsheets do you need to keep in memory?
>
Out of 2048 MB, 1536 MB is being used by HSSFSheet. I am saying this after
seeing the heap dump. I am not a developer and I have only basic knowledge
of Java.

>
> > After increasing the max heap size we were seeing some large GC
> > pauses for the same we tried to change the JVM policy to CMS and
> > added following parameters
> >
> > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
> > -XX:+CMSParallelRemarkEnabled
>
> Did you enable verbose GC logging before/after you enabled those
> options? Did it help anything? Do you have any idea *why* your GCs
> were taking so long, or did you just Google for "java gc is taking a
> long time" and enable those options because they were "recommended" by
> someone?
>
I had enabled GC logging before making any changes. After observing the GC
I changed the GC policy to CMS, as it is best for throughput. Also, after
changing to CMS the pauses have been reduced.

>
> > Since then the long pauses reduced from 112 seconds to 90 seconds.
>
> Without seeing your data, I would guess that it's only a coincidence
> that your pauses have decreased in duration: you have likely not had
> any improvement by changing the GC configuration.
>

Earlier, users were complaining about application slowness. After changing the
algorithm we have not observed any slowness issues. Earlier, at the time of the
slowness issue, we observed GC pauses. I am saying this because at the time of
the issue there were many minor GC calls observed in the GC logs. Note I am not
an expert in this; I am just saying it after seeing the data.

>
> > Also we have seeing regular permanent generation concurrent mark
> > failure which got reduced after changing NewSize to 512MB.
>
> Well, the NewSize shouldn't have any bearing on anything happening in
> PermGen, other than maybe allowing OutOfMemoryErrors to occur if you
> overfill PermGen. But that's not happening, here.
>
> > -Dsun.rmi.dgc.client.gcInterval=3600000
> > -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+DisableExplicitGC
>
> If I understand correctly (and I don't claim to be a GC ergonomics
> expert), those options are mutually-exclusive. Disabling explicit GC
> should disable the RMI's use of .. explicit garbage-collection. So, if
> you really are using RMI, disabling explicit garbage-collection can
> ruin everything. [See
>
> http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html#1.1.Other%20Considerations|outline
> ].
>
> Have you tried with a supported version of a Java VM?
>

Will check the same and update accordingly.



>
> -chris
>
Regards,
Vidyadhar

Re: ParNew promotion failed in verbose GC logs

Posted by Pïd stèr <pi...@pidster.com>.
On 16 Apr 2013, at 20:42, Christopher Schultz
<ch...@christopherschultz.net> wrote:

>
> Vidyadhar,
>
> On 4/16/13 1:14 PM, Techienote com wrote:
>> With default setting we were getting frequent OOM errors. After
>> analyzing the heap dump we found
>> org.apache.poi.hssf.usermodel.HSSFSheet is accumulating more heap
>> memory. As per the application development this is the normal
>> behavior and have suggested to increase the maximum heap size to
>> 2048MB
>
> So, you keep lots of spreadsheets in memory for some reason? I can't
> imagine that loading a Microsoft Excel document into memory and
> keeping it in what POI calls "horrible spreadsheet format" is the best
> way to keep that information around. I suppose only /you/ know you
> requirements.
>
> Just how many spreadsheets do you need to keep in memory?
>
>> After increasing the max heap size we were seeing some large GC
>> pauses for the same we tried to change the JVM policy to CMS and
>> added following parameters
>>
>> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
>> -XX:+CMSParallelRemarkEnabled
>
> Did you enable verbose GC logging before/after you enabled those
> options? Did it help anything? Do you have any idea *why* your GCs
> were taking so long, or did you just Google for "java gc is taking a
> long time" and enable those options because they were "recommended" by
> someone?
>
>> Since then the long pauses reduced from 112 seconds to 90 seconds.
>
> Without seeing your data, I would guess that it's only a coincidence
> that your pauses have decreased in duration: you have likely not had
> any improvement by changing the GC configuration.
>
>> Also we have seeing regular permanent generation concurrent mark
>> failure which got reduced after changing NewSize to 512MB.
>
> Well, the NewSize shouldn't have any bearing on anything happening in
> PermGen, other than maybe allowing OutOfMemoryErrors to occur if you
> overfill PermGen. But that's not happening, here.

+1

Did CMS in 1.5 actually collect in PermGen? (Genuinely don't know the
answer to that)


p



>
>> -Dsun.rmi.dgc.client.gcInterval=3600000
>> -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+DisableExplicitGC
>
> If I understand correctly (and I don't claim to be a GC ergonomics
> expert), those options are mutually-exclusive. Disabling explicit GC
> should disable the RMI's use of .. explicit garbage-collection. So, if
> you really are using RMI, disabling explicit garbage-collection can
> ruin everything. [See
> http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html#1.1.Other%20Considerations|outline].
>
> Have you tried with a supported version of a Java VM?
>
> -chris
>



Re: ParNew promotion failed in verbose GC logs

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Vidyadhar,

On 4/16/13 1:14 PM, Techienote com wrote:
> With default setting we were getting frequent OOM errors. After
> analyzing the heap dump we found
> org.apache.poi.hssf.usermodel.HSSFSheet is accumulating more heap
> memory. As per the application development this is the normal
> behavior and have suggested to increase the maximum heap size to 
> 2048MB

So, you keep lots of spreadsheets in memory for some reason? I can't
imagine that loading a Microsoft Excel document into memory and
keeping it in what POI calls "horrible spreadsheet format" is the best
way to keep that information around. I suppose only /you/ know your
requirements.

Just how many spreadsheets do you need to keep in memory?

> After increasing the max heap size we were seeing some large GC
> pauses for the same we tried to change the JVM policy to CMS and
> added following parameters
> 
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
> -XX:+CMSParallelRemarkEnabled

Did you enable verbose GC logging before/after you enabled those
options? Did it help anything? Do you have any idea *why* your GCs
were taking so long, or did you just Google for "java gc is taking a
long time" and enable those options because they were "recommended" by
someone?

> Since then the long pauses reduced from 112 seconds to 90 seconds.

Without seeing your data, I would guess that it's only a coincidence
that your pauses have decreased in duration: you have likely not had
any improvement by changing the GC configuration.

> Also we have seeing regular permanent generation concurrent mark 
> failure which got reduced after changing NewSize to 512MB.

Well, the NewSize shouldn't have any bearing on anything happening in
PermGen, other than maybe allowing OutOfMemoryErrors to occur if you
overfill PermGen. But that's not happening, here.

> -Dsun.rmi.dgc.client.gcInterval=3600000 
> -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+DisableExplicitGC

If I understand correctly (and I don't claim to be a GC ergonomics
expert), those options are mutually-exclusive. Disabling explicit GC
should disable RMI's use of explicit garbage-collection. So, if
you really are using RMI, disabling explicit garbage-collection can
ruin everything. [See
http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html#1.1.Other%20Considerations|outline].
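
In other words, the two halves of that line should probably be made
consistent one way or the other. A sketch of the two self-consistent
alternatives (illustrative only, not a recommendation for your app):

  # Option A: keep the hourly RMI-driven full collections (they go through
  #           System.gc(), so explicit GC must stay enabled):
  -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000

  # Option B: suppress explicit GC entirely and drop the RMI interval settings,
  #           leaving distributed-GC cleanup to the normal collection cycle:
  -XX:+DisableExplicitGC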

Have you tried with a supported version of a Java VM?

-chris



Re: ParNew promotion failed in verbose GC logs

Posted by Techienote com <te...@gmail.com>.
On Tue, Apr 16, 2013 at 3:00 PM, André Warnier <aw...@ice-sa.com> wrote:

>  Techienote com wrote:
>
>> On Tue, Apr 16, 2013 at 12:33 PM, Pïd stèr <pi...@pidster.com> wrote:
>>
>> On 16 Apr 2013, at 06:35, Techienote com <te...@gmail.com>
>>> wrote:
>>>
>>> Hi Team,
>>>>
>>>> Recently I am seeing concurrent mode failure errors in my Verbose GC
>>>>
>>> logs.
>>>
>>>> For the same I have set NewSize to 512MB, still I am seeing concurrent
>>>>
>>> mode
>>>
>>>> failure in the Verbose GC logs
>>>>
>>> Where have you set these JVM attributes and how are you starting Tomcat?
>>>
>>> Is the log below from after you set NewSize to 512 or before? It
>>> doesn't look like it is set to me.
>>>
>>> After changing the NewSize to 512. Can you please confirm how you come to
>> know that the size is not set.
>>
>> What does your app do?
>>>
>>> This is document uploading and authorizing applicaton.
>>
>>
>> When did this start happening, was it after a specific app update?
>> I am seeing this after changing the default policy to CMS
>>
>> Have you observed load increasing due to user activity?
>>>
>>> Concurrent # of users are 12
>>
>> Regards,
>> Vidyadhar
>>
>>
>>> p
>>>
>>>
>>
>>
>>
>>> 62230.611: [ParNew (promotion failed)
>>>> Desired survivor size 32768 bytes, new threshold 0 (max 0)
>>>> : 28481K->28481K(28608K), 0.0483728 secs]62230.659: [CMS (concurrent
>>>> mode
>>>> failure)
>>>>
>>>> Also Garbage collection is causing some large pauses. The largest pause
>>>>
>>> was
>>>
>>>> 121239 ms.
>>>>
>>>> : 1255376K->215461K(2068480K), 121.1880176 secs]
>>>> 1283830K->215461K(2097088K)Heap after gc invocations=12320:
>>>> par new generation   total 28608K, used 0K [0x68800000, 0x6a400000,
>>>> 0x6a400000)
>>>>  eden space 28544K,   0% used [0x68800000, 0x68800000, 0x6a3e0000)
>>>>  from space 64K,   0% used [0x6a3e0000, 0x6a3e0000, 0x6a3f0000)
>>>>  to   space 64K,   0% used [0x6a3f0000, 0x6a3f0000, 0x6a400000)
>>>> concurrent mark-sweep generation total 2068480K, used 215461K
>>>>
>>> [0x6a400000,
>>>
>>>> 0xe8800000, 0xe8800000)
>>>> concurrent-mark-sweep perm gen total 88496K, used 55091K [0xe8800000,
>>>> 0xede6c000, 0xf8800000)
>>>> }
>>>> , 121.2390524 secs]
>>>>
>>>> Following is my JVM argument
>>>> -server -verbose:gc -Xmx2048m -Xms2048m -XX:NewSize=512m
>>>> -XX:MaxNewSize=512m -XX:MaxPermSize=256M -XX:+UseConcMarkSweepGC
>>>> -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled
>>>> -Dsun.rmi.dgc.client.gcInterval=3600000
>>>> -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+DisableExplicitGC
>>>> -XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnCtrlBreak
>>>> -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintClassHistogram
>>>> -XX:+PrintGCDetails -XX:+PrintTenuringDistribution
>>>>
>>>> Tomcat Version
>>>> 6.0.36
>>>>
>>>> JDK Version
>>>> Sun HotSpot 1.5.0.22
>>>>
>>>> CPU
>>>> Number of Physical processor 1
>>>> Number of Virtual processor 7
>>>>
>>>> RAM
>>>> 6144MB
>>>>
>>>> OS
>>>> SunOS 5.10 Generic_147440-09 sun4v sparc sun4v
>>>>
>>>> Do you have any idea how to tune it further?
>>>>
>>>>
> I think that the question should be asked : with an application that has
> only 12 concurrent users, is there any particular reason why you *need* to
> specifiy all these parameters ?
> By default, without any of these memory and GC-related parameters, the JVM
> will use default values which are pre-tuned by real specialists to be
> *reasonable* for most applications.
> I run about 20 Tomcat servers, with a variety of applications and loads.
>  The only non-default parameters I ever specify, other than for specific
> debugging, are "-Xms" and "-Xmx", and these servers are doing fine as far
> as I can tell.
> By specifying all these parameters, is it possible that you are in fact
> doing the very opposite from "tuning", and that you are completely
> "de-tuning" the JVM ?
> If you just remove them all, and specify only
> "-Xmx2048m -Xms2048m", what happens ?
>


>
> "Premature optimization is the root of all evil"
> http://en.wikiquote.org/wiki/Donald_Knuth
>
>
>
 André,



With the default settings we were getting frequent OOM errors. After analyzing
the heap dump we found that org.apache.poi.hssf.usermodel.HSSFSheet is
accumulating a large amount of heap memory. As per the application development
team this is normal behavior, and they have suggested increasing the maximum
heap size to 2048MB.



After increasing the max heap size we were seeing some large GC pauses, so we
tried changing the JVM policy to CMS and added the following parameters:

-XX:+UseConcMarkSweepGC

-XX:+UseParNewGC

-XX:+CMSParallelRemarkEnabled



Since then the long pauses have reduced from 112 seconds to 90 seconds. Also,
we have been seeing regular permanent generation concurrent mark failures,
which reduced after changing NewSize to 512MB.



Pid,



Sorry I uploaded the wrong logs.



Currently I am seeing the following lines in the verbose GC logs:



22794.160: [Full GC {Heap before gc invocations=127:

 par new generation   total 523840K, used 105973K [0x68800000, 0x88800000,
0x88800000)

  eden space 523392K,  20% used [0x68800000, 0x6ef7d7f0, 0x88720000)

  from space 448K,   0% used [0x88790000, 0x88790000, 0x88800000)

  to   space 448K,   0% used [0x88720000, 0x88720000, 0x88790000)

 concurrent mark-sweep generation total 1572864K, used 680757K [0x88800000,
0xe8800000, 0xe8800000)

 concurrent-mark-sweep perm gen total 48768K, used 48585K [0xe8800000,
0xeb7a0000, 0xf8800000)

22794.161: [CMS (concurrent mode failure)[Unloading class
sun.reflect.GeneratedSerializationConstructorAccessor66]

.........

.........

: 680757K->241137K(1572864K), 21.0734562 secs] 786731K->241137K(2096704K),
[CMS Perm : 48585K->47991K(48768K)]Heap after gc invocations=128:

 par new generation   total 523840K, used 0K [0x68800000, 0x88800000,
0x88800000)

  eden space 523392K,   0% used [0x68800000, 0x68800000, 0x88720000)

  from space 448K,   0% used [0x88790000, 0x88790000, 0x88800000)

  to   space 448K,   0% used [0x88720000, 0x88720000, 0x88790000)

 concurrent mark-sweep generation total 1572864K, used 241137K [0x88800000,
0xe8800000, 0xe8800000)

 concurrent-mark-sweep perm gen total 79992K, used 47991K [0xe8800000,
0xed61e000, 0xf8800000)

}

, 21.0823116 secs]
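
Looking at the "Heap before gc" section above, the perm gen looks essentially
full (48585K used of 48768K committed), so is that what is pushing CMS into
the concurrent mode failure here? Would pre-sizing it and enabling the CMS
perm gen flags below help, or am I reading this wrong? (Flag names as commonly
documented for 1.5/1.6-era CMS; I have not verified them on this exact build.)

  -XX:PermSize=128m -XX:MaxPermSize=256m
  -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled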

Re: ParNew promotion failed in verbose GC logs

Posted by "Howard W. Smith, Jr." <sm...@gmail.com>.
On Tue, Apr 16, 2013 at 7:11 AM, David kerber <dc...@verizon.net> wrote:

> On 4/16/2013 5:30 AM, André Warnier wrote:
>
>
>  "Premature optimization is the root of all evil"
>> http://en.wikiquote.org/wiki/Donald_Knuth
>>
>
> No doubt; I learned that one long ago.  Get it working correctly first,
> and only then start trying to optimize pieces that aren't working well
> enough.
>

Wow, good catch, David (and thank you for the LOL)! I'm learning, and for at
least the last 6 months have been doing my best trying 'not' to optimize
prematurely! Still working on that one, too. :)

Re: ParNew promotion failed in verbose GC logs

Posted by David kerber <dc...@verizon.net>.
On 4/16/2013 5:30 AM, André Warnier wrote:

...

> "Premature optimization is the root of all evil"
> http://en.wikiquote.org/wiki/Donald_Knuth

No doubt; I learned that one long ago.  Get it working correctly first, 
and only then start trying to optimize pieces that aren't working well 
enough.






Re: ParNew promotion failed in verbose GC logs

Posted by André Warnier <aw...@ice-sa.com>.
Techienote com wrote:
> On Tue, Apr 16, 2013 at 12:33 PM, Pïd stèr <pi...@pidster.com> wrote:
> 
>> On 16 Apr 2013, at 06:35, Techienote com <te...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> Recently I am seeing concurrent mode failure errors in my Verbose GC
>> logs.
>>> For the same I have set NewSize to 512MB, still I am seeing concurrent
>> mode
>>> failure in the Verbose GC logs
>> Where have you set these JVM attributes and how are you starting Tomcat?
>>
>> Is the log below from after you set NewSize to 512 or before? It
>> doesn't look like it is set to me.
>>
> After changing the NewSize to 512. Can you please confirm how you come to
> know that the size is not set.
> 
>> What does your app do?
>>
> This is document uploading and authorizing applicaton.
> 
> 
> When did this start happening, was it after a specific app update?
> I am seeing this after changing the default policy to CMS
> 
>> Have you observed load increasing due to user activity?
>>
> Concurrent # of users are 12
> 
> Regards,
> Vidyadhar
> 
>>
>> p
>>
> 
> 
> 
>>
>>> 62230.611: [ParNew (promotion failed)
>>> Desired survivor size 32768 bytes, new threshold 0 (max 0)
>>> : 28481K->28481K(28608K), 0.0483728 secs]62230.659: [CMS (concurrent mode
>>> failure)
>>>
>>> Also Garbage collection is causing some large pauses. The largest pause
>> was
>>> 121239 ms.
>>>
>>> : 1255376K->215461K(2068480K), 121.1880176 secs]
>>> 1283830K->215461K(2097088K)Heap after gc invocations=12320:
>>> par new generation   total 28608K, used 0K [0x68800000, 0x6a400000,
>>> 0x6a400000)
>>>  eden space 28544K,   0% used [0x68800000, 0x68800000, 0x6a3e0000)
>>>  from space 64K,   0% used [0x6a3e0000, 0x6a3e0000, 0x6a3f0000)
>>>  to   space 64K,   0% used [0x6a3f0000, 0x6a3f0000, 0x6a400000)
>>> concurrent mark-sweep generation total 2068480K, used 215461K
>> [0x6a400000,
>>> 0xe8800000, 0xe8800000)
>>> concurrent-mark-sweep perm gen total 88496K, used 55091K [0xe8800000,
>>> 0xede6c000, 0xf8800000)
>>> }
>>> , 121.2390524 secs]
>>>
>>> Following is my JVM argument
>>> -server -verbose:gc -Xmx2048m -Xms2048m -XX:NewSize=512m
>>> -XX:MaxNewSize=512m -XX:MaxPermSize=256M -XX:+UseConcMarkSweepGC
>>> -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled
>>> -Dsun.rmi.dgc.client.gcInterval=3600000
>>> -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+DisableExplicitGC
>>> -XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnCtrlBreak
>>> -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintClassHistogram
>>> -XX:+PrintGCDetails -XX:+PrintTenuringDistribution
>>>
>>> Tomcat Version
>>> 6.0.36
>>>
>>> JDK Version
>>> Sun HotSpot 1.5.0.22
>>>
>>> CPU
>>> Number of Physical processor 1
>>> Number of Virtual processor 7
>>>
>>> RAM
>>> 6144MB
>>>
>>> OS
>>> SunOS 5.10 Generic_147440-09 sun4v sparc sun4v
>>>
>>> Do you have any idea how to tune it further?
>>>

I think that the question should be asked: with an application that has only 12
concurrent users, is there any particular reason why you *need* to specify all
these parameters?
By default, without any of these memory and GC-related parameters, the JVM will use 
default values which are pre-tuned by real specialists to be *reasonable* for most 
applications.
I run about 20 Tomcat servers, with a variety of applications and loads.  The only 
non-default parameters I ever specify, other than for specific debugging, are "-Xms" and 
"-Xmx", and these servers are doing fine as far as I can tell.
By specifying all these parameters, is it possible that you are in fact doing the very 
opposite of "tuning", and that you are completely "de-tuning" the JVM?
If you just remove them all, and specify only
"-Xmx2048m -Xms2048m", what happens ?

"Premature optimization is the root of all evil"
http://en.wikiquote.org/wiki/Donald_Knuth



Re: ParNew promotion failed in verbose GC logs

Posted by Techienote com <te...@gmail.com>.
On Tue, Apr 16, 2013 at 12:33 PM, Pïd stèr <pi...@pidster.com> wrote:

> On 16 Apr 2013, at 06:35, Techienote com <te...@gmail.com> wrote:
>
> > Hi Team,
> >
> > Recently I am seeing concurrent mode failure errors in my Verbose GC
> logs.
> > For the same I have set NewSize to 512MB, still I am seeing concurrent
> mode
> > failure in the Verbose GC logs
>
> Where have you set these JVM attributes and how are you starting Tomcat?
>
> Is the log below from after you set NewSize to 512 or before? It
> doesn't look like it is set to me.
>
After changing the NewSize to 512. Can you please explain how you can tell
that the size is not set?

>
> What does your app do?
>
This is a document uploading and authorizing application.

>

> When did this start happening, was it after a specific app update?
>
I am seeing this after changing the default policy to CMS

>
> Have you observed load increasing due to user activity?
>
Concurrent # of users are 12

Regards,
Vidyadhar

>
>
> p
>



>
>
> > 62230.611: [ParNew (promotion failed)
> > Desired survivor size 32768 bytes, new threshold 0 (max 0)
> > : 28481K->28481K(28608K), 0.0483728 secs]62230.659: [CMS (concurrent mode
> > failure)
> >
> > Also Garbage collection is causing some large pauses. The largest pause
> was
> > 121239 ms.
> >
> > : 1255376K->215461K(2068480K), 121.1880176 secs]
> > 1283830K->215461K(2097088K)Heap after gc invocations=12320:
> > par new generation   total 28608K, used 0K [0x68800000, 0x6a400000,
> > 0x6a400000)
> >  eden space 28544K,   0% used [0x68800000, 0x68800000, 0x6a3e0000)
> >  from space 64K,   0% used [0x6a3e0000, 0x6a3e0000, 0x6a3f0000)
> >  to   space 64K,   0% used [0x6a3f0000, 0x6a3f0000, 0x6a400000)
> > concurrent mark-sweep generation total 2068480K, used 215461K
> [0x6a400000,
> > 0xe8800000, 0xe8800000)
> > concurrent-mark-sweep perm gen total 88496K, used 55091K [0xe8800000,
> > 0xede6c000, 0xf8800000)
> > }
> > , 121.2390524 secs]
> >
> > Following is my JVM argument
> > -server -verbose:gc -Xmx2048m -Xms2048m -XX:NewSize=512m
> > -XX:MaxNewSize=512m -XX:MaxPermSize=256M -XX:+UseConcMarkSweepGC
> > -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled
> > -Dsun.rmi.dgc.client.gcInterval=3600000
> > -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+DisableExplicitGC
> > -XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnCtrlBreak
> > -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintClassHistogram
> > -XX:+PrintGCDetails -XX:+PrintTenuringDistribution
> >
> > Tomcat Version
> > 6.0.36
> >
> > JDK Version
> > Sun HotSpot 1.5.0.22
> >
> > CPU
> > Number of Physical processor 1
> > Number of Virtual processor 7
> >
> > RAM
> > 6144MB
> >
> > OS
> > SunOS 5.10 Generic_147440-09 sun4v sparc sun4v
> >
> > Do you have any idea how to tune it further?
> >
> >
> >
> > Regards,
> >
> > Vidyadhar
>

Re: ParNew promotion failed in verbose GC logs

Posted by Pïd stèr <pi...@pidster.com>.
On 16 Apr 2013, at 06:35, Techienote com <te...@gmail.com> wrote:

> Hi Team,
>
> Recently I am seeing concurrent mode failure errors in my Verbose GC logs.
> For the same I have set NewSize to 512MB, still I am seeing concurrent mode
> failure in the Verbose GC logs

Where have you set these JVM attributes and how are you starting Tomcat?

Is the log below from after you set NewSize to 512 or before? It
doesn't look like it is set to me.

What does your app do?

When did this start happening, was it after a specific app update?

Have you observed load increasing due to user activity?


p


> 62230.611: [ParNew (promotion failed)
> Desired survivor size 32768 bytes, new threshold 0 (max 0)
> : 28481K->28481K(28608K), 0.0483728 secs]62230.659: [CMS (concurrent mode
> failure)
>
> Also Garbage collection is causing some large pauses. The largest pause was
> 121239 ms.
>
> : 1255376K->215461K(2068480K), 121.1880176 secs]
> 1283830K->215461K(2097088K)Heap after gc invocations=12320:
> par new generation   total 28608K, used 0K [0x68800000, 0x6a400000,
> 0x6a400000)
>  eden space 28544K,   0% used [0x68800000, 0x68800000, 0x6a3e0000)
>  from space 64K,   0% used [0x6a3e0000, 0x6a3e0000, 0x6a3f0000)
>  to   space 64K,   0% used [0x6a3f0000, 0x6a3f0000, 0x6a400000)
> concurrent mark-sweep generation total 2068480K, used 215461K [0x6a400000,
> 0xe8800000, 0xe8800000)
> concurrent-mark-sweep perm gen total 88496K, used 55091K [0xe8800000,
> 0xede6c000, 0xf8800000)
> }
> , 121.2390524 secs]
>
> Following is my JVM argument
> -server -verbose:gc -Xmx2048m -Xms2048m -XX:NewSize=512m
> -XX:MaxNewSize=512m -XX:MaxPermSize=256M -XX:+UseConcMarkSweepGC
> -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled
> -Dsun.rmi.dgc.client.gcInterval=3600000
> -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:+DisableExplicitGC
> -XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnCtrlBreak
> -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintClassHistogram
> -XX:+PrintGCDetails -XX:+PrintTenuringDistribution
>
> Tomcat Version
> 6.0.36
>
> JDK Version
> Sun HotSpot 1.5.0.22
>
> CPU
> Number of Physical processor 1
> Number of Virtual processor 7
>
> RAM
> 6144MB
>
> OS
> SunOS 5.10 Generic_147440-09 sun4v sparc sun4v
>
> Do you have any idea how to tune it further?
>
>
>
> Regards,
>
> Vidyadhar
