Posted to common-user@hadoop.apache.org by Todd Lipcon <to...@cloudera.com> on 2010/02/16 06:54:13 UTC

Sun JVM 1.6.0u18

Hey all,

Just a note that you should avoid upgrading your clusters to 1.6.0u18.
We've seen a lot of segfaults or bus errors on the DN when running
with this JVM - Stack found the same thing on one of his clusters as
well.

We've found 1.6.0u16 to be very stable.

-Todd

RE: Sun JVM 1.6.0u18

Posted by Zl...@barclayscapital.com.
1.6.0u18 also claims to fix bug_id=5103988, which may or may not improve the performance of the transferTo code used in org.apache.hadoop.net.SocketOutputStream.
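
For readers unfamiliar with that path: it is the standard NIO zero-copy pattern that Hadoop wraps in SocketOutputStream. A minimal, hypothetical sketch of the pattern (not Hadoop's actual code; class and method names here are invented for illustration):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

// Hypothetical sketch of the zero-copy transferTo pattern; this is not
// Hadoop's SocketOutputStream implementation.
class TransferToSketch {
    // Assumes a blocking SocketChannel; transferTo may send fewer bytes
    // than requested, so loop until the whole file has been written.
    static void sendFile(FileChannel file, SocketChannel socket) throws IOException {
        long position = 0;
        long remaining = file.size();
        while (remaining > 0) {
            long sent = file.transferTo(position, remaining, socket);
            position += sent;
            remaining -= sent;
        }
    }
}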

-----Original Message-----
From: Scott Carey [mailto:scott@richrelevance.com] 
Sent: Monday, March 01, 2010 6:41 PM
To: common-user@hadoop.apache.org
Subject: Re: Sun JVM 1.6.0u18


On Mar 1, 2010, at 10:46 AM, Allen Wittenauer wrote:

> 
> 
> 
> On 3/1/10 7:24 AM, "Edward Capriolo" <ed...@gmail.com> wrote:
>> u14 added support for the 64bit compressed memory pointers which 
>> seemed important due to the fact that hadoop can be memory hungry. 
>> u15 has been stable in our deployments. Not saying you should not go 
>> newer, but I would not go older than u14.
> 
> How are the compressed memory pointers working for you?  I've been 
> debating turning them on here, so real world experience would be 
> useful from those that have taken the plunge.
> 

Been using it since they came out, both for Hadoop where needed and in many other applications.  Performance gains and memory reduction in most places -- sometimes rather significant (25%).  GC times are significantly lower for any heap that is reference heavy.  Heaps are still a little larger than 32-bit ones, but the benefits of native 64-bit code on x86 include improved computational performance as well.  6u18 introduces some performance enhancements to the feature that we might be able to use if 6u19 fixes the other bugs.  The next Hotspot version will make it the default setting, whenever that gets integrated and tested into the JDK6 line.  6u14 and 6u18 are the two most recent JDK releases with updated Hotspot versions.

Re: Sun JVM 1.6.0u18

Posted by Scott Carey <sc...@richrelevance.com>.
On Mar 1, 2010, at 10:46 AM, Allen Wittenauer wrote:

> 
> 
> 
> On 3/1/10 7:24 AM, "Edward Capriolo" <ed...@gmail.com> wrote:
>> u14 added support for the 64bit compressed memory pointers which
>> seemed important due to the fact that hadoop can be memory hungry. u15
>> has been stable in our deployments. Not saying you should not go
>> newer, but I would not go older than u14.
> 
> How are the compressed memory pointers working for you?  I've been debating
> turning them on here, so real world experience would be useful from those
> that have taken the plunge.
> 

Been using it since they came out, both for Hadoop where needed and in many other applications.  Performance gains and memory reduction in most places -- sometimes rather significant (25%).  GC times are significantly lower for any heap that is reference heavy.  Heaps are still a little larger than 32-bit ones, but the benefits of native 64-bit code on x86 include improved computational performance as well.  6u18 introduces some performance enhancements to the feature that we might be able to use if 6u19 fixes the other bugs.  The next Hotspot version will make it the default setting, whenever that gets integrated and tested into the JDK6 line.  6u14 and 6u18 are the two most recent JDK releases with updated Hotspot versions.
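
If anyone wants to experiment with the flag, it is a single JVM option. A sketch of how it could be wired into conf/hadoop-env.sh, assuming the stock 0.20-style *_OPTS variables (adjust variable names to your own setup):

# Sketch only: enable compressed oops for the HDFS daemons on a
# 64-bit Sun JVM (6u14 or later); the flag has no effect on 32-bit JVMs.
export HADOOP_NAMENODE_OPTS="-XX:+UseCompressedOops $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-XX:+UseCompressedOops $HADOOP_DATANODE_OPTS"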

Re: Sun JVM 1.6.0u18

Posted by Colin Evans <co...@metaweb.com>.
Here's Zenoss monitoring on one of our 8-core boxes for the day before
and the day after we switched to compressed pointers.  At Wed 8:00 and
Thu 8:00, we're running our automated data pipeline - exactly the same
high-load processes each day -- but the memory load isn't comparable.

We've been very happy with the results.



Re: Sun JVM 1.6.0u18

Posted by Steve Loughran <st...@apache.org>.
Allen Wittenauer wrote:
> 
> 
> On 3/1/10 7:24 AM, "Edward Capriolo" <ed...@gmail.com> wrote:
>> u14 added support for the 64bit compressed memory pointers which
>> seemed important due to the fact that hadoop can be memory hungry. u15
>> has been stable in our deployments. Not saying you should not go
>> newer, but I would not go older than u14.
> 
> How are the compressed memory pointers working for you?  I've been debating
> turning them on here, so real world experience would be useful from those
> that have taken the plunge.
> 

I used JRockit for a long time, which has had compressed references on
for ages. I hit some Hadoop problems, filed as bugreps. One of the
funniest is that JRockit stacks can get way bigger than Sun JVM stacks,
so some functional tests of mine that recursed and expected to OOM
instead timed out.

On the Sun JVMs, I've not used them at "datacentre scale" but have found
memory savings in everything from IDEs up.

It'd be nice if there were an easy way to turn it on by default for
everything, with no faffing around with app-specific options.
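
For what it's worth, the closest thing to a global switch today is the JAVA_TOOL_OPTIONS environment variable, which Sun JVMs read on startup. A sketch, with the caveat that it affects every Java process launched in that environment:

# Sketch: every JVM started in this environment picks up the flag and
# logs "Picked up JAVA_TOOL_OPTIONS: ..." on startup.
export JAVA_TOOL_OPTIONS="-XX:+UseCompressedOops"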

Re: Sun JVM 1.6.0u18

Posted by Allen Wittenauer <aw...@linkedin.com>.


On 3/1/10 7:24 AM, "Edward Capriolo" <ed...@gmail.com> wrote:
> u14 added support for the 64bit compressed memory pointers which
> seemed important due to the fact that hadoop can be memory hungry. u15
> has been stable in our deployments. Not saying you should not go
> newer, but I would not go older than u14.

How are the compressed memory pointers working for you?  I've been debating
turning them on here, so real world experience would be useful from those
that have taken the plunge.


Re: Sun JVM 1.6.0u18

Posted by Edward Capriolo <ed...@gmail.com>.
On Mon, Mar 1, 2010 at 6:37 AM, Steve Loughran <st...@apache.org> wrote:
> Todd Lipcon wrote:
>>
>> On Thu, Feb 25, 2010 at 11:09 AM, Scott Carey
>> <sc...@richrelevance.com>wrote:
>
>>
>>> I have found some notes that suggest that "-XX:-ReduceInitialCardMarks"
>>> will work around some known crash problems with 6u18, but that may be
>>> unrelated.
>>>
>>>
>> Yep, I think that is probably a likely workaround as well. For now I'm
>> recommending downgrade to our clients, rather than introducing cryptic XX
>> flags :)
>>
>
> Lots of bugreps come up once you search for ReduceInitialCardMarks.
>
> Looks like a feature has been turned on:
> http://bugs.sun.com/view_bug.do?bug_id=6889757
>
> and now it is in wide-beta-test
>
> http://bugs.sun.com/view_bug.do?bug_id=6888898
> http://permalink.gmane.org/gmane.comp.lang.scala/19228
>
> Looks like the root cause is a new Garbage Collector, one that is still
> settling down. The ReduceInitialCardMarks flag is tuning the GC, but it is
> the GC itself that is possibly playing up, or it is an old GC + some new
> features. Either way: trouble.
>
> -steve
>

FYI. We are still running:
[root@nyhadoopdata10 ~]# java -version
java version "1.6.0_15"
Java(TM) SE Runtime Environment (build 1.6.0_15-b03)
Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02, mixed mode)

u14 added support for the 64bit compressed memory pointers which
seemed important due to the fact that hadoop can be memory hungry. u15
has been stable in our deployments. Not saying you should not go
newer, but I would not go older than u14.

Re: Sun JVM 1.6.0u18

Posted by Steve Loughran <st...@apache.org>.
Todd Lipcon wrote:
> On Thu, Feb 25, 2010 at 11:09 AM, Scott Carey <sc...@richrelevance.com>wrote:

> 
>> I have found some notes that suggest that "-XX:-ReduceInitialCardMarks"
>> will work around some known crash problems with 6u18, but that may be
>> unrelated.
>>
>>
> Yep, I think that is probably a likely workaround as well. For now I'm
> recommending downgrade to our clients, rather than introducing cryptic XX
> flags :)
> 

Lots of bugreps come up once you search for ReduceInitialCardMarks.

Looks like a feature has been turned on:
http://bugs.sun.com/view_bug.do?bug_id=6889757

and now it is in wide-beta-test

http://bugs.sun.com/view_bug.do?bug_id=6888898
http://permalink.gmane.org/gmane.comp.lang.scala/19228

Looks like the root cause is a new Garbage Collector, one that is still 
settling down. The ReduceInitialCardMarks flag is tuning the GC, but it 
is the GC itself that is possibly playing up, or it is an old GC + some 
new features. Either way: trouble.

-steve

Re: Sun JVM 1.6.0u18

Posted by Todd Lipcon <to...@cloudera.com>.
On Thu, Feb 25, 2010 at 11:09 AM, Scott Carey <sc...@richrelevance.com>wrote:

> On Feb 15, 2010, at 9:54 PM, Todd Lipcon wrote:
>
> > Hey all,
> >
> > Just a note that you should avoid upgrading your clusters to 1.6.0u18.
> > We've seen a lot of segfaults or bus errors on the DN when running
> > with this JVM - Stack found the same thing on one of his clusters as
> > well.
> >
>
> Have you seen this for 32-bit, 64-bit, or both?  If 64-bit, was it with
> -XX:+UseCompressedOops?
>

Just 64-bit, no compressed oops. But I haven't tested other variables.


>
> Any idea if there are Sun bugs open for the crashes?
>
>
I opened one, yes. I think Stack opened a separate one. Haven't heard back.


> I have found some notes that suggest that "-XX:-ReduceInitialCardMarks"
> will work around some known crash problems with 6u18, but that may be
> unrelated.
>
>
Yep, I think that is probably a likely workaround as well. For now I'm
recommending downgrade to our clients, rather than introducing cryptic XX
flags :)
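
For anyone who would rather try the flag than downgrade, a sketch of what that workaround could look like in conf/hadoop-env.sh, assuming the stock HADOOP_DATANODE_OPTS variable (verify against your own crash reports first):

# Sketch only: disables the 6u18 card-mark optimization implicated in
# the crash reports; downgrading the JVM remains the more conservative option.
export HADOOP_DATANODE_OPTS="-XX:-ReduceInitialCardMarks $HADOOP_DATANODE_OPTS"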



> Lastly, I assume that Java 6u17 should work the same as 6u16, since it is a
> minor patch over 6u16, whereas 6u18 includes a new version of Hotspot.  Can
> anyone confirm that?
>
>
>
I haven't heard anything bad about u17 either. But since we know u16 to be
very good and nothing important is new in u17, I still recommend u16.

-Todd

Re: Sun JVM 1.6.0u18

Posted by Scott Carey <sc...@richrelevance.com>.
On Feb 15, 2010, at 9:54 PM, Todd Lipcon wrote:

> Hey all,
> 
> Just a note that you should avoid upgrading your clusters to 1.6.0u18.
> We've seen a lot of segfaults or bus errors on the DN when running
> with this JVM - Stack found the same thing on one of his clusters as
> well.
> 

Have you seen this for 32-bit, 64-bit, or both?  If 64-bit, was it with -XX:+UseCompressedOops?

Any idea if there are Sun bugs open for the crashes?

I have found some notes that suggest that "-XX:-ReduceInitialCardMarks" will work around some known crash problems with 6u18, but that may be unrelated.  

Lastly, I assume that Java 6u17 should work the same as 6u16, since it is a minor patch over 6u16, whereas 6u18 includes a new version of Hotspot.  Can anyone confirm that?


> We've found 1.6.0u16 to be very stable.
> 
> -Todd