Posted to hdfs-user@hadoop.apache.org by Patai Sangbutsarakum <si...@gmail.com> on 2012/10/11 20:33:23 UTC

Used Heap in Namenode & dfs.replication

Hi Hadoopers,

I am looking at DFS' cluster summary.

"14708427 files and directories, 16357951 blocks = 31066378 total"

From White's book (2nd Edition), page 42: "As a rule of thumb, each
file, directory, and block takes about 150 bytes".

So, 31066378 * 150 bytes => 4.34 GB
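
A quick sanity check of that arithmetic, as a Python 3 snippet (treating 1 GB here as 2**30 bytes):

    # ~150 bytes per namespace object is only a rule of thumb
    objects = 31066378              # files + directories + blocks from the summary line
    print(objects * 150 / 2 ** 30)  # ~4.34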

The rest of the line reads: Heap Size is 12.17 GB / 34.72 GB

12.17 GB is roughly three times 4.34 GB. Is that because of the
replication factor of 3?

Thanks
Patai

I am on 0.20.2

Re: Used Heap in Namenode & dfs.replication

Posted by Patai Sangbutsarakum <si...@gmail.com>.
Thanks Harsh.

This is from the web UI:
14591213 files and directories, 16191821 blocks = 30783034 total. Heap
Size is 9.3 GB / 34.72 GB (26%)

This is from JMX:
"name": "java.lang:type=Memory",
"modelerType": "sun.management.MemoryImpl",
"Verbose": false,
"HeapMemoryUsage": {

    "committed": 24427036672,
    "init": 791179584,
    "max": 37282709504,
    "used": 21456071792

},

I hope I'm looking at the right spot on the JMX page.
I set -Xmx to 40G, but JMX reports a max of about 37 GB.
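
Converting those HeapMemoryUsage byte counts by hand (assuming the web
UI reports binary GB, i.e. GiB; Python 3):

    max_bytes, used_bytes = 37282709504, 21456071792  # from the JMX paste above
    print(max_bytes / 2 ** 30)    # ~34.72 -- matches the "34.72 GB" on the web UI
    print(max_bytes / 10 ** 9)    # ~37.28 -- the same maximum expressed in decimal GB
    print(used_bytes / 2 ** 30)   # ~19.98 -- heap currently in use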


Thanks

On Fri, Oct 12, 2012 at 9:03 AM, Harsh J <ha...@cloudera.com> wrote:
> Apache Hadoop 0.20.2 may not report heap usage accurately on that
> web UI. See https://issues.apache.org/jira/browse/HDFS-94 for the fix
> we had to do. You can measure actual usage via either jmap -histo:live
> or http://NNWebUI:PORT/jmx if that's available (it exposes some JVM
> metrics you can consume).
>
> On Fri, Oct 12, 2012 at 12:03 AM, Patai Sangbutsarakum
> <si...@gmail.com> wrote:
>> Hi Hadoopers,
>>
>> I am looking at DFS' cluster summary.
>>
>> "14708427 files and directories, 16357951 blocks = 31066378 total"
>>
>> From White's book (2nd Edition), page 42: "As a rule of thumb, each
>> file, directory, and block takes about 150 bytes".
>>
>> So, 31066378 * 150 bytes => 4.34 GB
>>
>> The rest of the line reads: Heap Size is 12.17 GB / 34.72 GB
>>
>> 12.17 GB is roughly three times 4.34 GB. Is that because of the
>> replication factor of 3?
>>
>> Thanks
>> Patai
>>
>> I am on 0.20.2
>
>
>
> --
> Harsh J

Re: Used Heap in Namenode & dfs.replication

Posted by Harsh J <ha...@cloudera.com>.
Apache Hadoop 0.20.2 may not report heap usage accurately on that
web UI. See https://issues.apache.org/jira/browse/HDFS-94 for the fix
we had to do. You can measure actual usage via either jmap -histo:live
or http://NNWebUI:PORT/jmx if that's available (it exposes some JVM
metrics you can consume).
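
For example, a minimal Python 3 sketch for reading the heap figures from
that servlet (assuming it returns the standard {"beans": [...]} JSON; the
hostname and port below are placeholders for your NameNode web UI address):

    import json
    from urllib.request import urlopen

    # Placeholder address -- substitute your NameNode host:port.
    url = "http://namenode.example.com:50070/jmx"
    beans = json.loads(urlopen(url).read().decode("utf-8")).get("beans", [])

    for bean in beans:
        if bean.get("name") == "java.lang:type=Memory":
            heap = bean["HeapMemoryUsage"]   # byte counts
            for key in ("used", "committed", "max"):
                print("%-9s %.2f GiB" % (key, heap[key] / 2 ** 30))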

On Fri, Oct 12, 2012 at 12:03 AM, Patai Sangbutsarakum
<si...@gmail.com> wrote:
> Hi Hadoopers,
>
> I am looking at DFS' cluster summary.
>
> "14708427 files and directories, 16357951 blocks = 31066378 total"
>
> From White's book (2nd Edition), page 42: "As a rule of thumb, each
> file, directory, and block takes about 150 bytes".
>
> So, 31066378 * 150 bytes => 4.34 GB
>
> The rest of the line reads: Heap Size is 12.17 GB / 34.72 GB
>
> 12.17 GB is roughly three times 4.34 GB. Is that because of the
> replication factor of 3?
>
> Thanks
> Patai
>
> I am on 0.20.2



-- 
Harsh J
