Posted to common-dev@hadoop.apache.org by "lohit vijayarenu (JIRA)" <ji...@apache.org> on 2008/04/18 00:15:21 UTC

[jira] Created: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Reduce redundant copy of Block object in BlocksMap.map hash map
---------------------------------------------------------------

                 Key: HADOOP-3272
                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.18.0
         Environment: All
            Reporter: lohit vijayarenu
            Assignee: lohit vijayarenu
             Fix For: 0.18.0


Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.
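
For illustration, here is a minimal sketch of the two patterns (hypothetical stub classes, not the actual org.apache.hadoop.dfs code): keeping a separate Block copy as the hash map key costs one extra object per block, whereas reusing the BlockInfo as its own key does not.

{noformat}
// Illustrative sketch only; Block and BlockInfo here are hypothetical stubs.
import java.util.HashMap;
import java.util.Map;

class Block {
  long blockId;
  Block(long blockId) { this.blockId = blockId; }
  @Override public int hashCode() { return (int) (blockId ^ (blockId >>> 32)); }
  @Override public boolean equals(Object o) {
    return o instanceof Block && ((Block) o).blockId == blockId;
  }
}

// BlockInfo carries the per-block metadata and is itself a Block.
class BlockInfo extends Block {
  BlockInfo(long blockId) { super(blockId); }
}

public class BlocksMapSketch {
  public static void main(String[] args) {
    Map<Block, BlockInfo> map = new HashMap<Block, BlockInfo>();
    BlockInfo info = new BlockInfo(42L);

    // Redundant pattern: a separate Block copy is retained as the key,
    // in addition to the BlockInfo stored as the value.
    map.put(new Block(info.blockId), info);

    // Space-saving pattern: the same BlockInfo serves as both key and value,
    // so no extra Block object is kept per block.
    map.put(info, info);
  }
}
{noformat}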

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Issue Comment Edited: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12590572#action_12590572 ] 

shv edited comment on HADOOP-3272 at 4/18/08 11:58 AM:
-----------------------------------------------------------------------

The patch is fairly simple, but it might have consequences.
I am all for committing it, but before we do, let's run some benchmarks.
I propose running TestDFSIO on a cluster of at least 100 nodes, creating, say, 400 files with 100 small blocks each simultaneously, both with and without the patch.
It would be very useful to post jmap -histo outputs for Block and BlockInfo objects before and after the patch is applied, as well as some estimates of the average gain per block.

      was (Author: shv):
    The patch is fairly simple, but it might have consequences.
I am all for committing it, but before we do, let's run some benchmarks.
I propose running TestDFSIO on a cluster of at least 100 nodes, creating, say, 400 files with 100 small blocks each simultaneously, both with and without the patch.
It would be very useful to post jmap -histo outputs for Block and BlockInfo objects before and after the patch is applied.
  
> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

lohit vijayarenu updated HADOOP-3272:
-------------------------------------

    Status: Patch Available  (was: Open)

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-3272:
----------------------------------------

      Resolution: Fixed
    Hadoop Flags: [Reviewed]
          Status: Resolved  (was: Patch Available)

I just committed this. Thank you Lohit.

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

lohit vijayarenu updated HADOOP-3272:
-------------------------------------

    Status: Open  (was: Patch Available)

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12590757#action_12590757 ] 

Hadoop QA commented on HADOOP-3272:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12380461/HADOOP-3272.patch
against trunk revision 645773.

    @author +1.  The patch does not contain any @author tags.

    tests included -1.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new javac compiler warnings.

    release audit +1.  The applied patch does not generate any new release audit warnings.

    findbugs +1.  The patch does not introduce any new Findbugs warnings.

    core tests -1.  The patch failed core unit tests.

    contrib tests +1.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2279/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2279/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2279/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2279/console

This message is automatically generated.

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

lohit vijayarenu updated HADOOP-3272:
-------------------------------------

    Attachment: HADOOP-3272.patch

Small fix. I looked at the jmap -histo output, and the number of Block objects is not comparable to that of BlockInfo; it is much, much smaller.

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "Konstantin Shvachko (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12590572#action_12590572 ] 

Konstantin Shvachko commented on HADOOP-3272:
---------------------------------------------

The patch is fairly simple, but it might have consequences.
I am all for committing it, but before we do, let's run some benchmarks.
I propose running TestDFSIO on a cluster of at least 100 nodes, creating, say, 400 files with 100 small blocks each simultaneously, both with and without the patch.
It would be very useful to post jmap -histo outputs for Block and BlockInfo objects before and after the patch is applied.

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12590648#action_12590648 ] 

lohit vijayarenu commented on HADOOP-3272:
------------------------------------------

Thanks, Koji. Pasting one more jmap dump:

Before patch
{noformat}
[lohit@ ~]$jmap -histo:live 2265 | grep Block
 11:      9343      298976  org.apache.hadoop.dfs.BlocksMap$BlockInfo
 17:      9342      224208  org.apache.hadoop.dfs.Block
 27:      1130       50512  [Lorg.apache.hadoop.dfs.BlocksMap$BlockInfo;
551:         4          64  org.apache.hadoop.dfs.BlockCrcUpgradeObjectNamenode$UpgradeStatus
595:         2          48  org.apache.hadoop.dfs.PendingReplicationBlocks$PendingBlockInfo
599:         1          48  org.apache.hadoop.dfs.PendingReplicationBlocks
752:         1          32  [Lorg.apache.hadoop.dfs.BlockCrcUpgradeObjectNamenode$UpgradeStatus;
930:         1          16  org.apache.hadoop.dfs.PendingReplicationBlocks$PendingReplicationMonitor
974:         1          16  org.apache.hadoop.dfs.BlocksMap
989:         1          16  org.apache.hadoop.dfs.UnderReplicatedBlocks
1003:         1           8  org.apache.hadoop.dfs.Block$1
1025:         1           8  org.apache.hadoop.dfs.LocatedBlocks$2
1083:         1           8  org.apache.hadoop.dfs.LocatedBlock$1
1105:         1           8  org.apache.hadoop.dfs.BlockCommand$1
{noformat}

After patch
{noformat}
[lohit@ ~]$jmap -histo:live 19406 | grep Block 
 11:      9224      295168  org.apache.hadoop.dfs.BlocksMap$BlockInfo
 22:       992       47552  [Lorg.apache.hadoop.dfs.BlocksMap$BlockInfo;
505:         4          64  org.apache.hadoop.dfs.BlockCrcUpgradeObjectNamenode$UpgradeStatus
555:         1          48  org.apache.hadoop.dfs.PendingReplicationBlocks
594:         1          40  java.util.concurrent.LinkedBlockingQueue
691:         1          32  [Lorg.apache.hadoop.dfs.BlockCrcUpgradeObjectNamenode$UpgradeStatus;
821:         1          16  org.apache.hadoop.dfs.BlocksMap
828:         1          16  java.util.concurrent.LinkedBlockingQueue$Node
832:         1          16  org.apache.hadoop.dfs.UnderReplicatedBlocks
854:         1          16  org.apache.hadoop.dfs.PendingReplicationBlocks$PendingReplicationMonitor
918:         1           8  org.apache.hadoop.dfs.BlockCommand$1
997:         1           8  org.apache.hadoop.dfs.LocatedBlock$1
1005:         1           8  org.apache.hadoop.dfs.Block$1
1006:         1           8  org.apache.hadoop.dfs.LocatedBlocks$2
{noformat}

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12590642#action_12590642 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-3272:
------------------------------------------------

+1, the patch looks good.

In this case we don't really need to store the BlockInfo reference twice (as key and value). If we implemented our own hash table, we could save one reference per entry.
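
For context, a sketch of what such a hash table could look like: a linear-probing table where the stored object is its own key, so each entry costs a single reference in the slot array instead of a HashMap entry holding separate key and value references. This is written for illustration only and is not Hadoop code; the class name is made up.

{noformat}
// Illustrative only: a minimal linear-probing set where the element acts as its own key.
class DirectHashSet<T> {
  private Object[] slots = new Object[16];
  private int size;

  /** Returns the stored element equal to key, or null if absent. */
  @SuppressWarnings("unchecked")
  T get(Object key) {
    for (int i = indexOf(key, slots); slots[i] != null; i = (i + 1) % slots.length) {
      if (slots[i].equals(key)) {
        return (T) slots[i];
      }
    }
    return null;
  }

  /** Adds the element; one reference per entry, no separate key object. */
  void put(T element) {
    if ((size + 1) * 2 > slots.length) {
      grow();
    }
    if (insert(slots, element)) {
      size++;
    }
  }

  /** Returns true if a previously empty slot was filled. */
  private static boolean insert(Object[] table, Object element) {
    int i = indexOf(element, table);
    while (table[i] != null && !table[i].equals(element)) {
      i = (i + 1) % table.length;
    }
    boolean isNew = table[i] == null;
    table[i] = element;
    return isNew;
  }

  private void grow() {
    Object[] bigger = new Object[slots.length * 2];
    for (Object e : slots) {
      if (e != null) {
        insert(bigger, e);
      }
    }
    slots = bigger;
  }

  private static int indexOf(Object key, Object[] table) {
    return (key.hashCode() & 0x7fffffff) % table.length;
  }
}
{noformat}

For comparison, each entry in java.util.HashMap of this era is a separate Entry object carrying key, value, next, and hash fields on top of the stored element itself.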

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12590582#action_12590582 ] 

lohit vijayarenu commented on HADOOP-3272:
------------------------------------------

Here is the jmap output of the namenode without the patch:
{noformat}
[lohit@ ~]$jmap -histo 34713 | grep Block
  8:     23347      747104  org.apache.hadoop.dfs.BlocksMap$BlockInfo
 13:     23352      560448  org.apache.hadoop.dfs.Block
 21:      1785      119120  [Lorg.apache.hadoop.dfs.BlocksMap$BlockInfo;
396:         5          80  [Lorg.apache.hadoop.dfs.Block;
399:         5          80  org.apache.hadoop.dfs.BlocksMap$NodeIterator
443:         4          64  org.apache.hadoop.dfs.BlockCrcUpgradeObjectNamenode$UpgradeStatus
479:         1          48  org.apache.hadoop.dfs.PendingReplicationBlocks
489:         2          48  org.apache.hadoop.dfs.LocatedBlock
516:         1          40  java.util.concurrent.LinkedBlockingQueue
586:         1          32  [Lorg.apache.hadoop.dfs.BlockCrcUpgradeObjectNamenode$UpgradeStatus;
696:         1          16  org.apache.hadoop.dfs.BlocksMap
700:         1          16  java.util.concurrent.LinkedBlockingQueue$Node
702:         1          16  org.apache.hadoop.dfs.UnderReplicatedBlocks
721:         1          16  org.apache.hadoop.dfs.PendingReplicationBlocks$PendingReplicationMonitor
838:         1           8  org.apache.hadoop.dfs.LocatedBlock$1
853:         1           8  org.apache.hadoop.dfs.LocatedBlocks$2
862:         1           8  org.apache.hadoop.dfs.BlockCommand$1
880:         1           8  org.apache.hadoop.dfs.Block$1
{noformat}

And here is one with the patch:
{noformat}
[lohit@ ~]$jmap -hist 29874 | grep Block 
 13:      9362      299584  org.apache.hadoop.dfs.BlocksMap$BlockInfo
 35:      1184       53096  [Lorg.apache.hadoop.dfs.BlocksMap$BlockInfo;
236:        27         648  org.apache.hadoop.dfs.LocatedBlock
274:        27         432  org.apache.hadoop.dfs.BlocksMap$NodeIterator
475:         3          96  org.apache.hadoop.dfs.LocatedBlocks
555:         4          64  org.apache.hadoop.dfs.BlockCrcUpgradeObjectNamenode$UpgradeStatus
600:         1          48  org.apache.hadoop.dfs.PendingReplicationBlocks
638:         1          40  java.util.concurrent.LinkedBlockingQueue
732:         1          32  [Lorg.apache.hadoop.dfs.BlockCrcUpgradeObjectNamenode$UpgradeStatus;
750:         1          24  org.apache.hadoop.dfs.PendingReplicationBlocks$PendingBlockInfo
859:         1          16  org.apache.hadoop.dfs.BlocksMap
866:         1          16  java.util.concurrent.LinkedBlockingQueue$Node
871:         1          16  org.apache.hadoop.dfs.UnderReplicatedBlocks
893:         1          16  org.apache.hadoop.dfs.PendingReplicationBlocks$PendingReplicationMonitor
953:         1           8  org.apache.hadoop.dfs.BlockCommand$1
1014:         1           8  org.apache.hadoop.dfs.LocatedBlock$1
1025:         1           8  org.apache.hadoop.dfs.LocatedBlocks$2
1046:         1           8  org.apache.hadoop.dfs.Block$1
{noformat}

The number of blocks in the two runs was different, but the difference in the number of Block objects is clear.

We have a key/value pair (Block/BlockInfo) for each block in the file system, stored in FSNameSystem.blocksMap. Using the same BlockInfo for both key and value (BlockInfo/BlockInfo) would eliminate one copy of Block for each block in the file system.

I will run TestDFSIO as suggested by Konstantin.
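
As a back-of-the-envelope reading of the pre-patch histogram above (an estimate only, not a measurement of the patch itself), the per-block gain is roughly the shallow size of one Block instance:

{noformat}
// Rough estimate from the pre-patch histogram above (illustrative only):
// 23352 org.apache.hadoop.dfs.Block instances account for 560448 bytes,
// i.e. about 24 bytes per Block copy removed by the patch. The hash map entry
// still holds a key reference either way; the saving is the duplicate Block object.
long bytesPerBlockCopy = 560448L / 23352L;  // = 24 bytes per block
{noformat}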

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591406#action_12591406 ] 

Hadoop QA commented on HADOOP-3272:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12380461/HADOOP-3272.patch
against trunk revision 645773.

    @author +1.  The patch does not contain any @author tags.

    tests included -1.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new javac compiler warnings.

    release audit +1.  The applied patch does not generate any new release audit warnings.

    findbugs +1.  The patch does not introduce any new Findbugs warnings.

    core tests +1.  The patch passed core unit tests.

    contrib tests +1.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2292/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2292/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2292/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2292/console

This message is automatically generated.

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

lohit vijayarenu updated HADOOP-3272:
-------------------------------------

    Affects Version/s:     (was: 0.18.0)

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "Koji Noguchi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12590598#action_12590598 ] 

Koji Noguchi commented on HADOOP-3272:
--------------------------------------

You might want to use jmap -histo:live.
Without it, the count also includes stale objects, so the numbers differ depending on when the last full GC happened.
(-histo:live performs a full GC first to eliminate any stale objects.)

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591627#action_12591627 ] 

Hudson commented on HADOOP-3272:
--------------------------------

Integrated in Hadoop-trunk #468 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/468/])

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3272) Reduce redundant copy of Block object in BlocksMap.map hash map

Posted by "lohit vijayarenu (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

lohit vijayarenu updated HADOOP-3272:
-------------------------------------

    Status: Patch Available  (was: Open)

> Reduce redundant copy of Block object in BlocksMap.map hash map
> ---------------------------------------------------------------
>
>                 Key: HADOOP-3272
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3272
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: All
>            Reporter: lohit vijayarenu
>            Assignee: lohit vijayarenu
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3272.patch
>
>
> Looks like we might have a redundant copy of the Block object as the key in the BlocksMap.map hash map. We should restore this to using the same object for both key and value to save space.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.