Posted to common-dev@hadoop.apache.org by "Nick Rettinghouse (JIRA)" <ji...@apache.org> on 2009/03/02 18:06:56 UTC

[jira] Created: (HADOOP-5377) Inefficient jobtracker history file layout

Inefficient jobtracker history file layout
------------------------------------------

                 Key: HADOOP-5377
                 URL: https://issues.apache.org/jira/browse/HADOOP-5377
             Project: Hadoop Core
          Issue Type: Bug
          Components: mapred
         Environment: This is at least a problem on 0.15.
            Reporter: Nick Rettinghouse


Storing too many files in a single directory slows filesystem operations dramatically (many filesystems scan directory entries linearly), and in this case it also makes the grid a bit more difficult to manage.  On our jobtrackers, even with a 45-day purge cycle, we see hundreds of thousands of files in logs/hadoop/history.  The following is an example:

pchdm01.ypost.re1: logs/hadoop/history - 1,176,927 files!

This is the time(1) output of `ls | wc -l` in that directory:

real    0m56.042s
user    0m28.702s
sys     0m1.794s

Note that this was the second time I ran this file count; the first run took more than 4 minutes of real time.

===========================================

My recommendation is that the Hadoop team store these files in a date-based directory structure:
    history/2008/08/19
    history/2008/08/20
    history/2008/08/21

Using this structure gives us two important things: consistently good performance and the ability to easily delete or archive old files.
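
To make the idea concrete, here is a minimal sketch in Java of how such a day-bucketed path could be computed. This is hypothetical illustration code, not the actual JobTracker/JobHistory implementation; the class and method names are invented:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class HistoryLayout {
        // Bucket history files by submission day, e.g. "history/2008/08/19".
        // A zero-padded yyyy/MM/dd pattern also sorts lexicographically in
        // date order, which makes deleting or archiving old days easy.
        // Note: SimpleDateFormat is not thread-safe; a real implementation
        // would need synchronization or per-thread instances.
        private static final SimpleDateFormat DAY_FORMAT =
            new SimpleDateFormat("yyyy/MM/dd");

        public static String historyDirFor(long submitTimeMillis) {
            return "history/" + DAY_FORMAT.format(new Date(submitTimeMillis));
        }
    }

For example, historyDirFor(submitTime) for a job submitted on August 19, 2008 would return "history/2008/08/19".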

If we expect a Hadoop cluster to process hundreds of thousands of jobs per day, then we may want to break it down further by hour, like this (a purge sketch follows the list):
    history/2008/08/19/00
    history/2008/08/19/01
     ...
    history/2008/08/19/22
    history/2008/08/19/23
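
With either layout, purging becomes cheap: instead of scanning a million files, the purge deletes whole day directories. A hedged sketch, again hypothetical code assuming the yyyy/MM/dd layout above (any hourly subdirectories are removed by the recursion):

    import java.io.File;
    import java.text.SimpleDateFormat;
    import java.util.Calendar;

    public class HistoryPurge {
        // Delete day directories older than maxAgeDays (e.g. 45).
        // Works because zero-padded "yyyy/MM/dd" relative paths compare
        // lexicographically in date order.
        static void purge(File historyRoot, int maxAgeDays) {
            Calendar c = Calendar.getInstance();
            c.add(Calendar.DAY_OF_YEAR, -maxAgeDays);
            String cutoff = new SimpleDateFormat("yyyy/MM/dd").format(c.getTime());
            for (File year : children(historyRoot))
                for (File month : children(year))
                    for (File day : children(month)) {
                        String rel = year.getName() + "/" + month.getName()
                                   + "/" + day.getName();
                        if (rel.compareTo(cutoff) < 0)
                            deleteRecursively(day);  // one delete per expired day
                    }
        }

        static File[] children(File dir) {
            File[] fs = dir.listFiles();
            return fs == null ? new File[0] : fs;
        }

        static void deleteRecursively(File f) {
            File[] kids = f.listFiles();
            if (kids != null)
                for (File kid : kids) deleteRecursively(kid);
            f.delete();
        }
    }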




[jira] Resolved: (HADOOP-5377) Inefficient jobtracker history file layout

Posted by "Amar Kamat (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amar Kamat resolved HADOOP-5377.
--------------------------------

    Resolution: Duplicate

Nick, HADOOP-4670 was opened to address the same issue.
