Posted to common-dev@hadoop.apache.org by "Arun C Murthy (JIRA)" <ji...@apache.org> on 2008/05/22 00:15:56 UTC

[jira] Updated: (HADOOP-657) Free temporary space should be modelled better

     [ https://issues.apache.org/jira/browse/HADOOP-657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-657:
---------------------------------

    Assignee: Ari Rabkin  (was: Arun C Murthy)

> Free temporary space should be modelled better
> ----------------------------------------------
>
>                 Key: HADOOP-657
>                 URL: https://issues.apache.org/jira/browse/HADOOP-657
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.7.2
>            Reporter: Owen O'Malley
>            Assignee: Ari Rabkin
>
> Currently, there is a configurable size that must be free for a task tracker to accept a new task. However, that isn't a very good model of how much space the task is actually likely to need. I'd like to propose:
> Map tasks:  totalInputSize * conf.getFloat("map.output.growth.factor", 1.0) / numMaps
> Reduce tasks: totalInputSize * 2 * conf.getFloat("map.output.growth.factor", 1.0) / numReduces
> where totalInputSize is the combined size of all the map inputs for the given job.
> To start a new task, 
>   newTaskAllocation + (sum over running tasks of (1.0 - done) * allocation) <= 
>        free disk * conf.getFloat("mapred.max.scratch.allocation", 0.90);
> So in English, we will model the expected sizes of tasks and only start tasks that should leave us a 10% margin. With:
> map.output.growth.factor -- the size of a task's transient data relative to its map input
> mapred.max.scratch.allocation -- the maximum fraction of the free disk we want to allocate to tasks.
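
As an illustrative sketch of the proposed per-task estimates (the class and method names below are hypothetical, not part of the Hadoop API; only the configuration key map.output.growth.factor comes from the proposal):

    // Hypothetical helper; in Hadoop the growth factor would be read with
    // conf.getFloat("map.output.growth.factor", 1.0f) as in the proposal.
    public class TaskSpaceEstimator {

      /** Expected scratch space for one map task of the job. */
      public static long estimateMapTaskSpace(long totalInputSize, float growthFactor, int numMaps) {
        return (long) (totalInputSize * growthFactor / numMaps);
      }

      /** Expected scratch space for one reduce task: the proposal charges reduces
          twice the grown map output, divided across the reduces. */
      public static long estimateReduceTaskSpace(long totalInputSize, float growthFactor, int numReduces) {
        return (long) (totalInputSize * 2 * growthFactor / numReduces);
      }
    }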
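
The admission check itself could then look roughly like this (again only a sketch; RunningTask and its field names are made up for illustration, and maxScratchAllocation corresponds to mapred.max.scratch.allocation, default 0.90):

    import java.util.List;

    public class ScratchSpaceCheck {

      /** A task already running on this task tracker. */
      public static class RunningTask {
        final long allocationBytes; // expected scratch space for the task
        final float done;           // fraction of the task completed, 0.0 to 1.0

        RunningTask(long allocationBytes, float done) {
          this.allocationBytes = allocationBytes;
          this.done = done;
        }
      }

      /**
       * A new task is accepted only if its allocation, plus the unfinished part of
       * the allocations of the tasks already running, fits within the configured
       * fraction of the free scratch disk (leaving a 10% margin by default).
       */
      public static boolean canStartTask(long newTaskAllocation,
                                         List<RunningTask> running,
                                         long freeDiskBytes,
                                         float maxScratchAllocation) {
        double committed = newTaskAllocation;
        for (RunningTask t : running) {
          committed += (1.0 - t.done) * t.allocationBytes;
        }
        return committed <= freeDiskBytes * maxScratchAllocation;
      }
    }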

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.