Posted to mapreduce-issues@hadoop.apache.org by "Koji Noguchi (JIRA)" <ji...@apache.org> on 2010/08/12 18:52:23 UTC

[jira] Commented: (MAPREDUCE-1901) Jobs should not submit the same jar files over and over again

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12897833#action_12897833 ] 

Koji Noguchi commented on MAPREDUCE-1901:
-----------------------------------------

bq. we have started testing this patch internally and this would go into production in a couple of weeks.

Joydeep, is this being tested in your production environment?  What does the load look like?
I don't know the details, but I like the "part of the goal here is to not have to look up mtimes again and again" part.

We certainly have applications with many small tasks that use multiple libjars and distributed-cache files, resulting in too many getfileinfo calls to the namenode.
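For context, here is a minimal sketch of the submission path that generates those calls. The paths and jar names are hypothetical; the point is that each libjar/distributed-cache entry registered on the JobConf gets stat'ed on the namenode at submit time so its size and mtime can be recorded (and typically again when each tasktracker localizes it):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class SubmitWithCacheFiles {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(new Configuration(), SubmitWithCacheFiles.class);
    // Hypothetical resources; each entry costs a getfileinfo call at
    // submit time so its size/mtime can be recorded, and typically
    // another when a tasktracker localizes and re-checks it.
    DistributedCache.addFileToClassPath(new Path("/user/app/lib/hive-exec.jar"), conf);
    DistributedCache.addCacheFile(new URI("/user/app/aux/udfs.jar"), conf);
    DistributedCache.addCacheFile(new URI("/user/app/aux/lookup.dat"), conf);
    // ... remaining job setup and JobClient.runJob(conf) would follow.
  }
}
{code}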

> Jobs should not submit the same jar files over and over again
> -------------------------------------------------------------
>
>                 Key: MAPREDUCE-1901
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1901
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>            Reporter: Joydeep Sen Sarma
>         Attachments: 1901.PATCH
>
>
> Currently each Hadoop job uploads its required resources (jars/files/archives) to a new location in HDFS, and the map-reduce nodes executing the job then download these resources to local disk.
> In an environment where most users share a standard set of jars and files (because they are using a framework like Hive/Pig), the same jars keep getting uploaded and downloaded repeatedly. The overhead of this protocol (primarily in terms of end-user latency) is significant when:
> - the jobs are small (and, conversely, large in number)
> - the Namenode is under load (meaning HDFS latencies are high, and made worse in part by this protocol)
> Hadoop should provide a way for jobs in a cooperative environment to avoid submitting the same files over and over again. Identifying and caching execution resources by a content signature (md5/sha) would be a good option to have available.
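
To make the proposal concrete, here is a minimal, hedged sketch of content-signature caching against a shared HDFS directory. The /shared/jobcache path, naming scheme, and helper class are illustrative assumptions, not the design of the attached patch:

{code}
import java.io.FileInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MD5Hash;

public class SharedJarCache {
  /**
   * Uploads a local jar only if no copy with the same MD5 signature is
   * already cached, and returns the HDFS path jobs should reference.
   */
  public static Path cacheJar(Configuration conf, String localJar) throws Exception {
    MD5Hash md5;
    FileInputStream in = new FileInputStream(localJar);
    try {
      md5 = MD5Hash.digest(in); // content signature, per the description
    } finally {
      in.close();
    }
    FileSystem fs = FileSystem.get(conf);
    // Content-addressed location: identical jars map to the same path,
    // so repeat submissions skip the upload entirely.
    Path cached = new Path("/shared/jobcache/" + md5 + ".jar");
    if (!fs.exists(cached)) {
      fs.copyFromLocalFile(new Path(localJar), cached);
    }
    return cached;
  }
}
{code}

A real implementation would also have to deal with concurrent uploads to the same cache path and with permissions in a shared multi-user directory, which this sketch ignores.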

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.