Posted to common-dev@hadoop.apache.org by "Yoram Arnon (JIRA)" <ji...@apache.org> on 2006/05/18 02:24:05 UTC
[jira] Created: (HADOOP-229) hadoop cp should generate a better number of map tasks
hadoop cp should generate a better number of map tasks
-------------------------------------------------------
Key: HADOOP-229
URL: http://issues.apache.org/jira/browse/HADOOP-229
Project: Hadoop
Type: Bug
Components: fs
Reporter: Yoram Arnon
Assigned to: Milind Bhandarkar
Priority: Minor
hadoop cp currently assigns 10 files to copy per map task.
In the case of a small number of large files on a large cluster (say, 300 files of 30GB each on a 300-node cluster), this results in long execution times.
It would be better to assign files per task such that the entire cluster is utilized: one file per map, with a cap of 10000 maps total, so as not to overburden the job tracker.
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
http://www.atlassian.com/software/jira
[jira] Resolved: (HADOOP-229) hadoop cp should generate a better number of map tasks
Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
[ http://issues.apache.org/jira/browse/HADOOP-229?page=all ]
Doug Cutting resolved HADOOP-229:
---------------------------------
Fix Version: 0.3
Resolution: Fixed
> hadoop cp should generate a better number of map tasks
> ------------------------------------------------------
>
> Key: HADOOP-229
> URL: http://issues.apache.org/jira/browse/HADOOP-229
> Project: Hadoop
> Type: Bug
> Components: fs
> Reporter: Yoram Arnon
> Assignee: Milind Bhandarkar
> Priority: Minor
> Fix For: 0.3
>
> hadoop cp currently assigns 10 files to copy per map task.
> In the case of a small number of large files on a large cluster (say, 300 files of 30GB each on a 300-node cluster), this results in long execution times.
> It would be better to assign files per task such that the entire cluster is utilized: one file per map, with a cap of 10000 maps total, so as not to overburden the job tracker.
[jira] Closed: (HADOOP-229) hadoop cp should generate a better number of map tasks
Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
[ http://issues.apache.org/jira/browse/HADOOP-229?page=all ]
Doug Cutting closed HADOOP-229:
-------------------------------
> hadoop cp should generate a better number of map tasks
> ------------------------------------------------------
>
> Key: HADOOP-229
> URL: http://issues.apache.org/jira/browse/HADOOP-229
> Project: Hadoop
> Type: Bug
> Components: fs
> Reporter: Yoram Arnon
> Assignee: Milind Bhandarkar
> Priority: Minor
> Fix For: 0.3.0
>
> hadoop cp currently assigns 10 files to copy per map task.
> In the case of a small number of large files on a large cluster (say, 300 files of 30GB each on a 300-node cluster), this results in long execution times.
> It would be better to assign files per task such that the entire cluster is utilized: one file per map, with a cap of 10000 maps total, so as not to overburden the job tracker.
[jira] Commented: (HADOOP-229) hadoop cp should generate a better number of map tasks
Posted by "Milind Bhandarkar (JIRA)" <ji...@apache.org>.
[ http://issues.apache.org/jira/browse/HADOOP-229?page=comments#action_12412420 ]
Milind Bhandarkar commented on HADOOP-229:
------------------------------------------
The number of maps is now computed as follows:
numMaps = max(1, min(numFiles, numNodes*10, totalBytes/256MB, 10000)).
Also added status reporting for every file (or every 32MB, approximately 10 seconds) so that tasks don't time out while copying huge files.
This fix is part of the patch attached to HADOOP-220.
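The formula in the comment above can be sketched as follows. This is a minimal illustration of the heuristic as stated, not Hadoop's actual code; the function and parameter names are hypothetical.

```python
# Sketch of the map-count heuristic from the comment above:
# numMaps = max(1, min(numFiles, numNodes*10, totalBytes/256MB, 10000))
# All names here are illustrative, not from the Hadoop source.

def num_copy_maps(num_files, num_nodes, total_bytes,
                  bytes_per_map=256 * 1024 * 1024,  # 256MB of data per map
                  maps_per_node=10,                 # at most 10 maps per node
                  max_maps=10000):                  # job tracker cap
    return max(1, min(num_files,
                      num_nodes * maps_per_node,
                      total_bytes // bytes_per_map,
                      max_maps))

# The reporter's scenario: 300 files of 30GB each on a 300-node cluster.
# The old behavior (10 files per map) would launch only 30 maps; the
# heuristic yields one map per file, using the whole cluster.
print(num_copy_maps(300, 300, 300 * 30 * 1024**3))  # 300
```

Each term of the min() caps the parallelism by a different resource: the number of files (a map copies whole files), the cluster's task slots, the total data volume, and the job tracker's scheduling capacity; the outer max() guarantees at least one map even for an empty copy.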
> hadoop cp should generate a better number of map tasks
> ------------------------------------------------------
>
> Key: HADOOP-229
> URL: http://issues.apache.org/jira/browse/HADOOP-229
> Project: Hadoop
> Type: Bug
> Components: fs
> Reporter: Yoram Arnon
> Assignee: Milind Bhandarkar
> Priority: Minor
>
> hadoop cp currently assigns 10 files to copy per map task.
> in case of a small number of large files on a large cluster (say 300 files of 30GB each on a 300 node cluster), this results in long execution times.
> better would be to assign files per task such that the entire cluster is utilized: one file per map, with a cap of 10000 maps total, so as not to over burden the job tracker.