Posted to common-dev@hadoop.apache.org by "Amar Kamat (JIRA)" <ji...@apache.org> on 2008/08/08 09:36:44 UTC
[jira] Commented: (HADOOP-3925) Configuration parameter to set the maximum number of mappers/reducers for a job
[ https://issues.apache.org/jira/browse/HADOOP-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12620867#action_12620867 ]
Amar Kamat commented on HADOOP-3925:
------------------------------------
Would this still be needed with a (custom-made) scheduler? Capping the maximum number of maps in a job might be too restrictive. I think we should control the execution of these tasks via the scheduler; something like a fair scheduler might help.
> Configuration parameter to set the maximum number of mappers/reducers for a job
> -------------------------------------------------------------------------------
>
> Key: HADOOP-3925
> URL: https://issues.apache.org/jira/browse/HADOOP-3925
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred
> Reporter: dhruba borthakur
>
> The JobTracker can be prone to a denial-of-service attack if a user submits a job that has a very large number of tasks. This has happened once in our cluster. It would be nice to have a configuration setting that limits the maximum number of tasks that a single job can have.
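
The per-job task cap described above could be sketched roughly as below. This is a minimal illustration, not Hadoop's actual implementation; the class name, method, and the assumed convention that a negative limit means "unlimited" are all hypothetical.

```java
// Hedged sketch of the proposed check: reject a job whose total task
// count exceeds a configurable cap. Names and semantics are assumptions,
// not the real JobTracker code.
public class JobTaskLimit {

    // Returns true if the job's task count is acceptable.
    // A cap < 0 is taken to mean "no limit", a common Hadoop convention.
    public static boolean withinLimit(int numMapTasks,
                                      int numReduceTasks,
                                      int maxTasksPerJob) {
        if (maxTasksPerJob < 0) {
            return true; // limit disabled
        }
        // Use long arithmetic so very large task counts cannot overflow.
        return (long) numMapTasks + (long) numReduceTasks <= maxTasksPerJob;
    }
}
```

On job submission the JobTracker could invoke such a check and fail the job up front, which is cheaper than scheduling (and then starving on) millions of tasks.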
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.