Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2008/08/08 08:14:46 UTC

[jira] Created: (HADOOP-3925) Configuration parameter to set the maximum number of mappers/reducers for a job

Configuration parameter to set the maximum number of mappers/reducers for a job
-------------------------------------------------------------------------------

                 Key: HADOOP-3925
                 URL: https://issues.apache.org/jira/browse/HADOOP-3925
             Project: Hadoop Core
          Issue Type: Improvement
          Components: mapred
            Reporter: dhruba borthakur


The JobTracker can be prone to a denial-of-service attack if a user submits a job that has a very large number of tasks. This has happened once in our cluster. It would be nice to have a configuration setting that limits the maximum number of tasks that a single job can have.
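A minimal sketch of the kind of submission-time guard being proposed (the class name, the config-driven limit, and the check location are illustrative assumptions, not actual JobTracker code):

```java
// Illustrative sketch of a per-job task cap; not actual JobTracker code.
public class TaskLimitGuard {
    private final int maxTasksPerJob; // e.g. read from site config; -1 = unlimited

    public TaskLimitGuard(int maxTasksPerJob) {
        this.maxTasksPerJob = maxTasksPerJob;
    }

    /** Reject an oversized job before any per-task bookkeeping is allocated. */
    public void check(int numMaps, int numReduces) {
        int total = numMaps + numReduces;
        if (maxTasksPerJob >= 0 && total > maxTasksPerJob) {
            throw new IllegalArgumentException(
                "job has " + total + " tasks, exceeding the limit of " + maxTasksPerJob);
        }
    }
}
```

The point of checking at submission time is that the rejection happens before the tracker materializes millions of task objects on its heap.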

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3925) Configuration parameter to set the maximum number of mappers/reducers for a job

Posted by "Amar Kamat (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12620867#action_12620867 ] 

Amar Kamat commented on HADOOP-3925:
------------------------------------

Will this still be the case with a (custom-made) scheduler? Capping the maximum number of maps in a job might be too restrictive. I think we should instead control the execution of these tasks via the scheduler; something like a fair scheduler might help.



[jira] Assigned: (HADOOP-3925) Configuration parameter to set the maximum number of mappers/reducers for a job

Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur reassigned HADOOP-3925:
----------------------------------------

    Assignee: dhruba borthakur



[jira] Commented: (HADOOP-3925) Configuration parameter to set the maximum number of mappers/reducers for a job

Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12620870#action_12620870 ] 

Devaraj Das commented on HADOOP-3925:
-------------------------------------

Dhruba, what are the symptoms of the DoS? Does the JT lock up and stop responding to heartbeats/client RPCs? Or are you worried that a single large job would starve the others?



[jira] Resolved: (HADOOP-3925) Configuration parameter to set the maximum number of mappers/reducers for a job

Posted by "Amareshwari Sriramadasu (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amareshwari Sriramadasu resolved HADOOP-3925.
---------------------------------------------

    Resolution: Duplicate

Fixed by HADOOP-4018
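For reference, the duplicate HADOOP-4018 exposed this limit as a JobTracker site-configuration property (key name as committed there, with -1 disabling the check); a typical entry in mapred-site.xml might look like the following, where the value 200000 is only an example:

```xml
<property>
  <name>mapred.jobtracker.maxtasks.per.job</name>
  <value>200000</value>
  <description>The maximum number of tasks a single job may have;
  -1 means no limit. Jobs that exceed it are failed at initialization,
  before the JobTracker allocates per-task state.</description>
</property>
```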



[jira] Commented: (HADOOP-3925) Configuration parameter to set the maximum number of mappers/reducers for a job

Posted by "dhruba borthakur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12620983#action_12620983 ] 

dhruba borthakur commented on HADOOP-3925:
------------------------------------------

One of our users submitted a job that had a million mappers and a million reducers. The JobTracker was running with a 3GB heap. It went to 100% CPU usage (probably GC) and never came back to life, even after 10 minutes. Is there a way (in the current release) to prevent this from happening?
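A rough back-of-envelope shows why two million tasks can overwhelm a 3GB heap. Assuming on the order of 1.5KB of JobTracker heap per task's bookkeeping state (an assumed figure for illustration, not a measured one), the task objects alone consume nearly the whole heap, leaving the collector thrashing:

```java
// Back-of-envelope heap estimate; the per-task byte count is an assumption.
public class HeapEstimate {
    public static void main(String[] args) {
        long tasks = 2_000_000L;                 // 1M maps + 1M reduces
        long bytesPerTask = 1_500L;              // assumed bookkeeping footprint per task
        long heapBytes = 3L * 1024 * 1024 * 1024; // 3GB JobTracker heap

        long taskBytes = tasks * bytesPerTask;
        System.out.printf("task bookkeeping: %d MB of %d MB heap (%.0f%%)%n",
                taskBytes >> 20, heapBytes >> 20, 100.0 * taskBytes / heapBytes);
    }
}
```

Under these assumptions the task state alone fills over 90% of the heap, which matches the observed symptom of the JVM pinned at 100% CPU in garbage collection.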
