Posted to common-dev@hadoop.apache.org by "Vinod K V (JIRA)" <ji...@apache.org> on 2009/05/25 19:04:45 UTC

[jira] Commented: (HADOOP-5881) Simplify configuration related to task-memory-monitoring and memory-based scheduling

    [ https://issues.apache.org/jira/browse/HADOOP-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12712751#action_12712751 ] 

Vinod K V commented on HADOOP-5881:
-----------------------------------

Some of the problems that stood out and the corresponding solutions, after some discussions with Eric, Arun and Hemanth:
 - The current configuration system doesn't distinguish the memory usage of maps from that of reduces. In general, reduces are more memory intensive than maps. Also, because of this lack of distinction, we use the total memory available on the TT as a shared resource across slot types. This led to the problems mentioned in HADOOP-5811. The solution is to divide the memory resource between map slots and reduce slots. In the presence of high-memory jobs, the map tasks of these jobs will use the memory of other map slots and will not take memory away from reduce slots, and vice versa. To reflect the general difference in usage between map slots and reduce slots, we may want to specify more map slots on a node than reduce slots.
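A minimal sketch of the accounting this implies (illustrative numbers and helper names only, not Hadoop's actual scheduler code): a high-memory map task simply occupies several *map* slots, leaving reduce slots untouched, and vice versa.

```python
# Sketch: per-slot-type memory accounting (illustrative only).
# A high-memory task consumes multiple slots of its own type;
# the other slot type's memory is never borrowed.

def slots_needed(task_memory_mb, memory_per_slot_mb):
    """Number of same-type slots a task occupies, rounded up."""
    return -(-task_memory_mb // memory_per_slot_mb)  # ceiling division

# Assumed per-node configuration: more map slots than reduce slots,
# reflecting the typically higher memory use of reduces.
MAP_SLOT_MB, REDUCE_SLOT_MB = 512, 1024

# A "high memory" job asking for 2 GB per map task:
print(slots_needed(2048, MAP_SLOT_MB))     # occupies 4 map slots
print(slots_needed(1024, REDUCE_SLOT_MB))  # occupies 1 reduce slot
```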

 - We now have separate configuration for specifying default values for a job's memory configuration. This is unnecessary and can be better handled by layering the configuration. As Arun suggests, default values can be provided by cluster admins to users via the configuration files distributed to clients.
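The layering can be pictured as a simple lookup chain (property names here are hypothetical; Hadoop's Configuration class achieves the same effect by loading default resources before site- and job-level resources): a job-supplied value wins, otherwise the admin-distributed default applies.

```python
# Sketch of layered configuration lookup (illustrative; property
# names are hypothetical, not actual Hadoop keys).
from collections import ChainMap

cluster_defaults = {"job.map.memory.mb": 512,    # shipped by admins
                    "job.reduce.memory.mb": 1024}
job_overrides = {"job.map.memory.mb": 2048}      # a high-memory job

# First mapping in the chain shadows the later ones.
effective = ChainMap(job_overrides, cluster_defaults)

print(effective["job.map.memory.mb"])     # 2048: job override wins
print(effective["job.reduce.memory.mb"])  # 1024: admin default applies
```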

 - With the current configuration, we have 1) the total memory available calculated on a TT, and 2) a `reserved` memory on the TT for system usage. Because of this mechanism, if a TT has less memory overall, we assign less memory to a single slot. As per the discussions, this seems like the wrong idea. To paraphrase Eric - "A slot is a slot, is a slot. TT will just be configured with the number of slots (map & reduce)." In essence, if a TT has less memory, the correct scheme is to decrease the number of slots, not the memory per slot. The memory allotted per slot should be more or less the same across all the TTs in the cluster.
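In other words, per-slot memory becomes a cluster-wide constant and only the slot count varies per node. A sketch of that arithmetic (illustrative numbers, not actual Hadoop code):

```python
# Sketch: "a slot is a slot" -- keep per-slot memory fixed across the
# cluster and vary the *number* of slots with the TT's memory.

def slots_for_node(node_memory_mb, memory_per_slot_mb):
    """Slots a TaskTracker should advertise, per slot type."""
    return node_memory_mb // memory_per_slot_mb

PER_SLOT_MB = 1024  # identical on every TT in the cluster

print(slots_for_node(8192, PER_SLOT_MB))  # big node: 8 slots
print(slots_for_node(4096, PER_SLOT_MB))  # small node: 4 slots, same slot size
```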

 - We are distinguishing the virtual memory used by processes from the physical memory. This seems necessary when considering streaming/pipes tasks. However, "in Java, once the VM hits swap, performance degrades fast, we want to configure the limits based on the physical memory on the machine (not including swap), to avoid thrashing". With this in view, there doesn't seem to be any need to distinguish vmem from physical memory w.r.t. configuration. Depending on a site's requirements, the configuration items can reflect whether we want tasks to go beyond physical memory or not.
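The enforcement this suggests can be sketched as follows (illustrative only; the function name and values are hypothetical, though Hadoop's task-memory monitoring does something similar by polling per-process memory usage): the limit is checked against resident (physical) memory, so a task is stopped before it starts thrashing in swap.

```python
# Sketch: enforce a physical-memory (RSS) limit so Java tasks never
# spill into swap (illustrative; names and numbers are hypothetical).

def should_kill(task_rss_mb, limit_mb, enforce_physical=True):
    """Return True if a task exceeds its memory limit.

    With enforce_physical=True, the limit applies to resident
    (physical) memory only; a site willing to tolerate swapping
    could instead check virtual memory here.
    """
    return enforce_physical and task_rss_mb > limit_mb

print(should_kill(1500, 1024))  # True: over the physical limit, kill it
print(should_kill(900, 1024))   # False: within the limit
```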

> Simplify configuration related to task-memory-monitoring and memory-based scheduling
> ------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5881
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5881
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Vinod K V
>
> The configuration we have now is pretty complicated. Besides everything else, the mechanism of not specifying parameters separately for maps and reduces leads to problems like HADOOP-5811. This JIRA should address simplifying things and at the same time solving these problems.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.