Posted to yarn-issues@hadoop.apache.org by "zhuqi (Jira)" <ji...@apache.org> on 2021/01/23 11:44:00 UTC

[jira] [Comment Edited] (YARN-10589) Improve logic of multi-node allocation

    [ https://issues.apache.org/jira/browse/YARN-10589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17270635#comment-17270635 ] 

zhuqi edited comment on YARN-10589 at 1/23/21, 11:43 AM:
---------------------------------------------------------

[~tanu.ajmera]

The logic has been improved from scheduling per node (node -> partition) to scheduling once per partition.

Handling a partition with too many nodes has already been improved by that change.
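For reference, a minimal sketch of what scheduling once per partition (instead of once per node) could look like. Only CandidateNodeSet, FiCaSchedulerNode, cs.getCandidateNodeSet and cs.allocateContainersToNode come from the snippet quoted below; the nodes list and the use of getPartition() are assumptions, and this is not the actual patch:

{code:java}
// Sketch (not the actual patch): derive the distinct partitions from the
// node list once, then invoke the scheduler once per partition rather
// than once per node. Assumes the usual CapacityScheduler context
// (cs, nodes) is in scope.
List<String> partitions = nodes.stream()
    .map(FiCaSchedulerNode::getPartition)
    .distinct()
    .collect(Collectors.toList());
for (String partition : partitions) {
  CandidateNodeSet<FiCaSchedulerNode> candidates =
      cs.getCandidateNodeSet(partition);
  if (candidates == null) {
    continue;
  }
  cs.allocateContainersToNode(candidates, false);
}
{code}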

 


was (Author: zhuqi):
[~tanu.ajmera]

The logic has been improved from scheduling per node (node -> partition) to scheduling once per partition.

 

> Improve logic of multi-node allocation
> --------------------------------------
>
>                 Key: YARN-10589
>                 URL: https://issues.apache.org/jira/browse/YARN-10589
>             Project: Hadoop YARN
>          Issue Type: Task
>    Affects Versions: 3.3.0
>            Reporter: Tanu Ajmera
>            Assignee: Tanu Ajmera
>            Priority: Major
>             Fix For: 3.4.0
>
>
> {code:java}
> for (String partition : partitions) {
>   if (current++ > start) {
>     break;
>   }
>   CandidateNodeSet<FiCaSchedulerNode> candidates =
>       cs.getCandidateNodeSet(partition);
>   if (candidates == null) {
>     continue;
>   }
>   cs.allocateContainersToNode(candidates, false);
> }
> {code}
> In the above logic, if we have thousands of nodes in one partition, we will still repeatedly access all nodes of the partition thousands of times. There is no break point where, if a node's partition has already been handled, the loop stops checking the other nodes in that partition.
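A minimal sketch of the kind of break point the description asks for, assuming the surrounding loop iterates over the cluster's nodes; the nodes list and the scheduledPartitions set are hypothetical names, not taken from the issue:

{code:java}
// Sketch (hypothetical): track partitions already scheduled in this pass,
// so a partition containing thousands of nodes is submitted to the
// scheduler once instead of once per node.
Set<String> scheduledPartitions = new HashSet<>();
for (FiCaSchedulerNode node : nodes) {
  String partition = node.getPartition();
  if (!scheduledPartitions.add(partition)) {
    // Partition already handled in this pass; skip its remaining nodes.
    continue;
  }
  CandidateNodeSet<FiCaSchedulerNode> candidates =
      cs.getCandidateNodeSet(partition);
  if (candidates == null) {
    continue;
  }
  cs.allocateContainersToNode(candidates, false);
}
{code}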



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org