Posted to yarn-issues@hadoop.apache.org by "Manikandan R (Jira)" <ji...@apache.org> on 2019/08/20 17:34:00 UTC

[jira] [Commented] (YARN-9767) PartitionQueueMetrics Issues

    [ https://issues.apache.org/jira/browse/YARN-9767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911575#comment-16911575 ] 

Manikandan R commented on YARN-9767:
------------------------------------

On #1 observation,

After container allocation, pending resources get deducted inside {{QueueMetrics#allocateResources}} using the node partition as opposed to the requested partition info. I think {{RMContainerImpl#getNodeLabelExpression}} can be used to decrease pending resources instead, as it is more appropriate for the following reasons:

1. {{RMContainerImpl#getNodeLabelExpression}} is derived from {{AppPlacementAllocator#getPrimaryRequestedNodePartition}}. The Javadoc of {{AppPlacementAllocator#getPrimaryRequestedNodePartition}} explains this well.
2. In this case, the actual intent is to run ANYwhere (which is nothing but the "default" partition), but the containers ended up on some non-exclusive partition. So increasing pending resources on the "default" partition or the PrimaryRequestedNodePartition (mostly "default" or a specific partition) and deducting pending resources the same way seems correct, rather than increasing on one partition and decreasing on another.

So the fix would be something like

{{AppSchedulingInfo#updateMetrics}}
{code:java}
queue.getMetrics().allocateResources(node.getPartition(), user, 1,
    containerAllocated.getContainer().getResource(), false);
queue.getMetrics().decrPendingResources(
    containerAllocated.getNodeLabelExpression(), user, 1,
    containerAllocated.getContainer().getResource());
{code}
instead of
{code:java}
queue.getMetrics().allocateResources(node.getPartition(), user, 1,
    containerAllocated.getContainer().getResource(), true);
{code}
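
To make the effect concrete, below is a small standalone sketch (plain Java, not the actual QueueMetrics code; the {{PendingMetrics}} class and partition names are hypothetical) that mimics the pending-resource bookkeeping for the 3 x 1 GB scenario from the description. Only the increment/decrement pattern mirrors the proposal above.
{code:java}
import java.util.HashMap;
import java.util.Map;

public class PendingMetricsSketch {

  /** Toy per-partition pending-MB counter standing in for QueueMetrics. */
  static class PendingMetrics {
    private final Map<String, Long> pendingMB = new HashMap<>();

    void incrPending(String partition, long mb) {
      pendingMB.merge(partition, mb, Long::sum);
    }

    void decrPending(String partition, long mb) {
      pendingMB.merge(partition, -mb, Long::sum);
    }

    @Override
    public String toString() {
      return pendingMB.toString();
    }
  }

  public static void main(String[] args) {
    long containerMB = 1024;
    int containers = 3;
    // The request carries no label, so pending is tracked under "default".
    String requestedPartition = "default";
    // The containers land on a node of non-exclusive partition "X".
    String nodePartition = "X";

    // Current behaviour: decrement uses the node partition.
    // Result: "default" stays at 3072 while "X" goes to -3072.
    PendingMetrics current = new PendingMetrics();
    for (int i = 0; i < containers; i++) {
      current.incrPending(requestedPartition, containerMB);
      current.decrPending(nodePartition, containerMB);
    }
    System.out.println("decrement on node partition     : " + current);

    // Proposed behaviour: decrement uses the container's node label
    // expression (primary requested partition), matching the increment side.
    // Result: "default" drains back to 0 and "X" is never touched.
    PendingMetrics proposed = new PendingMetrics();
    for (int i = 0; i < containers; i++) {
      proposed.incrPending(requestedPartition, containerMB);
      proposed.decrPending(requestedPartition, containerMB);
    }
    System.out.println("decrement on requested partition: " + proposed);
  }
}
{code}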

Please share your thoughts.

> PartitionQueueMetrics Issues
> ----------------------------
>
>                 Key: YARN-9767
>                 URL: https://issues.apache.org/jira/browse/YARN-9767
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Manikandan R
>            Assignee: Manikandan R
>            Priority: Major
>
> The intent of this Jira is to separately capture the issues/observations encountered during YARN-6492 development, for ease of tracking.
> Observations:
> Please refer to this comment:
> https://issues.apache.org/jira/browse/YARN-6492?focusedCommentId=16904027&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16904027
> 1. Since partition info is extracted from both the request and the node, there is a problem. For example, 
>  
> Node N has been mapped to label X (non-exclusive). Queue A has been configured with the ANY node label. App A requested resources from Queue A and its containers ran on Node N for some reason. During the AbstractCSQueue#allocateResource call, the node partition (from SchedulerNode) would get used for the calculation. Let's say the allocate call has been fired for 3 containers of 1 GB each; then
> a. PartitionDefault * queue A -> pending MB is 3 GB
> b. PartitionX * queue A -> pending MB is -3 GB
>  
> is the outcome. Because the app request was fired without any label specification, the #a metric is derived. After allocation is over, pending resources usually get decreased; when this happens, the node partition info is used, hence the #b metric is derived. 
>  
> Given this kind of situation, we will need to put some thought into getting the metrics right.
>  
> 2. Though the intent of this Jira is to add Partition Queue Metrics, we would like to retain the existing Queue Metrics for backward compatibility (as you can see from the Jira's discussion).
> With this patch and the YARN-9596 patch, QueueMetrics (for a queue) would be overridden either with some specific partition's values or with the default partition's values, or vice versa. For example, after a queue (say queue A) has been initialised with some min and max capacity and also with a node label's min and max capacity, QueueMetrics (availableMB) for queue A returns values based on the node label's capacity config.
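> To illustrate the direction of the fix for this observation, here is a minimal standalone sketch (plain Java; the {{AvailableMBSketch}} class is hypothetical and is not the .005.WIP.patch code) of keeping availableMB per partition so that an update for a labelled partition does not override the default-partition value:
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> 
> public class AvailableMBSketch {
> 
>   // Available MB tracked per partition; "" is the default partition.
>   private final Map<String, Long> availableMBByPartition = new HashMap<>();
> 
>   /** Record available MB for one partition only. */
>   public void setAvailableMB(String partition, long mb) {
>     availableMBByPartition.put(partition, mb);
>   }
> 
>   /** Legacy, non-partitioned gauge: always answers from the default partition. */
>   public long getAvailableMB() {
>     return availableMBByPartition.getOrDefault("", 0L);
>   }
> 
>   public static void main(String[] args) {
>     AvailableMBSketch metrics = new AvailableMBSketch();
>     metrics.setAvailableMB("", 8192);  // queue A, default partition cap
>     metrics.setAvailableMB("X", 2048); // queue A, label X cap
>     // Prints 8192: the label-X update no longer overrides the queue-level gauge.
>     System.out.println(metrics.getAvailableMB());
>   }
> }
> {code}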
> I've been working on these observations to provide a fix and have attached .005.WIP.patch. The focus of .005.WIP.patch is to ensure availableMB and availableVcores are correct (please refer to observation #2 above). Added more asserts in {{testQueueMetricsWithLabelsOnDefaultLabelNode}} to ensure the fix for #2 is working properly.
> One more thing to note: user metrics for availableMB and availableVcores at the root queue were not there even before; I have retained the same behaviour. User metrics for availableMB and availableVcores are available only at the child queue level, and also per partition.
>  


