Posted to yarn-issues@hadoop.apache.org by "Wangda Tan (JIRA)" <ji...@apache.org> on 2016/03/21 07:19:25 UTC

[jira] [Created] (YARN-4844) Upgrade fields of o.a.h.y.api.records.Resource from int32 to int64

Wangda Tan created YARN-4844:
--------------------------------

             Summary: Upgrade fields of o.a.h.y.api.records.Resource from int32 to int64
                 Key: YARN-4844
                 URL: https://issues.apache.org/jira/browse/YARN-4844
             Project: Hadoop YARN
          Issue Type: Sub-task
          Components: api
            Reporter: Wangda Tan
            Priority: Critical


We use int32 for memory now. Since YARN tracks memory in megabytes, the int32 maximum (2,147,483,647 MB, roughly 2 PB) is within reach of a large cluster: if a cluster has 10k nodes with 210 GB of memory each, the total is 2,150,400,000 MB and the cluster's total memory wraps around to a negative value.
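For illustration, a minimal self-contained sketch (plain Java, not YARN code; assumes memory tracked in MB as YARN does) showing how the total wraps negative:

    // Illustrative sketch, not YARN code: summing 10k nodes * 210 GB, in MB.
    public class ClusterMemoryOverflow {
        public static void main(String[] args) {
            final int nodes = 10_000;
            final int nodeMemoryMb = 210 * 1024;      // 210 GB per node, in MB

            int totalInt32 = 0;                       // current int32 field
            long totalInt64 = 0;                      // proposed int64 field
            for (int i = 0; i < nodes; i++) {
                totalInt32 += nodeMemoryMb;           // silently wraps past 2,147,483,647
                totalInt64 += nodeMemoryMb;
            }
            System.out.println("int32 total: " + totalInt32);  // -2144567296 (negative!)
            System.out.println("int64 total: " + totalInt64);  // 2150400000
        }
    }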

Another case that overflows int32 even more easily: we add the pending resources of all running apps into the cluster's total pending resources. If a problematic app requests too much (say 1M+ containers at 3 GB of memory each, which is already 3,072,000,000 MB, beyond the int32 maximum), int32 is not enough even for that single app.

Even if we cap each app's pending request, we cannot handle the case where there are many running apps, each with a capped but still significant amount of pending resources; the sum across apps overflows just the same, as the sketch below shows.
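To make the magnitudes concrete, a small sketch using the illustrative numbers above (hypothetical values, not YARN code):

    // Illustrative only: pending memory in MB, as in the examples above.
    public class PendingMemoryOverflow {
        public static void main(String[] args) {
            // One problematic app: 1M containers * 3 GB each, in MB.
            long oneAppPendingMb = 1_000_000L * (3 * 1024);
            System.out.println(oneAppPendingMb);      // 3072000000 > Integer.MAX_VALUE

            // Even a (hypothetical) per-app cap of 100 GB pending overflows
            // an int32 accumulator once enough apps are running:
            final int perAppCapMb = 100 * 1024;
            final int runningApps = 25_000;
            int aggregateInt32 = 0;
            for (int i = 0; i < runningApps; i++) {
                aggregateInt32 += perAppCapMb;        // silently wraps
            }
            System.out.println(aggregateInt32);       // -1734967296, not 2560000000
        }
    }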

So we may need to upgrade the int32 memory field (and possibly v-cores as well) to int64 to avoid integer overflow.
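A hedged sketch of what the widened API could look like (simplified and hypothetical; the actual o.a.h.y.api.records.Resource change would go through the protobuf-backed records):

    // Hypothetical, simplified sketch of the proposed direction, not the
    // final YARN API: widen memory (and optionally v-cores) to 64 bits.
    public abstract class Resource {
        // Today: public abstract int getMemory();   // int32, overflows in aggregates
        public abstract long getMemorySize();        // proposed int64 memory, in MB
        public abstract void setMemorySize(long memoryMb);

        public abstract int getVirtualCores();       // could be widened the same way
        public abstract void setVirtualCores(int vCores);
    }

On the wire the change is also benign: protobuf encodes int32 and int64 fields as varints, so widening the field type keeps values already within int32 range decoding identically for older clients.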



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)