Posted to dev@ignite.apache.org by "Edouard Chevalier (JIRA)" <ji...@apache.org> on 2016/01/20 16:07:39 UTC

[jira] [Created] (IGNITE-2419) Ignite on YARN do not handle memory overhead

Edouard Chevalier created IGNITE-2419:
-----------------------------------------

             Summary: Ignite on YARN do not handle memory overhead
                 Key: IGNITE-2419
                 URL: https://issues.apache.org/jira/browse/IGNITE-2419
             Project: Ignite
          Issue Type: Bug
          Components: hadoop
         Environment: hadoop cluster with YARN
            Reporter: Edouard Chevalier
            Priority: Critical


When deploying Ignite nodes with YARN, each JVM is launched with a defined amount of heap memory (the IGNITE_MEMORY_PER_NODE property, passed to the JVM as "-Xmx"), and YARN is asked to provide containers with exactly that amount of memory. But YARN monitors the memory of the overall process, not just the heap: a JVM can easily require more memory than the heap (VM and/or native overheads, per-thread overhead, and in Ignite's case possibly off-heap data structures). If tasks use all of the heap, the process memory will be well above the heap size. YARN then decides the node has exceeded its container and kills it, and another one is created. I have a scenario where tasks require all of the JVM heap and YARN continuously allocates/deallocates containers; the overall task never finishes.

My proposal is to implement a property IGNITE_OVERHEADMEMORY_PER_NODE, analogous to the spark.yarn.executor.memoryOverhead property in Spark (see: https://spark.apache.org/docs/latest/running-on-yarn.html#configuration ). I can implement it and create a pull request on GitHub.
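A minimal sketch of the idea, assuming the proposed IGNITE_OVERHEADMEMORY_PER_NODE property exists; the class and method names below are illustrative and do not reflect the actual ignite-yarn module code:

```java
// Hypothetical sketch: size the YARN container request as heap plus overhead,
// while keeping -Xmx at the heap size only, so the container monitor has
// headroom for native/off-heap allocations beyond the JVM heap.
public class ContainerSizing {
    /** Heap given to the JVM via -Xmx, in MB (IGNITE_MEMORY_PER_NODE). */
    private final int memoryPerNodeMb;

    /** Extra native/off-heap headroom, in MB (proposed IGNITE_OVERHEADMEMORY_PER_NODE). */
    private final int memoryOverheadPerNodeMb;

    public ContainerSizing(int memoryPerNodeMb, int memoryOverheadPerNodeMb) {
        this.memoryPerNodeMb = memoryPerNodeMb;
        this.memoryOverheadPerNodeMb = memoryOverheadPerNodeMb;
    }

    /** Memory to request from YARN for the container: heap plus overhead. */
    public int containerMemoryMb() {
        return memoryPerNodeMb + memoryOverheadPerNodeMb;
    }

    /** The JVM heap flag is unchanged: only the container request grows. */
    public String xmxFlag() {
        return "-Xmx" + memoryPerNodeMb + "m";
    }
}
```

For example, with 2048 MB of heap and 512 MB of overhead, YARN would be asked for a 2560 MB container while the JVM is still launched with -Xmx2048m, which is exactly how spark.yarn.executor.memoryOverhead avoids the same kill loop in Spark.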



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)