Posted to dev@zeppelin.apache.org by "Philipp Dallig (Jira)" <ji...@apache.org> on 2020/05/05 08:44:00 UTC

[jira] [Created] (ZEPPELIN-4799) Use spark resource configuration

Philipp Dallig created ZEPPELIN-4799:
----------------------------------------

             Summary: Use spark resource configuration
                 Key: ZEPPELIN-4799
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4799
             Project: Zeppelin
          Issue Type: Improvement
          Components: Interpreters, Kubernetes
    Affects Versions: 0.9.0
            Reporter: Philipp Dallig
            Assignee: Philipp Dallig


Spark allows you to define the resource usage of your driver when it runs on YARN or Kubernetes in cluster mode.
The following configuration values are used to request or limit resources.
 - {{spark.driver.memory}}
 - {{spark.driver.memoryOverhead}}
 - {{spark.driver.cores}}
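
For example, the Spark interpreter settings could contain values like the following (the numbers are only illustrative):

{code}
spark.driver.memory           2g
spark.driver.memoryOverhead   512m
spark.driver.cores            2
{code}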

We should use this configuration when setting up a Zeppelin interpreter on YARN or Kubernetes.
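
As a rough illustration of how these values could be turned into a container resource request, here is a minimal Java sketch. It simply adds driver memory and overhead (falling back to about 10% of the driver memory, at least 384 MiB, which mirrors Spark's usual overhead default); the class and method names are hypothetical and not part of Zeppelin's actual launcher code:

{code:java}
import java.util.Properties;

public class DriverResourceSketch {

  // Parse values like "2g" or "512m" into MiB (plain numbers are treated as MiB).
  static long toMiB(String value) {
    String v = value.trim().toLowerCase();
    if (v.endsWith("g")) {
      return Long.parseLong(v.substring(0, v.length() - 1)) * 1024;
    } else if (v.endsWith("m")) {
      return Long.parseLong(v.substring(0, v.length() - 1));
    }
    return Long.parseLong(v);
  }

  // Combine driver memory and overhead into one container memory figure.
  static String containerMemory(Properties sparkProps) {
    long memory = toMiB(sparkProps.getProperty("spark.driver.memory", "1g"));
    // Assumption: fall back to 10% of driver memory (at least 384 MiB),
    // similar to Spark's usual overhead default.
    long overhead = sparkProps.containsKey("spark.driver.memoryOverhead")
        ? toMiB(sparkProps.getProperty("spark.driver.memoryOverhead"))
        : Math.max((long) (memory * 0.10), 384L);
    return (memory + overhead) + "Mi";
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("spark.driver.memory", "2g");
    props.setProperty("spark.driver.cores", "2");
    System.out.println("memory request/limit: " + containerMemory(props));
    System.out.println("cpu request/limit: " + props.getProperty("spark.driver.cores", "1"));
  }
}
{code}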

A good resource definition is [very important in Kubernetes|https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits].

My goal behind this change:
 - Create multiple Spark interpreters with different resource usage.
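
For instance, two interpreter groups with hypothetical names such as {{spark_small}} and {{spark_large}} could then be configured with different driver resources:

{code}
# spark_small
spark.driver.memory   1g
spark.driver.cores    1

# spark_large
spark.driver.memory   8g
spark.driver.cores    4
{code}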



--
This message was sent by Atlassian Jira
(v8.3.4#803005)