Posted to commits@zeppelin.apache.org by zj...@apache.org on 2020/07/31 06:16:37 UTC

[zeppelin] branch branch-0.9 updated: [minor] add spark.executor.instances to spark doc

This is an automated email from the ASF dual-hosted git repository.

zjffdu pushed a commit to branch branch-0.9
in repository https://gitbox.apache.org/repos/asf/zeppelin.git


The following commit(s) were added to refs/heads/branch-0.9 by this push:
     new f09a534  [minor] add spark.executor.instances to spark doc
f09a534 is described below

commit f09a534513ce6327f518e55de31c8f01cd2e8267
Author: Jeff Zhang <zj...@apache.org>
AuthorDate: Fri Jul 31 14:14:52 2020 +0800

    [minor] add spark.executor.instances to spark doc
---
 docs/interpreter/spark.md                                     | 5 +++++
 spark/interpreter/src/main/resources/interpreter-setting.json | 7 +++++++
 2 files changed, 12 insertions(+)

diff --git a/docs/interpreter/spark.md b/docs/interpreter/spark.md
index 5fc9305..6a023bd 100644
--- a/docs/interpreter/spark.md
+++ b/docs/interpreter/spark.md
@@ -115,6 +115,11 @@ You can also set other Spark properties which are not listed in the table. For a
     <td>Executor memory per worker instance. <br/> e.g. 512m, 32g</td>
   </tr>
   <tr>
+    <td>spark.executor.instances</td>
+    <td>2</td>
+    <td>The number of executors for static allocation</td>
+  </tr>
+  <tr>
     <td>spark.files</td>
     <td></td>
     <td>Comma-separated list of files to be placed in the working directory of each executor. Globs are allowed.</td>
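The table row above documents the new property in spark.md. As a rough usage sketch (not part of this commit), a user could override the default of 2 executors from a note with Zeppelin's generic configuration paragraph, assuming the %spark.conf interpreter is available in the note and dynamic allocation is disabled (spark.executor.instances is only honored under static allocation):

    %spark.conf
    spark.executor.instances 4

The paragraph must run before the Spark interpreter process is launched; after that point the setting is fixed for the session.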
diff --git a/spark/interpreter/src/main/resources/interpreter-setting.json b/spark/interpreter/src/main/resources/interpreter-setting.json
index db078c5..0c832fb 100644
--- a/spark/interpreter/src/main/resources/interpreter-setting.json
+++ b/spark/interpreter/src/main/resources/interpreter-setting.json
@@ -61,6 +61,13 @@
         "description": "Executor memory per worker instance. ex) 512m, 32g",
         "type": "string"
       },
+      "spark.executor.instances": {
+        "envName": null,
+        "propertyName": "spark.executor.instances",
+        "defaultValue": "2",
+        "description": "The number of executors for static allocation.",
+        "type": "number"
+      },
       "spark.files": {
         "envName": null,
         "propertyName": "spark.files",