Posted to commits@dolphinscheduler.apache.org by gi...@apache.org on 2022/02/16 02:02:27 UTC

[dolphinscheduler-website] branch asf-site updated: Automated deployment: c544090ad4a18e07353ba347ac78a6d2f0569167

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 521a859  Automated deployment: c544090ad4a18e07353ba347ac78a6d2f0569167
521a859 is described below

commit 521a859f1021c30e44f3d91c3c32c34cf98a82b7
Author: github-actions[bot] <gi...@users.noreply.github.com>
AuthorDate: Wed Feb 16 02:02:21 2022 +0000

    Automated deployment: c544090ad4a18e07353ba347ac78a6d2f0569167
---
 en-us/docs/2.0.3/user_doc/guide/task/spark.html  |  63 +++++++++++++++-------
 en-us/docs/2.0.3/user_doc/guide/task/spark.json  |   2 +-
 en-us/docs/dev/user_doc/guide/task/spark.html    |  63 +++++++++++++++-------
 en-us/docs/dev/user_doc/guide/task/spark.json    |   2 +-
 en-us/docs/latest/user_doc/guide/task/spark.html |  63 +++++++++++++++-------
 en-us/docs/latest/user_doc/guide/task/spark.json |   2 +-
 img/tasks/demo/spark_task.png                    | Bin 0 -> 254286 bytes
 img/tasks/demo/upload_spark.png                  | Bin 0 -> 102091 bytes
 img/tasks/icons/spark.png                        | Bin 0 -> 1067 bytes
 python/_sources/tasks/condition.rst.txt          |   2 +-
 python/_sources/tasks/datax.rst.txt              |   2 +-
 python/_sources/tasks/dependent.rst.txt          |   2 +-
 python/_sources/tasks/flink.rst.txt              |   2 +-
 python/_sources/tasks/map_reduce.rst.txt         |   2 +-
 python/_sources/tasks/shell.rst.txt              |   2 +-
 python/_sources/tasks/spark.rst.txt              |   2 +-
 python/_sources/tasks/switch.rst.txt             |   2 +-
 python/_sources/tutorial.rst.txt                 |  14 ++---
 zh-cn/docs/2.0.3/user_doc/guide/task/spark.html  |  65 ++++++++++++++++-------
 zh-cn/docs/2.0.3/user_doc/guide/task/spark.json  |   2 +-
 zh-cn/docs/dev/user_doc/guide/task/spark.html    |  65 ++++++++++++++++-------
 zh-cn/docs/dev/user_doc/guide/task/spark.json    |   2 +-
 zh-cn/docs/latest/user_doc/guide/task/spark.html |  65 ++++++++++++++++-------
 zh-cn/docs/latest/user_doc/guide/task/spark.json |   2 +-
 24 files changed, 294 insertions(+), 132 deletions(-)
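
The Spark task pages updated below add a WordCount walkthrough submitted via spark-submit. As a hedged illustration only (not part of this commit), a minimal PySpark WordCount of the kind those docs describe could look roughly like this; the paths and application name are placeholders:

    # Sketch only: a minimal PySpark WordCount analogous to the documented example.
    # Input/output paths and the app name are hypothetical.
    from pyspark import SparkContext

    sc = SparkContext(appName="wordcount-demo")
    counts = (
        sc.textFile("/tmp/input/words.txt")         # read the input text
          .flatMap(lambda line: line.split())       # split each line into words
          .map(lambda word: (word, 1))              # pair every word with a count of 1
          .reduceByKey(lambda a, b: a + b)          # sum the counts per word
    )
    counts.saveAsTextFile("/tmp/output/wordcount")  # write (word, count) pairs
    sc.stop()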

diff --git a/en-us/docs/2.0.3/user_doc/guide/task/spark.html b/en-us/docs/2.0.3/user_doc/guide/task/spark.html
index 591a362..bec6f0a 100644
--- a/en-us/docs/2.0.3/user_doc/guide/task/spark.html
+++ b/en-us/docs/2.0.3/user_doc/guide/task/spark.html
@@ -10,29 +10,54 @@
   <link rel="stylesheet" href="/build/vendor.23870e5.css">
 </head>
 <body>
-  <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
+  <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
+<h2>Overview</h2>
+<p>The Spark task type is used to execute Spark programs. For Spark nodes, the worker submits the task with the <code>spark-submit</code> command. See <a href="https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit">spark-submit</a> for more details.</p>
+<h2>Create task</h2>
 <ul>
-<li>Through the SPARK node, you can directly execute the SPARK program. For the spark node, the worker will use the <code>spark-submit</code> method to submit tasks</li>
+<li>Click Project Management -&gt; Project Name -&gt; Workflow Definition, and click the &quot;Create Workflow&quot; button to enter the DAG editing page.</li>
+<li>Drag the <img src="/img/tasks/icons/spark.png" width="15"/> from the toolbar to the drawing board.</li>
 </ul>
-<blockquote>
-<p>Drag in the toolbar<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG">The task node to the drawing board, as shown in the following figure:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/spark-submit-en.png" width="80%" />
- </p>
+<h2>Task Parameter</h2>
 <ul>
-<li>Program type: supports JAVA, Scala and Python three languages</li>
-<li>The class of the main function: is the full path of the Spark program’s entry Main Class</li>
-<li>Main jar package: Spark jar package</li>
-<li>Deployment mode: support three modes of yarn-cluster, yarn-client and local</li>
-<li>Driver core number: You can set the number of Driver cores and the number of memory</li>
-<li>Number of Executors: You can set the number of Executors, the number of Executor memory, and the number of Executor cores</li>
-<li>Command line parameters: Set the input parameters of the Spark program and support the substitution of custom parameter variables.</li>
-<li>Other parameters: support --jars, --files, --archives, --conf format</li>
-<li>Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource</li>
-<li>User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with ${variable} in the script</li>
+<li><strong>Node name</strong>: The node name in a workflow definition is unique.</li>
+<li><strong>Run flag</strong>: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.</li>
+<li><strong>Descriptive information</strong>: describe the function of the node.</li>
+<li><strong>Task priority</strong>: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.</li>
+<li><strong>Worker grouping</strong>: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.</li>
+<li><strong>Environment Name</strong>: Configure the environment name in which to run the script.</li>
+<li><strong>Number of failed retry attempts</strong>: The number of times the task is resubmitted after a failure.</li>
+<li><strong>Failed retry interval</strong>: The interval, in minutes, before a failed task is resubmitted.</li>
+<li><strong>Delayed execution time</strong>: The time, in minutes, that task execution is delayed.</li>
+<li><strong>Timeout alarm</strong>: Check the timeout alarm and timeout failure. When the task exceeds the &quot;timeout period&quot;, an alarm email will be sent and the task execution will fail.</li>
+<li><strong>Program type</strong>: supports Java, Scala and Python.</li>
+<li><strong>Spark version</strong>: Supports Spark1 and Spark2.</li>
+<li><strong>The class of main function</strong>: The full path of the Main Class, the entry point of the Spark program.</li>
+<li><strong>Main jar package</strong>: The Spark jar package.</li>
+<li><strong>Deployment mode</strong>: Supports three modes: yarn-cluster, yarn-client and local.</li>
+<li><strong>Task name</strong> (optional): The name of the Spark task.</li>
+<li><strong>Driver core number</strong>: Sets the number of Driver cores, which can be set according to the actual production environment.</li>
+<li><strong>Driver memory number</strong>: Sets the amount of Driver memory, which can be set according to the actual production environment.</li>
+<li><strong>Number of Executor</strong>: Sets the number of Executors, which can be set according to the actual production environment.</li>
+<li><strong>Executor memory number</strong>: Sets the amount of Executor memory, which can be set according to the actual production environment.</li>
+<li><strong>Main program parameters</strong>: Sets the input parameters of the Spark program; supports the substitution of custom parameter variables.</li>
+<li><strong>Other parameters</strong>: Supports <code>--jars</code>, <code>--files</code>, <code>--archives</code> and <code>--conf</code> format.</li>
+<li><strong>Resource</strong>: The list of resource files that the task references; these files are uploaded or created in Resource Center - File Management.</li>
+<li><strong>Custom parameter</strong>: A user-defined parameter local to the Spark task, which replaces the ${variable} content in the script.</li>
+<li><strong>Predecessor task</strong>: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.</li>
 </ul>
-<p>Note: JAVA and Scala are only used for identification, there is no difference, if it is Spark developed by Python, there is no main function class, and the others are the same</p>
+<h2>Task Example</h2>
+<h3>Execute the WordCount program</h3>
+<p>This is a common introductory case in the big data ecosystem, often applied to computational frameworks such as MapReduce, Flink and Spark. The main purpose is to count the number of occurrences of each word in the input text.</p>
+<h4>Uploading the main package</h4>
+<p>When using the Spark task node, you will need to use the Resource Center to upload the jar package for the executable. Refer to the <a href="../resource.md">resource center</a>.</p>
+<p>After configuring the Resource Center, you can upload the required target files directly using drag and drop.</p>
+<p><img src="/img/tasks/demo/upload_spark.png" alt="resource_upload"></p>
+<h4>Configuring Spark nodes</h4>
+<p>Simply configure the required content according to the parameter descriptions above.</p>
+<p><img src="/img/tasks/demo/spark_task.png" alt="demo-spark-simple"></p>
+<h2>Notice</h2>
+<p>JAVA and Scala are only used for identification; there is no difference between them. If the Spark program is developed in Python, there is no class of the main function, and everything else is the same.</p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><a href="/en-us/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>Email List</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
   <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
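
For readers of the "Task Parameter" list above, the sketch below shows one plausible way those fields could map onto a spark-submit invocation on the worker. The flag values, class name and paths are invented for illustration; this is not the scheduler's actual submission code.

    import subprocess

    # Hypothetical mapping of the documented fields onto spark-submit flags.
    cmd = [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", "cluster",              # "Deployment mode": yarn-cluster
        "--class", "org.example.WordCount",      # "The class of main function"
        "--driver-cores", "1",                   # "Driver core number"
        "--driver-memory", "512M",               # "Driver memory number"
        "--num-executors", "2",                  # "Number of Executor"
        "--executor-memory", "2G",               # "Executor memory number"
        "--jars", "/tmp/dep.jar",                # "Other parameters": --jars
        "--conf", "spark.yarn.queue=default",    # "Other parameters": --conf
        "/tmp/wordcount.jar",                    # "Main jar package"
        "/tmp/input", "/tmp/output",             # "Main program parameters"
    ]
    subprocess.run(cmd, check=True)
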
diff --git a/en-us/docs/2.0.3/user_doc/guide/task/spark.json b/en-us/docs/2.0.3/user_doc/guide/task/spark.json
index a14accf..3989ac6 100644
--- a/en-us/docs/2.0.3/user_doc/guide/task/spark.json
+++ b/en-us/docs/2.0.3/user_doc/guide/task/spark.json
@@ -1,6 +1,6 @@
 {
   "filename": "spark.md",
-  "__html": "<h1>SPARK</h1>\n<ul>\n<li>Through the SPARK node, you can directly execute the SPARK program. For the spark node, the worker will use the <code>spark-submit</code> method to submit tasks</li>\n</ul>\n<blockquote>\n<p>Drag in the toolbar<img src=\"https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png\" alt=\"PNG\">The task node to the drawing board, as shown in the following figure:</p>\n</blockquote>\n<p align=\"center\">\n   <img src=\"/img/spark-submit- [...]
+  "__html": "<h1>Spark</h1>\n<h2>Overview</h2>\n<p>Spark task type for executing Spark programs. For Spark nodes, the worker submits the task by using the spark command <code>spark submit</code>. See <a href=\"https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit\">spark-submit</a> for more details.</p>\n<h2>Create task</h2>\n<ul>\n<li>Click Project Management -&gt; Project Name -&gt; Workflow Definition, and click the &quot;Create Work [...]
   "link": "/dist/en-us/docs/2.0.3/user_doc/guide/task/spark.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/dev/user_doc/guide/task/spark.html b/en-us/docs/dev/user_doc/guide/task/spark.html
index 0235580..48fb33f 100644
--- a/en-us/docs/dev/user_doc/guide/task/spark.html
+++ b/en-us/docs/dev/user_doc/guide/task/spark.html
@@ -10,29 +10,54 @@
   <link rel="stylesheet" href="/build/vendor.23870e5.css">
 </head>
 <body>
-  <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
+  <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
+<h2>Overview</h2>
+<p>The Spark task type is used to execute Spark programs. For Spark nodes, the worker submits the task with the <code>spark-submit</code> command. See <a href="https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit">spark-submit</a> for more details.</p>
+<h2>Create task</h2>
 <ul>
-<li>Through the SPARK node, you can directly execute the SPARK program. For the spark node, the worker will use the <code>spark-submit</code> method to submit tasks</li>
+<li>Click Project Management -&gt; Project Name -&gt; Workflow Definition, and click the &quot;Create Workflow&quot; button to enter the DAG editing page.</li>
+<li>Drag the <img src="/img/tasks/icons/spark.png" width="15"/> from the toolbar to the drawing board.</li>
 </ul>
-<blockquote>
-<p>Drag in the toolbar<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG">The task node to the drawing board, as shown in the following figure:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/spark-submit-en.png" width="80%" />
- </p>
+<h2>Task Parameter</h2>
 <ul>
-<li>Program type: supports JAVA, Scala and Python three languages</li>
-<li>The class of the main function: is the full path of the Spark program’s entry Main Class</li>
-<li>Main jar package: Spark jar package</li>
-<li>Deployment mode: support three modes of yarn-cluster, yarn-client and local</li>
-<li>Driver core number: You can set the number of Driver cores and the number of memory</li>
-<li>Number of Executors: You can set the number of Executors, the number of Executor memory, and the number of Executor cores</li>
-<li>Command line parameters: Set the input parameters of the Spark program and support the substitution of custom parameter variables.</li>
-<li>Other parameters: support --jars, --files, --archives, --conf format</li>
-<li>Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource</li>
-<li>User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with ${variable} in the script</li>
+<li><strong>Node name</strong>: The node name in a workflow definition is unique.</li>
+<li><strong>Run flag</strong>: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.</li>
+<li><strong>Descriptive information</strong>: describe the function of the node.</li>
+<li><strong>Task priority</strong>: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.</li>
+<li><strong>Worker grouping</strong>: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.</li>
+<li><strong>Environment Name</strong>: Configure the environment name in which to run the script.</li>
+<li><strong>Number of failed retry attempts</strong>: The number of times the task is resubmitted after a failure.</li>
+<li><strong>Failed retry interval</strong>: The interval, in minutes, before a failed task is resubmitted.</li>
+<li><strong>Delayed execution time</strong>: The time, in minutes, that task execution is delayed.</li>
+<li><strong>Timeout alarm</strong>: Check the timeout alarm and timeout failure. When the task exceeds the &quot;timeout period&quot;, an alarm email will be sent and the task execution will fail.</li>
+<li><strong>Program type</strong>: supports Java, Scala and Python.</li>
+<li><strong>Spark version</strong>: Supports Spark1 and Spark2.</li>
+<li><strong>The class of main function</strong>: The full path of the Main Class, the entry point of the Spark program.</li>
+<li><strong>Main jar package</strong>: The Spark jar package.</li>
+<li><strong>Deployment mode</strong>: Supports three modes: yarn-cluster, yarn-client and local.</li>
+<li><strong>Task name</strong> (optional): The name of the Spark task.</li>
+<li><strong>Driver core number</strong>: Sets the number of Driver cores, which can be set according to the actual production environment.</li>
+<li><strong>Driver memory number</strong>: Sets the amount of Driver memory, which can be set according to the actual production environment.</li>
+<li><strong>Number of Executor</strong>: Sets the number of Executors, which can be set according to the actual production environment.</li>
+<li><strong>Executor memory number</strong>: Sets the amount of Executor memory, which can be set according to the actual production environment.</li>
+<li><strong>Main program parameters</strong>: Sets the input parameters of the Spark program; supports the substitution of custom parameter variables.</li>
+<li><strong>Other parameters</strong>: Supports <code>--jars</code>, <code>--files</code>, <code>--archives</code> and <code>--conf</code> format.</li>
+<li><strong>Resource</strong>: The list of resource files that the task references; these files are uploaded or created in Resource Center - File Management.</li>
+<li><strong>Custom parameter</strong>: A user-defined parameter local to the Spark task, which replaces the ${variable} content in the script.</li>
+<li><strong>Predecessor task</strong>: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.</li>
 </ul>
-<p>Note: JAVA and Scala are only used for identification, there is no difference, if it is Spark developed by Python, there is no main function class, and the others are the same</p>
+<h2>Task Example</h2>
+<h3>Execute the WordCount program</h3>
+<p>This is a common introductory case in the big data ecosystem, often applied to computational frameworks such as MapReduce, Flink and Spark. The main purpose is to count the number of occurrences of each word in the input text.</p>
+<h4>Uploading the main package</h4>
+<p>When using the Spark task node, you will need to use the Resource Center to upload the jar package for the executable. Refer to the <a href="../resource.md">resource center</a>.</p>
+<p>After configuring the Resource Center, you can upload the required target files directly using drag and drop.</p>
+<p><img src="/img/tasks/demo/upload_spark.png" alt="resource_upload"></p>
+<h4>Configuring Spark nodes</h4>
+<p>Simply configure the required content according to the parameter descriptions above.</p>
+<p><img src="/img/tasks/demo/spark_task.png" alt="demo-spark-simple"></p>
+<h2>Notice</h2>
+<p>JAVA and Scala are only used for identification; there is no difference between them. If the Spark program is developed in Python, there is no class of the main function, and everything else is the same.</p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><a href="/en-us/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>Email List</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
   <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
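
The "Custom parameter" item above describes replacing ${variable} content in the script with user-defined values. Conceptually this behaves like Python's string.Template, which uses the same ${...} syntax; the snippet below is only an analogy with a made-up parameter name, not DolphinScheduler's internal substitution logic.

    from string import Template

    # Analogy only: a script fragment containing a ${variable} placeholder ...
    args_template = Template("--input /data/${dt}/words.txt")
    # ... is rendered by filling in the custom parameter before the task runs.
    print(args_template.safe_substitute(dt="2022-02-16"))
    # prints: --input /data/2022-02-16/words.txt
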
diff --git a/en-us/docs/dev/user_doc/guide/task/spark.json b/en-us/docs/dev/user_doc/guide/task/spark.json
index 90641ff..f89115f 100644
--- a/en-us/docs/dev/user_doc/guide/task/spark.json
+++ b/en-us/docs/dev/user_doc/guide/task/spark.json
@@ -1,6 +1,6 @@
 {
   "filename": "spark.md",
-  "__html": "<h1>SPARK</h1>\n<ul>\n<li>Through the SPARK node, you can directly execute the SPARK program. For the spark node, the worker will use the <code>spark-submit</code> method to submit tasks</li>\n</ul>\n<blockquote>\n<p>Drag in the toolbar<img src=\"https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png\" alt=\"PNG\">The task node to the drawing board, as shown in the following figure:</p>\n</blockquote>\n<p align=\"center\">\n   <img src=\"/img/spark-submit- [...]
+  "__html": "<h1>Spark</h1>\n<h2>Overview</h2>\n<p>Spark task type for executing Spark programs. For Spark nodes, the worker submits the task by using the spark command <code>spark submit</code>. See <a href=\"https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit\">spark-submit</a> for more details.</p>\n<h2>Create task</h2>\n<ul>\n<li>Click Project Management -&gt; Project Name -&gt; Workflow Definition, and click the &quot;Create Work [...]
   "link": "/dist/en-us/docs/dev/user_doc/guide/task/spark.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/guide/task/spark.html b/en-us/docs/latest/user_doc/guide/task/spark.html
index 591a362..bec6f0a 100644
--- a/en-us/docs/latest/user_doc/guide/task/spark.html
+++ b/en-us/docs/latest/user_doc/guide/task/spark.html
@@ -10,29 +10,54 @@
   <link rel="stylesheet" href="/build/vendor.23870e5.css">
 </head>
 <body>
-  <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
+  <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/en-us/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">中</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant- [...]
+<h2>Overview</h2>
+<p>The Spark task type is used to execute Spark programs. For Spark nodes, the worker submits the task with the <code>spark-submit</code> command. See <a href="https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit">spark-submit</a> for more details.</p>
+<h2>Create task</h2>
 <ul>
-<li>Through the SPARK node, you can directly execute the SPARK program. For the spark node, the worker will use the <code>spark-submit</code> method to submit tasks</li>
+<li>Click Project Management -&gt; Project Name -&gt; Workflow Definition, and click the &quot;Create Workflow&quot; button to enter the DAG editing page.</li>
+<li>Drag the <img src="/img/tasks/icons/spark.png" width="15"/> from the toolbar to the drawing board.</li>
 </ul>
-<blockquote>
-<p>Drag in the toolbar<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG">The task node to the drawing board, as shown in the following figure:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/spark-submit-en.png" width="80%" />
- </p>
+<h2>Task Parameter</h2>
 <ul>
-<li>Program type: supports JAVA, Scala and Python three languages</li>
-<li>The class of the main function: is the full path of the Spark program’s entry Main Class</li>
-<li>Main jar package: Spark jar package</li>
-<li>Deployment mode: support three modes of yarn-cluster, yarn-client and local</li>
-<li>Driver core number: You can set the number of Driver cores and the number of memory</li>
-<li>Number of Executors: You can set the number of Executors, the number of Executor memory, and the number of Executor cores</li>
-<li>Command line parameters: Set the input parameters of the Spark program and support the substitution of custom parameter variables.</li>
-<li>Other parameters: support --jars, --files, --archives, --conf format</li>
-<li>Resource: If the resource file is referenced in other parameters, you need to select and specify in the resource</li>
-<li>User-defined parameter: It is a user-defined parameter of the MR part, which will replace the content with ${variable} in the script</li>
+<li><strong>Node name</strong>: The node name in a workflow definition is unique.</li>
+<li><strong>Run flag</strong>: Identifies whether this node can be scheduled normally, if it does not need to be executed, you can turn on the prohibition switch.</li>
+<li><strong>Descriptive information</strong>: describe the function of the node.</li>
+<li><strong>Task priority</strong>: When the number of worker threads is insufficient, they are executed in order from high to low, and when the priority is the same, they are executed according to the first-in first-out principle.</li>
+<li><strong>Worker grouping</strong>: Tasks are assigned to the machines of the worker group to execute. If Default is selected, a worker machine will be randomly selected for execution.</li>
+<li><strong>Environment Name</strong>: Configure the environment name in which to run the script.</li>
+<li><strong>Number of failed retry attempts</strong>: The number of times the task is resubmitted after a failure.</li>
+<li><strong>Failed retry interval</strong>: The interval, in minutes, before a failed task is resubmitted.</li>
+<li><strong>Delayed execution time</strong>: The time, in minutes, that task execution is delayed.</li>
+<li><strong>Timeout alarm</strong>: Check the timeout alarm and timeout failure. When the task exceeds the &quot;timeout period&quot;, an alarm email will be sent and the task execution will fail.</li>
+<li><strong>Program type</strong>: supports Java, Scala and Python.</li>
+<li><strong>Spark version</strong>: Supports Spark1 and Spark2.</li>
+<li><strong>The class of main function</strong>: The full path of the Main Class, the entry point of the Spark program.</li>
+<li><strong>Main jar package</strong>: The Spark jar package.</li>
+<li><strong>Deployment mode</strong>: Supports three modes: yarn-cluster, yarn-client and local.</li>
+<li><strong>Task name</strong> (optional): The name of the Spark task.</li>
+<li><strong>Driver core number</strong>: Sets the number of Driver cores, which can be set according to the actual production environment.</li>
+<li><strong>Driver memory number</strong>: Sets the amount of Driver memory, which can be set according to the actual production environment.</li>
+<li><strong>Number of Executor</strong>: Sets the number of Executors, which can be set according to the actual production environment.</li>
+<li><strong>Executor memory number</strong>: Sets the amount of Executor memory, which can be set according to the actual production environment.</li>
+<li><strong>Main program parameters</strong>: Sets the input parameters of the Spark program; supports the substitution of custom parameter variables.</li>
+<li><strong>Other parameters</strong>: Supports <code>--jars</code>, <code>--files</code>, <code>--archives</code> and <code>--conf</code> format.</li>
+<li><strong>Resource</strong>: The list of resource files that the task references; these files are uploaded or created in Resource Center - File Management.</li>
+<li><strong>Custom parameter</strong>: A user-defined parameter local to the Spark task, which replaces the ${variable} content in the script.</li>
+<li><strong>Predecessor task</strong>: Selecting a predecessor task for the current task will set the selected predecessor task as upstream of the current task.</li>
 </ul>
-<p>Note: JAVA and Scala are only used for identification, there is no difference, if it is Spark developed by Python, there is no main function class, and the others are the same</p>
+<h2>Task Example</h2>
+<h3>Execute the WordCount program</h3>
+<p>This is a common introductory case in the big data ecosystem, often applied to computational frameworks such as MapReduce, Flink and Spark. The main purpose is to count the number of occurrences of each word in the input text.</p>
+<h4>Uploading the main package</h4>
+<p>When using the Spark task node, you will need to use the Resource Center to upload the jar package for the executable. Refer to the <a href="../resource.md">resource center</a>.</p>
+<p>After configuring the Resource Center, you can upload the required target files directly using drag and drop.</p>
+<p><img src="/img/tasks/demo/upload_spark.png" alt="resource_upload"></p>
+<h4>Configuring Spark nodes</h4>
+<p>Simply configure the required content according to the parameter descriptions above.</p>
+<p><img src="/img/tasks/demo/spark_task.png" alt="demo-spark-simple"></p>
+<h2>Notice</h2>
+<p>JAVA and Scala are only used for identification; there is no difference between them. If the Spark program is developed in Python, there is no class of the main function, and everything else is the same.</p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>About us</h3><h4>Do you need feedback? Please contact us through the following ways.</h4></div><div class="contact-container"><ul><li><a href="/en-us/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>Email List</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
   <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
diff --git a/en-us/docs/latest/user_doc/guide/task/spark.json b/en-us/docs/latest/user_doc/guide/task/spark.json
index a14accf..3989ac6 100644
--- a/en-us/docs/latest/user_doc/guide/task/spark.json
+++ b/en-us/docs/latest/user_doc/guide/task/spark.json
@@ -1,6 +1,6 @@
 {
   "filename": "spark.md",
-  "__html": "<h1>SPARK</h1>\n<ul>\n<li>Through the SPARK node, you can directly execute the SPARK program. For the spark node, the worker will use the <code>spark-submit</code> method to submit tasks</li>\n</ul>\n<blockquote>\n<p>Drag in the toolbar<img src=\"https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png\" alt=\"PNG\">The task node to the drawing board, as shown in the following figure:</p>\n</blockquote>\n<p align=\"center\">\n   <img src=\"/img/spark-submit- [...]
+  "__html": "<h1>Spark</h1>\n<h2>Overview</h2>\n<p>Spark task type for executing Spark programs. For Spark nodes, the worker submits the task by using the spark command <code>spark submit</code>. See <a href=\"https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit\">spark-submit</a> for more details.</p>\n<h2>Create task</h2>\n<ul>\n<li>Click Project Management -&gt; Project Name -&gt; Workflow Definition, and click the &quot;Create Work [...]
   "link": "/dist/en-us/docs/2.0.3/user_doc/guide/task/spark.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/img/tasks/demo/spark_task.png b/img/tasks/demo/spark_task.png
new file mode 100644
index 0000000..cdd66a5
Binary files /dev/null and b/img/tasks/demo/spark_task.png differ
diff --git a/img/tasks/demo/upload_spark.png b/img/tasks/demo/upload_spark.png
new file mode 100644
index 0000000..d644ffe
Binary files /dev/null and b/img/tasks/demo/upload_spark.png differ
diff --git a/img/tasks/icons/spark.png b/img/tasks/icons/spark.png
new file mode 100644
index 0000000..e9ff012
Binary files /dev/null and b/img/tasks/icons/spark.png differ
diff --git a/python/_sources/tasks/condition.rst.txt b/python/_sources/tasks/condition.rst.txt
index 57aff8e..20b0350 100644
--- a/python/_sources/tasks/condition.rst.txt
+++ b/python/_sources/tasks/condition.rst.txt
@@ -23,7 +23,7 @@ A condition task type's example and dive into information of **PyDolphinSchedule
 Example
 -------
 
-.. literalinclude:: ../../../examples/task_condition_example.py
+.. literalinclude:: ../../../src/pydolphinscheduler/examples/task_condition_example.py
    :start-after: [start workflow_declare]
    :end-before: [end workflow_declare]
 
diff --git a/python/_sources/tasks/datax.rst.txt b/python/_sources/tasks/datax.rst.txt
index da9e4ce..c077269 100644
--- a/python/_sources/tasks/datax.rst.txt
+++ b/python/_sources/tasks/datax.rst.txt
@@ -23,7 +23,7 @@ A DataX task type's example and dive into information of **PyDolphinScheduler**.
 Example
 -------
 
-.. literalinclude:: ../../../examples/task_datax_example.py
+.. literalinclude:: ../../../src/pydolphinscheduler/examples/task_datax_example.py
    :start-after: [start workflow_declare]
    :end-before: [end workflow_declare]
 
diff --git a/python/_sources/tasks/dependent.rst.txt b/python/_sources/tasks/dependent.rst.txt
index fb3b3e2..fe26d0f 100644
--- a/python/_sources/tasks/dependent.rst.txt
+++ b/python/_sources/tasks/dependent.rst.txt
@@ -23,7 +23,7 @@ A dependent task type's example and dive into information of **PyDolphinSchedule
 Example
 -------
 
-.. literalinclude:: ../../../examples/task_dependent_example.py
+.. literalinclude:: ../../../src/pydolphinscheduler/examples/task_dependent_example.py
    :start-after: [start workflow_declare]
    :end-before: [end workflow_declare]
 
diff --git a/python/_sources/tasks/flink.rst.txt b/python/_sources/tasks/flink.rst.txt
index 9eadd8f..8db9ac2 100644
--- a/python/_sources/tasks/flink.rst.txt
+++ b/python/_sources/tasks/flink.rst.txt
@@ -23,7 +23,7 @@ A flink task type's example and dive into information of **PyDolphinScheduler**.
 Example
 -------
 
-.. literalinclude:: ../../../examples/task_flink_example.py
+.. literalinclude:: ../../../src/pydolphinscheduler/examples/task_flink_example.py
    :start-after: [start workflow_declare]
    :end-before: [end workflow_declare]
 
diff --git a/python/_sources/tasks/map_reduce.rst.txt b/python/_sources/tasks/map_reduce.rst.txt
index c46f4c6..068b8d8 100644
--- a/python/_sources/tasks/map_reduce.rst.txt
+++ b/python/_sources/tasks/map_reduce.rst.txt
@@ -24,7 +24,7 @@ A Map Reduce task type's example and dive into information of **PyDolphinSchedul
 Example
 -------
 
-.. literalinclude:: ../../../examples/task_map_reduce_example.py
+.. literalinclude:: ../../../src/pydolphinscheduler/examples/task_map_reduce_example.py
    :start-after: [start workflow_declare]
    :end-before: [end workflow_declare]
 
diff --git a/python/_sources/tasks/shell.rst.txt b/python/_sources/tasks/shell.rst.txt
index 6497f0f..5ce16c3 100644
--- a/python/_sources/tasks/shell.rst.txt
+++ b/python/_sources/tasks/shell.rst.txt
@@ -23,7 +23,7 @@ A shell task type's example and dive into information of **PyDolphinScheduler**.
 Example
 -------
 
-.. literalinclude:: ../../../examples/tutorial.py
+.. literalinclude:: ../../../src/pydolphinscheduler/examples/tutorial.py
    :start-after: [start workflow_declare]
    :end-before: [end task_relation_declare]
 
diff --git a/python/_sources/tasks/spark.rst.txt b/python/_sources/tasks/spark.rst.txt
index 10fd672..cdb5902 100644
--- a/python/_sources/tasks/spark.rst.txt
+++ b/python/_sources/tasks/spark.rst.txt
@@ -23,7 +23,7 @@ A spark task type's example and dive into information of **PyDolphinScheduler**.
 Example
 -------
 
-.. literalinclude:: ../../../examples/task_spark_example.py
+.. literalinclude:: ../../../src/pydolphinscheduler/examples/task_spark_example.py
    :start-after: [start workflow_declare]
    :end-before: [end workflow_declare]
 
diff --git a/python/_sources/tasks/switch.rst.txt b/python/_sources/tasks/switch.rst.txt
index 271050e..d8b34a4 100644
--- a/python/_sources/tasks/switch.rst.txt
+++ b/python/_sources/tasks/switch.rst.txt
@@ -23,7 +23,7 @@ A switch task type's example and dive into information of **PyDolphinScheduler**
 Example
 -------
 
-.. literalinclude:: ../../../examples/task_switch_example.py
+.. literalinclude:: ../../../src/pydolphinscheduler/examples/task_switch_example.py
    :start-after: [start workflow_declare]
    :end-before: [end workflow_declare]
 
diff --git a/python/_sources/tutorial.rst.txt b/python/_sources/tutorial.rst.txt
index a7c9e58..83c5746 100644
--- a/python/_sources/tutorial.rst.txt
+++ b/python/_sources/tutorial.rst.txt
@@ -29,7 +29,7 @@ Overview of Tutorial
 Here is an overview of our tutorial; it may look a little complex, but do not
 worry about that, because we explain this example below in as much detail as possible.
 
-.. literalinclude:: ../../examples/tutorial.py
+.. literalinclude:: ../../src/pydolphinscheduler/examples/tutorial.py
    :start-after: [start tutorial]
    :end-before: [end tutorial]
 
@@ -41,7 +41,7 @@ like other Python package. We just create a minimum demo here, so we just import
 :class:`pydolphinscheduler.core.process_definition` and
 :class:`pydolphinscheduler.tasks.shell`.
 
-.. literalinclude:: ../../examples/tutorial.py
+.. literalinclude:: ../../src/pydolphinscheduler/examples/tutorial.py
    :start-after: [start package_import]
    :end-before: [end package_import]
 
@@ -60,7 +60,7 @@ interval and schedule start_time, and argument `tenant` which changing workflow'
 task running user in the worker, :ref:`section tenant <concept:tenant>` in *PyDolphinScheduler*
 :doc:`concept` page has more detailed information.
 
-.. literalinclude:: ../../examples/tutorial.py
+.. literalinclude:: ../../src/pydolphinscheduler/examples/tutorial.py
    :start-after: [start workflow_declare]
    :end-before: [end workflow_declare]
 
@@ -77,7 +77,7 @@ Here we declare four tasks, and bot of them are simple task of
 Besides the argument `command`, we also need to set the argument `name` for each task *(not
 only the shell task; `name` is required for every type of task)*.
 
-.. literalinclude:: ../../examples/tutorial.py
+.. literalinclude:: ../../src/pydolphinscheduler/examples/tutorial.py
    :dedent: 0
    :start-after: [start task_declare]
    :end-before: [end task_declare]
@@ -99,7 +99,7 @@ In this example, we set task `task_parent` is the upstream task of task
 `task_child_one` and `task_child_two`, and task `task_union` is the downstream
 task of both of these two tasks.
 
-.. literalinclude:: ../../examples/tutorial.py
+.. literalinclude:: ../../src/pydolphinscheduler/examples/tutorial.py
    :dedent: 0
    :start-after: [start task_relation_declare]
    :end-before: [end task_relation_declare]
@@ -124,7 +124,7 @@ to the daemon, and set the schedule time we just declare in `process definition
 Now, we can run the Python code like any other Python script; for basic usage, run
 :code:`python tutorial.py` to trigger and run it.
 
-.. literalinclude:: ../../examples/tutorial.py
+.. literalinclude:: ../../src/pydolphinscheduler/examples/tutorial.py
    :dedent: 0
    :start-after: [start submit_or_run]
    :end-before: [end submit_or_run]
@@ -142,7 +142,7 @@ After we run the tutorial code, you could login Apache DolphinScheduler web UI,
 go and see the `DolphinScheduler project page`_. There is a new process definition
 created and named "Tutorial". It was created by *PyDolphinScheduler*, and the DAG graph is as below.
 
-.. literalinclude:: ../../examples/tutorial.py
+.. literalinclude:: ../../src/pydolphinscheduler/examples/tutorial.py
    :language: text
    :lines: 24-28
 
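The tutorial sections above cite :class:`pydolphinscheduler.core.process_definition` and :class:`pydolphinscheduler.tasks.shell`. As a rough sketch only (the authoritative code lives in the relocated src/pydolphinscheduler/examples/tutorial.py, and the names and arguments here may not match it exactly), the declared workflow has roughly this shape:

    # Rough sketch of the tutorial workflow; names and arguments are illustrative.
    from pydolphinscheduler.core.process_definition import ProcessDefinition
    from pydolphinscheduler.tasks.shell import Shell

    with ProcessDefinition(name="tutorial", tenant="tenant_exists") as pd:
        task_parent = Shell(name="task_parent", command="echo hello pydolphinscheduler")
        task_child_one = Shell(name="task_child_one", command="echo 'child one'")
        task_child_two = Shell(name="task_child_two", command="echo 'child two'")
        task_union = Shell(name="task_union", command="echo union")

        # task_parent runs before both children; task_union runs after both.
        task_parent >> task_child_one
        task_parent >> task_child_two
        task_child_one >> task_union
        task_child_two >> task_union

        pd.run()
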
diff --git a/zh-cn/docs/2.0.3/user_doc/guide/task/spark.html b/zh-cn/docs/2.0.3/user_doc/guide/task/spark.html
index c4fadfe..fef20b8 100644
--- a/zh-cn/docs/2.0.3/user_doc/guide/task/spark.html
+++ b/zh-cn/docs/2.0.3/user_doc/guide/task/spark.html
@@ -11,28 +11,57 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/zh-cn/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">En</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant [...]
+<h2>综述</h2>
+<p>Spark 任务类型,用于执行 Spark 程序。对于 Spark 节点,worker 会通过使用 <code>spark-submit</code> 命令的方式提交任务。更多详情查看 <a href="https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit">spark-submit</a>。</p>
+<h2>创建任务</h2>
 <ul>
-<li>通过SPARK节点,可以直接直接执行SPARK程序,对于spark节点,worker会使用<code>spark-submit</code>方式提交任务</li>
+<li>
+<p>点击项目管理 -&gt; 项目名称 -&gt; 工作流定义,点击”创建工作流”按钮,进入 DAG 编辑页面:</p>
+</li>
+<li>
+<p>拖动工具栏的 <img src="/img/tasks/icons/spark.png" width="15"/> 任务节点到画板中。</p>
+</li>
 </ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG">任务节点到画板中,如下图所示:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/spark_edit.png" width="80%" />
- </p>
+<h2>任务参数</h2>
 <ul>
-<li>程序类型:支持JAVA、Scala和Python三种语言</li>
-<li>主函数的class:是Spark程序的入口Main Class的全路径</li>
-<li>主jar包:是Spark的jar包</li>
-<li>部署方式:支持yarn-cluster、yarn-client和local三种模式</li>
-<li>Driver内核数:可以设置Driver内核数及内存数</li>
-<li>Executor数量:可以设置Executor数量、Executor内存数和Executor内核数</li>
-<li>命令行参数:是设置Spark程序的输入参数,支持自定义参数变量的替换。</li>
-<li>其他参数:支持 --jars、--files、--archives、--conf格式</li>
-<li>资源:如果其他参数中引用了资源文件,需要在资源中选择指定</li>
-<li>自定义参数:是MR局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
+<li>节点名称:设置任务的名称。一个工作流定义中的节点名称是唯一的。</li>
+<li>运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。</li>
+<li>描述:描述该节点的功能。</li>
+<li>任务优先级:worker 线程数不足时,根据优先级从高到低依次执行,优先级一样时根据先进先出原则执行。</li>
+<li>Worker 分组:任务分配给 worker 组的机器执行,选择 Default 会随机选择一台 worker 机执行。</li>
+<li>环境名称:配置运行脚本的环境。</li>
+<li>失败重试次数:任务失败重新提交的次数。</li>
+<li>失败重试间隔:任务失败重新提交任务的时间间隔,以分为单位。</li>
+<li>延迟执行时间:任务延迟执行的时间,以分为单位。</li>
+<li>超时警告:勾选超时警告、超时失败,当任务超过“超时时长”后,会发送告警邮件并且任务执行失败。</li>
+<li>程序类型:支持 Java、Scala 和 Python 三种语言。</li>
+<li>Spark 版本:支持 Spark1 和 Spark2。</li>
+<li>主函数的 Class:Spark 程序的入口 Main class 的全路径。</li>
+<li>主程序包:执行 Spark 程序的 jar 包(通过资源中心上传)。</li>
+<li>部署方式:支持 yarn-cluster、yarn-client 和 local 三种模式。</li>
+<li>任务名称(可选):Spark 程序的名称。</li>
+<li>Driver 核心数:用于设置 Driver 内核数,可根据实际生产环境设置对应的核心数。</li>
+<li>Driver 内存数:用于设置 Driver 内存数,可根据实际生产环境设置对应的内存数。</li>
+<li>Executor 数量:用于设置 Executor 的数量,可根据实际生产环境设置对应的数量。</li>
+<li>Executor 内存数:用于设置 Executor 内存数,可根据实际生产环境设置对应的内存数。</li>
+<li>主程序参数:设置 Spark 程序的输入参数,支持自定义参数变量的替换。</li>
+<li>选项参数:支持 <code>--jars</code>、<code>--files</code>、<code>--archives</code>、<code>--conf</code> 格式。</li>
+<li>资源:如果其他参数中引用了资源文件,需要在资源中选择指定。</li>
+<li>自定义参数:是 Spark 局部的用户自定义参数,会替换脚本中以 ${变量} 的内容。</li>
+<li>前置任务:选择当前任务的前置任务,会将被选择的前置任务设置为当前任务的上游。</li>
 </ul>
-<p>注意:JAVA和Scala只是用来标识,没有区别,如果是Python开发的Spark则没有主函数的class,其他都是一样</p>
+<h2>任务样例</h2>
+<h3>执行 WordCount 程序</h3>
+<p>本案例为大数据生态中常见的入门案例,常应用于 MapReduce、Flink、Spark 等计算框架。主要为统计输入的文本中,相同的单词的数量有多少。</p>
+<h4>上传主程序包</h4>
+<p>在使用 Spark 任务节点时,需要利用资源中心上传执行程序的 jar 包,可参考<a href="../resource.md">资源中心</a>。</p>
+<p>当配置完成资源中心之后,直接使用拖拽的方式,即可上传所需目标文件。</p>
+<p><img src="/img/tasks/demo/upload_spark.png" alt="resource_upload"></p>
+<h4>配置 Spark 节点</h4>
+<p>根据上述参数说明,配置所需的内容即可。</p>
+<p><img src="/img/tasks/demo/spark_task.png" alt="demo-spark-simple"></p>
+<h2>注意事项:</h2>
+<p>注意:JAVA 和 Scala 只是用来标识,没有区别,如果是 Python 开发的 Spark 则没有主函数的 class,其他都是一样。</p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>联系我们</h3><h4>有问题需要反馈?请通过以下方式联系我们。</h4></div><div class="contact-container"><ul><li><a href="/zh-cn/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>邮件列表</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterblue.png"/><p [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
   <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
diff --git a/zh-cn/docs/2.0.3/user_doc/guide/task/spark.json b/zh-cn/docs/2.0.3/user_doc/guide/task/spark.json
index 3d326c8..1b06c92 100644
--- a/zh-cn/docs/2.0.3/user_doc/guide/task/spark.json
+++ b/zh-cn/docs/2.0.3/user_doc/guide/task/spark.json
@@ -1,6 +1,6 @@
 {
   "filename": "spark.md",
-  "__html": "<h1>SPARK节点</h1>\n<ul>\n<li>通过SPARK节点,可以直接直接执行SPARK程序,对于spark节点,worker会使用<code>spark-submit</code>方式提交任务</li>\n</ul>\n<blockquote>\n<p>拖动工具栏中的<img src=\"https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png\" alt=\"PNG\">任务节点到画板中,如下图所示:</p>\n</blockquote>\n<p align=\"center\">\n   <img src=\"/img/spark_edit.png\" width=\"80%\" />\n </p>\n<ul>\n<li>程序类型:支持JAVA、Scala和Python三种语言</li>\n<li>主函数的class:是Spark程序的入口Main Class的全路径</li>\n<li>主jar包:是Spark的jar包</li>\n [...]
+  "__html": "<h1>SPARK节点</h1>\n<h2>综述</h2>\n<p>Spark  任务类型,用于执行 Spark 程序。对于 Spark 节点,worker 会通关使用 spark 命令 <code>spark submit</code> 方式提交任务。更多详情查看 <a href=\"https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit\">spark-submit</a>。</p>\n<h2>创建任务</h2>\n<ul>\n<li>\n<p>点击项目管理 -&gt; 项目名称 -&gt; 工作流定义,点击”创建工作流”按钮,进入 DAG 编辑页面:</p>\n</li>\n<li>\n<p>拖动工具栏的 <img src=\"/img/tasks/icons/spark.png\" width=\"15\"/> 任务节点到画板中。</p>\n</li>\n</ul>\n<h2>任务参 [...]
   "link": "/dist/zh-cn/docs/2.0.3/user_doc/guide/task/spark.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/dev/user_doc/guide/task/spark.html b/zh-cn/docs/dev/user_doc/guide/task/spark.html
index 5306ccf..f7080e1 100644
--- a/zh-cn/docs/dev/user_doc/guide/task/spark.html
+++ b/zh-cn/docs/dev/user_doc/guide/task/spark.html
@@ -11,28 +11,57 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/zh-cn/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">En</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant [...]
+<h2>综述</h2>
+<p>Spark 任务类型,用于执行 Spark 程序。对于 Spark 节点,worker 会通过使用 <code>spark-submit</code> 命令的方式提交任务。更多详情查看 <a href="https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit">spark-submit</a>。</p>
+<h2>创建任务</h2>
 <ul>
-<li>通过SPARK节点,可以直接直接执行SPARK程序,对于spark节点,worker会使用<code>spark-submit</code>方式提交任务</li>
+<li>
+<p>点击项目管理 -&gt; 项目名称 -&gt; 工作流定义,点击”创建工作流”按钮,进入 DAG 编辑页面:</p>
+</li>
+<li>
+<p>拖动工具栏的 <img src="/img/tasks/icons/spark.png" width="15"/> 任务节点到画板中。</p>
+</li>
 </ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG">任务节点到画板中,如下图所示:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/spark_edit.png" width="80%" />
- </p>
+<h2>任务参数</h2>
 <ul>
-<li>程序类型:支持JAVA、Scala和Python三种语言</li>
-<li>主函数的class:是Spark程序的入口Main Class的全路径</li>
-<li>主jar包:是Spark的jar包</li>
-<li>部署方式:支持yarn-cluster、yarn-client和local三种模式</li>
-<li>Driver内核数:可以设置Driver内核数及内存数</li>
-<li>Executor数量:可以设置Executor数量、Executor内存数和Executor内核数</li>
-<li>命令行参数:是设置Spark程序的输入参数,支持自定义参数变量的替换。</li>
-<li>其他参数:支持 --jars、--files、--archives、--conf格式</li>
-<li>资源:如果其他参数中引用了资源文件,需要在资源中选择指定</li>
-<li>自定义参数:是MR局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
+<li>节点名称:设置任务的名称。一个工作流定义中的节点名称是唯一的。</li>
+<li>运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。</li>
+<li>描述:描述该节点的功能。</li>
+<li>任务优先级:worker 线程数不足时,根据优先级从高到低依次执行,优先级一样时根据先进先出原则执行。</li>
+<li>Worker 分组:任务分配给 worker 组的机器执行,选择 Default 会随机选择一台 worker 机执行。</li>
+<li>环境名称:配置运行脚本的环境。</li>
+<li>失败重试次数:任务失败重新提交的次数。</li>
+<li>失败重试间隔:任务失败重新提交任务的时间间隔,以分为单位。</li>
+<li>延迟执行时间:任务延迟执行的时间,以分为单位。</li>
+<li>超时警告:勾选超时警告、超时失败,当任务超过“超时时长”后,会发送告警邮件并且任务执行失败。</li>
+<li>程序类型:支持 Java、Scala 和 Python 三种语言。</li>
+<li>Spark 版本:支持 Spark1 和 Spark2。</li>
+<li>主函数的 Class:Spark 程序的入口 Main class 的全路径。</li>
+<li>主程序包:执行 Spark 程序的 jar 包(通过资源中心上传)。</li>
+<li>部署方式:支持 yarn-cluster、yarn-client 和 local 三种模式。</li>
+<li>任务名称(可选):Spark 程序的名称。</li>
+<li>Driver 核心数:用于设置 Driver 内核数,可根据实际生产环境设置对应的核心数。</li>
+<li>Driver 内存数:用于设置 Driver 内存数,可根据实际生产环境设置对应的内存数。</li>
+<li>Executor 数量:用于设置 Executor 的数量,可根据实际生产环境设置对应的数量。</li>
+<li>Executor 内存数:用于设置 Executor 内存数,可根据实际生产环境设置对应的内存数。</li>
+<li>主程序参数:设置 Spark 程序的输入参数,支持自定义参数变量的替换。</li>
+<li>选项参数:支持 <code>--jars</code>、<code>--files</code>、<code>--archives</code>、<code>--conf</code> 格式。</li>
+<li>资源:如果其他参数中引用了资源文件,需要在资源中选择指定。</li>
+<li>自定义参数:是 Spark 局部的用户自定义参数,会替换脚本中以 ${变量} 的内容。</li>
+<li>前置任务:选择当前任务的前置任务,会将被选择的前置任务设置为当前任务的上游。</li>
 </ul>
-<p>注意:JAVA和Scala只是用来标识,没有区别,如果是Python开发的Spark则没有主函数的class,其他都是一样</p>
+<h2>任务样例</h2>
+<h3>执行 WordCount 程序</h3>
+<p>本案例为大数据生态中常见的入门案例,常应用于 MapReduce、Flink、Spark 等计算框架。主要为统计输入的文本中,相同的单词的数量有多少。</p>
+<h4>上传主程序包</h4>
+<p>在使用 Spark 任务节点时,需要利用资源中心上传执行程序的 jar 包,可参考<a href="../resource.md">资源中心</a>。</p>
+<p>当配置完成资源中心之后,直接使用拖拽的方式,即可上传所需目标文件。</p>
+<p><img src="/img/tasks/demo/upload_spark.png" alt="resource_upload"></p>
+<h4>配置 Spark 节点</h4>
+<p>根据上述参数说明,配置所需的内容即可。</p>
+<p><img src="/img/tasks/demo/spark_task.png" alt="demo-spark-simple"></p>
+<h2>注意事项:</h2>
+<p>注意:JAVA 和 Scala 只是用来标识,没有区别,如果是 Python 开发的 Spark 则没有主函数的 class,其他都是一样。</p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>联系我们</h3><h4>有问题需要反馈?请通过以下方式联系我们。</h4></div><div class="contact-container"><ul><li><a href="/zh-cn/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>邮件列表</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterblue.png"/><p [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
   <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
diff --git a/zh-cn/docs/dev/user_doc/guide/task/spark.json b/zh-cn/docs/dev/user_doc/guide/task/spark.json
index e614c93..a8552ee 100644
--- a/zh-cn/docs/dev/user_doc/guide/task/spark.json
+++ b/zh-cn/docs/dev/user_doc/guide/task/spark.json
@@ -1,6 +1,6 @@
 {
   "filename": "spark.md",
-  "__html": "<h1>SPARK节点</h1>\n<ul>\n<li>通过SPARK节点,可以直接直接执行SPARK程序,对于spark节点,worker会使用<code>spark-submit</code>方式提交任务</li>\n</ul>\n<blockquote>\n<p>拖动工具栏中的<img src=\"https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png\" alt=\"PNG\">任务节点到画板中,如下图所示:</p>\n</blockquote>\n<p align=\"center\">\n   <img src=\"/img/spark_edit.png\" width=\"80%\" />\n </p>\n<ul>\n<li>程序类型:支持JAVA、Scala和Python三种语言</li>\n<li>主函数的class:是Spark程序的入口Main Class的全路径</li>\n<li>主jar包:是Spark的jar包</li>\n [...]
+  "__html": "<h1>SPARK节点</h1>\n<h2>综述</h2>\n<p>Spark  任务类型,用于执行 Spark 程序。对于 Spark 节点,worker 会通关使用 spark 命令 <code>spark submit</code> 方式提交任务。更多详情查看 <a href=\"https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit\">spark-submit</a>。</p>\n<h2>创建任务</h2>\n<ul>\n<li>\n<p>点击项目管理 -&gt; 项目名称 -&gt; 工作流定义,点击”创建工作流”按钮,进入 DAG 编辑页面:</p>\n</li>\n<li>\n<p>拖动工具栏的 <img src=\"/img/tasks/icons/spark.png\" width=\"15\"/> 任务节点到画板中。</p>\n</li>\n</ul>\n<h2>任务参 [...]
   "link": "/dist/zh-cn/docs/dev/user_doc/guide/task/spark.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/latest/user_doc/guide/task/spark.html b/zh-cn/docs/latest/user_doc/guide/task/spark.html
index c4fadfe..fef20b8 100644
--- a/zh-cn/docs/latest/user_doc/guide/task/spark.html
+++ b/zh-cn/docs/latest/user_doc/guide/task/spark.html
@@ -11,28 +11,57 @@
 </head>
 <body>
   <div id="root"><div class="md2html docs-page" data-reactroot=""><header class="header-container header-container-dark"><div class="header-body"><span class="mobile-menu-btn mobile-menu-btn-dark"></span><a href="/zh-cn/index.html"><img class="logo" src="/img/hlogo_white.svg"/></a><div class="search search-dark"><span class="icon-search"></span></div><span class="language-switch language-switch-dark">En</span><div class="header-menu"><div><ul class="ant-menu whiteClass ant-menu-light ant [...]
+<h2>综述</h2>
+<p>Spark 任务类型,用于执行 Spark 程序。对于 Spark 节点,worker 会通过使用 <code>spark-submit</code> 命令的方式提交任务。更多详情查看 <a href="https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit">spark-submit</a>。</p>
+<h2>创建任务</h2>
 <ul>
-<li>通过SPARK节点,可以直接直接执行SPARK程序,对于spark节点,worker会使用<code>spark-submit</code>方式提交任务</li>
+<li>
+<p>点击项目管理 -&gt; 项目名称 -&gt; 工作流定义,点击”创建工作流”按钮,进入 DAG 编辑页面:</p>
+</li>
+<li>
+<p>拖动工具栏的 <img src="/img/tasks/icons/spark.png" width="15"/> 任务节点到画板中。</p>
+</li>
 </ul>
-<blockquote>
-<p>拖动工具栏中的<img src="https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png" alt="PNG">任务节点到画板中,如下图所示:</p>
-</blockquote>
-<p align="center">
-   <img src="/img/spark_edit.png" width="80%" />
- </p>
+<h2>任务参数</h2>
 <ul>
-<li>程序类型:支持JAVA、Scala和Python三种语言</li>
-<li>主函数的class:是Spark程序的入口Main Class的全路径</li>
-<li>主jar包:是Spark的jar包</li>
-<li>部署方式:支持yarn-cluster、yarn-client和local三种模式</li>
-<li>Driver内核数:可以设置Driver内核数及内存数</li>
-<li>Executor数量:可以设置Executor数量、Executor内存数和Executor内核数</li>
-<li>命令行参数:是设置Spark程序的输入参数,支持自定义参数变量的替换。</li>
-<li>其他参数:支持 --jars、--files、--archives、--conf格式</li>
-<li>资源:如果其他参数中引用了资源文件,需要在资源中选择指定</li>
-<li>自定义参数:是MR局部的用户自定义参数,会替换脚本中以${变量}的内容</li>
+<li>节点名称:设置任务的名称。一个工作流定义中的节点名称是唯一的。</li>
+<li>运行标志:标识这个节点是否能正常调度,如果不需要执行,可以打开禁止执行开关。</li>
+<li>描述:描述该节点的功能。</li>
+<li>任务优先级:worker 线程数不足时,根据优先级从高到低依次执行,优先级一样时根据先进先出原则执行。</li>
+<li>Worker 分组:任务分配给 worker 组的机器执行,选择 Default 会随机选择一台 worker 机执行。</li>
+<li>环境名称:配置运行脚本的环境。</li>
+<li>失败重试次数:任务失败重新提交的次数。</li>
+<li>失败重试间隔:任务失败重新提交任务的时间间隔,以分为单位。</li>
+<li>延迟执行时间:任务延迟执行的时间,以分为单位。</li>
+<li>超时警告:勾选超时警告、超时失败,当任务超过“超时时长”后,会发送告警邮件并且任务执行失败。</li>
+<li>程序类型:支持 Java、Scala 和 Python 三种语言。</li>
+<li>Spark 版本:支持 Spark1 和 Spark2。</li>
+<li>主函数的 Class:Spark 程序的入口 Main class 的全路径。</li>
+<li>主程序包:执行 Spark 程序的 jar 包(通过资源中心上传)。</li>
+<li>部署方式:支持 yarn-cluster、yarn-client 和 local 三种模式。</li>
+<li>任务名称(可选):Spark 程序的名称。</li>
+<li>Driver 核心数:用于设置 Driver 内核数,可根据实际生产环境设置对应的核心数。</li>
+<li>Driver 内存数:用于设置 Driver 内存数,可根据实际生产环境设置对应的内存数。</li>
+<li>Executor 数量:用于设置 Executor 的数量,可根据实际生产环境设置对应的数量。</li>
+<li>Executor 内存数:用于设置 Executor 内存数,可根据实际生产环境设置对应的内存数。</li>
+<li>主程序参数:设置 Spark 程序的输入参数,支持自定义参数变量的替换。</li>
+<li>选项参数:支持 <code>--jars</code>、<code>--files</code>、<code>--archives</code>、<code>--conf</code> 格式。</li>
+<li>资源:如果其他参数中引用了资源文件,需要在资源中选择指定。</li>
+<li>自定义参数:是 Spark 局部的用户自定义参数,会替换脚本中以 ${变量} 的内容。</li>
+<li>前置任务:选择当前任务的前置任务,会将被选择的前置任务设置为当前任务的上游。</li>
 </ul>
-<p>注意:JAVA和Scala只是用来标识,没有区别,如果是Python开发的Spark则没有主函数的class,其他都是一样</p>
+<h2>任务样例</h2>
+<h3>执行 WordCount 程序</h3>
+<p>本案例为大数据生态中常见的入门案例,常应用于 MapReduce、Flink、Spark 等计算框架。主要为统计输入的文本中,相同的单词的数量有多少。</p>
+<h4>上传主程序包</h4>
+<p>在使用 Spark 任务节点时,需要利用资源中心上传执行程序的 jar 包,可参考<a href="../resource.md">资源中心</a>。</p>
+<p>当配置完成资源中心之后,直接使用拖拽的方式,即可上传所需目标文件。</p>
+<p><img src="/img/tasks/demo/upload_spark.png" alt="resource_upload"></p>
+<h4>配置 Spark 节点</h4>
+<p>根据上述参数说明,配置所需的内容即可。</p>
+<p><img src="/img/tasks/demo/spark_task.png" alt="demo-spark-simple"></p>
+<h2>注意事项:</h2>
+<p>注意:JAVA 和 Scala 只是用来标识,没有区别,如果是 Python 开发的 Spark 则没有主函数的 class,其他都是一样。</p>
 </div></section><footer class="footer-container"><div class="footer-body"><div><h3>联系我们</h3><h4>有问题需要反馈?请通过以下方式联系我们。</h4></div><div class="contact-container"><ul><li><a href="/zh-cn/community/development/subscribe.html"><img class="img-base" src="/img/emailgray.png"/><img class="img-change" src="/img/emailblue.png"/><p>邮件列表</p></a></li><li><a href="https://twitter.com/dolphinschedule"><img class="img-base" src="/img/twittergray.png"/><img class="img-change" src="/img/twitterblue.png"/><p [...]
   <script src="//cdn.jsdelivr.net/npm/react@15.6.2/dist/react-with-addons.min.js"></script>
   <script src="//cdn.jsdelivr.net/npm/react-dom@15.6.2/dist/react-dom.min.js"></script>
diff --git a/zh-cn/docs/latest/user_doc/guide/task/spark.json b/zh-cn/docs/latest/user_doc/guide/task/spark.json
index 3d326c8..1b06c92 100644
--- a/zh-cn/docs/latest/user_doc/guide/task/spark.json
+++ b/zh-cn/docs/latest/user_doc/guide/task/spark.json
@@ -1,6 +1,6 @@
 {
   "filename": "spark.md",
-  "__html": "<h1>SPARK节点</h1>\n<ul>\n<li>通过SPARK节点,可以直接直接执行SPARK程序,对于spark节点,worker会使用<code>spark-submit</code>方式提交任务</li>\n</ul>\n<blockquote>\n<p>拖动工具栏中的<img src=\"https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png\" alt=\"PNG\">任务节点到画板中,如下图所示:</p>\n</blockquote>\n<p align=\"center\">\n   <img src=\"/img/spark_edit.png\" width=\"80%\" />\n </p>\n<ul>\n<li>程序类型:支持JAVA、Scala和Python三种语言</li>\n<li>主函数的class:是Spark程序的入口Main Class的全路径</li>\n<li>主jar包:是Spark的jar包</li>\n [...]
+  "__html": "<h1>SPARK节点</h1>\n<h2>综述</h2>\n<p>Spark  任务类型,用于执行 Spark 程序。对于 Spark 节点,worker 会通关使用 spark 命令 <code>spark submit</code> 方式提交任务。更多详情查看 <a href=\"https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit\">spark-submit</a>。</p>\n<h2>创建任务</h2>\n<ul>\n<li>\n<p>点击项目管理 -&gt; 项目名称 -&gt; 工作流定义,点击”创建工作流”按钮,进入 DAG 编辑页面:</p>\n</li>\n<li>\n<p>拖动工具栏的 <img src=\"/img/tasks/icons/spark.png\" width=\"15\"/> 任务节点到画板中。</p>\n</li>\n</ul>\n<h2>任务参 [...]
   "link": "/dist/zh-cn/docs/2.0.3/user_doc/guide/task/spark.html",
   "meta": {}
 }
\ No newline at end of file