Posted to commits@dolphinscheduler.apache.org by gi...@apache.org on 2021/05/14 09:26:06 UTC

[dolphinscheduler-website] branch asf-site updated: Automated deployment: 406cbec5489d8bc8fa7ffd96e34cbd2eb30cb24c

This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 495d45b  Automated deployment: 406cbec5489d8bc8fa7ffd96e34cbd2eb30cb24c
495d45b is described below

commit 495d45b1cb57eb553b12262f02d7f66c712e40da
Author: github-actions[bot] <gi...@users.noreply.github.com>
AuthorDate: Fri May 14 09:25:56 2021 +0000

    Automated deployment: 406cbec5489d8bc8fa7ffd96e34cbd2eb30cb24c
---
 en-us/docs/1.3.6/user_doc/docker-deployment.html  | 14 ++++++++++----
 en-us/docs/1.3.6/user_doc/docker-deployment.json  |  2 +-
 en-us/docs/latest/user_doc/docker-deployment.html | 14 ++++++++++----
 en-us/docs/latest/user_doc/docker-deployment.json |  2 +-
 zh-cn/docs/1.3.6/user_doc/docker-deployment.html  | 18 ++++++++++++------
 zh-cn/docs/1.3.6/user_doc/docker-deployment.json  |  2 +-
 zh-cn/docs/latest/user_doc/docker-deployment.html | 18 ++++++++++++------
 zh-cn/docs/latest/user_doc/docker-deployment.json |  2 +-
 8 files changed, 48 insertions(+), 24 deletions(-)
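
The doc changes below replace the bare service name `dolphinscheduler-worker` with the full container name `docker-swarm_dolphinscheduler-worker_1` in the `docker cp` and `docker exec` commands. As a sketch of why: Docker Compose v1 names containers `<project>_<service>_<index>`, where the project defaults to the directory holding `docker-compose.yml` (here apparently `docker-swarm`). The directory name is an assumption inferred from the diff; on another machine the prefix will differ, so `docker ps` is the reliable way to find the real name.

```shell
# Compose v1 container naming: <project>_<service>_<index>.
# "docker-swarm" is the assumed project (compose file directory) per this diff.
project="docker-swarm"
service="dolphinscheduler-worker"
index=1
container="${project}_${service}_${index}"
echo "$container"
# On a live host, confirm with:
#   docker ps --filter "name=${service}" --format '{{.Names}}'
```

If the compose project was started with `-p <name>` or a different directory, substitute that name for the prefix before running the `docker cp` commands in the documentation.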

diff --git a/en-us/docs/1.3.6/user_doc/docker-deployment.html b/en-us/docs/1.3.6/user_doc/docker-deployment.html
index 1f3ee84..9ed6eb2 100644
--- a/en-us/docs/1.3.6/user_doc/docker-deployment.html
+++ b/en-us/docs/1.3.6/user_doc/docker-deployment.html
@@ -598,13 +598,13 @@ RUN apt-get update &amp;&amp; \
 <p>Copy the Spark 2.4.7 release binary into Docker container</p>
 </li>
 </ol>
-<pre><code class="language-bash">docker cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+<pre><code class="language-bash">docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 </code></pre>
 <p>Because the volume <code>dolphinscheduler-shared-local</code> is mounted on <code>/opt/soft</code>, all files in <code>/opt/soft</code> will not be lost</p>
 <ol start="4">
 <li>Attach the container and ensure that <code>SPARK_HOME2</code> exists</li>
 </ol>
-<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker bash
+<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
 <span class="hljs-built_in">cd</span> /opt/soft
 tar zxf spark-2.4.7-bin-hadoop2.7.tgz
 rm -f spark-2.4.7-bin-hadoop2.7.tgz
@@ -648,12 +648,12 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 <p>Copy the Spark 3.1.1 release binary into Docker container</p>
 </li>
 </ol>
-<pre><code class="language-bash">docker cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+<pre><code class="language-bash">docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 </code></pre>
 <ol start="4">
 <li>Attach the container and ensure that <code>SPARK_HOME2</code> exists</li>
 </ol>
-<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker bash
+<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
 <span class="hljs-built_in">cd</span> /opt/soft
 tar zxf spark-3.1.1-bin-hadoop2.7.tgz
 rm -f spark-3.1.1-bin-hadoop2.7.tgz
@@ -668,6 +668,9 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </code></pre>
 <p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
 <h3>How to support shared storage between Master, Worker and Api server?</h3>
+<blockquote>
+<p><strong>Note</strong>: If it is deployed on a single machine by <code>docker-compose</code>, step 1 and 2 can be skipped directly, and execute the command like <code>docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft</code> to put Hadoop into the shared directory <code>/opt/soft</code> in the container</p>
+</blockquote>
 <p>For example, Master, Worker and Api server may use Hadoop at the same time</p>
 <ol>
 <li>Modify the volume <code>dolphinscheduler-shared-local</code> to support nfs in <code>docker-compose.yml</code></li>
@@ -691,6 +694,9 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </li>
 </ol>
 <h3>How to support local file resource storage instead of HDFS and S3?</h3>
+<blockquote>
+<p><strong>Note</strong>: If it is deployed on a single machine by <code>docker-compose</code>, step 2 can be skipped directly</p>
+</blockquote>
 <ol>
 <li>Modify the following environment variables in <code>config.env.sh</code>:</li>
 </ol>
diff --git a/en-us/docs/1.3.6/user_doc/docker-deployment.json b/en-us/docs/1.3.6/user_doc/docker-deployment.json
index 6e3889e..d696c32 100644
--- a/en-us/docs/1.3.6/user_doc/docker-deployment.json
+++ b/en-us/docs/1.3.6/user_doc/docker-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "docker-deployment.md",
-  "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>How to use this Docker image</h2>\n<p>Here're 3 ways to quickly install DolphinScheduler</p>\n<h3>The First Way: Start a DolphinScheduler by docker-compose (recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker.co [...]
+  "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>How to use this Docker image</h2>\n<p>Here're 3 ways to quickly install DolphinScheduler</p>\n<h3>The First Way: Start a DolphinScheduler by docker-compose (recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker.co [...]
   "link": "/dist/en-us/docs/1.3.6/user_doc/docker-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/en-us/docs/latest/user_doc/docker-deployment.html b/en-us/docs/latest/user_doc/docker-deployment.html
index 1f3ee84..9ed6eb2 100644
--- a/en-us/docs/latest/user_doc/docker-deployment.html
+++ b/en-us/docs/latest/user_doc/docker-deployment.html
@@ -598,13 +598,13 @@ RUN apt-get update &amp;&amp; \
 <p>Copy the Spark 2.4.7 release binary into Docker container</p>
 </li>
 </ol>
-<pre><code class="language-bash">docker cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+<pre><code class="language-bash">docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 </code></pre>
 <p>Because the volume <code>dolphinscheduler-shared-local</code> is mounted on <code>/opt/soft</code>, all files in <code>/opt/soft</code> will not be lost</p>
 <ol start="4">
 <li>Attach the container and ensure that <code>SPARK_HOME2</code> exists</li>
 </ol>
-<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker bash
+<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
 <span class="hljs-built_in">cd</span> /opt/soft
 tar zxf spark-2.4.7-bin-hadoop2.7.tgz
 rm -f spark-2.4.7-bin-hadoop2.7.tgz
@@ -648,12 +648,12 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 <p>Copy the Spark 3.1.1 release binary into Docker container</p>
 </li>
 </ol>
-<pre><code class="language-bash">docker cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+<pre><code class="language-bash">docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 </code></pre>
 <ol start="4">
 <li>Attach the container and ensure that <code>SPARK_HOME2</code> exists</li>
 </ol>
-<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker bash
+<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
 <span class="hljs-built_in">cd</span> /opt/soft
 tar zxf spark-3.1.1-bin-hadoop2.7.tgz
 rm -f spark-3.1.1-bin-hadoop2.7.tgz
@@ -668,6 +668,9 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </code></pre>
 <p>Check whether the task log contains the output like <code>Pi is roughly 3.146015</code></p>
 <h3>How to support shared storage between Master, Worker and Api server?</h3>
+<blockquote>
+<p><strong>Note</strong>: If it is deployed on a single machine by <code>docker-compose</code>, step 1 and 2 can be skipped directly, and execute the command like <code>docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft</code> to put Hadoop into the shared directory <code>/opt/soft</code> in the container</p>
+</blockquote>
 <p>For example, Master, Worker and Api server may use Hadoop at the same time</p>
 <ol>
 <li>Modify the volume <code>dolphinscheduler-shared-local</code> to support nfs in <code>docker-compose.yml</code></li>
@@ -691,6 +694,9 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </li>
 </ol>
 <h3>How to support local file resource storage instead of HDFS and S3?</h3>
+<blockquote>
+<p><strong>Note</strong>: If it is deployed on a single machine by <code>docker-compose</code>, step 2 can be skipped directly</p>
+</blockquote>
 <ol>
 <li>Modify the following environment variables in <code>config.env.sh</code>:</li>
 </ol>
diff --git a/en-us/docs/latest/user_doc/docker-deployment.json b/en-us/docs/latest/user_doc/docker-deployment.json
index 6e3889e..d696c32 100644
--- a/en-us/docs/latest/user_doc/docker-deployment.json
+++ b/en-us/docs/latest/user_doc/docker-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "docker-deployment.md",
-  "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>How to use this Docker image</h2>\n<p>Here're 3 ways to quickly install DolphinScheduler</p>\n<h3>The First Way: Start a DolphinScheduler by docker-compose (recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker.co [...]
+  "__html": "<h1>QuickStart in Docker</h1>\n<h2>Prerequisites</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>How to use this Docker image</h2>\n<p>Here're 3 ways to quickly install DolphinScheduler</p>\n<h3>The First Way: Start a DolphinScheduler by docker-compose (recommended)</h3>\n<p>In this way, you need to install <a href=\"https://docs.docker.co [...]
   "link": "/dist/en-us/docs/1.3.6/user_doc/docker-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/1.3.6/user_doc/docker-deployment.html b/zh-cn/docs/1.3.6/user_doc/docker-deployment.html
index 33330ea..78c0ad3 100644
--- a/zh-cn/docs/1.3.6/user_doc/docker-deployment.html
+++ b/zh-cn/docs/1.3.6/user_doc/docker-deployment.html
@@ -598,17 +598,17 @@ RUN apt-get update &amp;&amp; \
 <p>复制 Spark 2.4.7 二进制包到 Docker 容器中</p>
 </li>
 </ol>
-<pre><code class="language-bash">docker cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+<pre><code class="language-bash">docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 </code></pre>
 <p>因为存储卷 <code>dolphinscheduler-shared-local</code> 被挂载到 <code>/opt/soft</code>, 因此 <code>/opt/soft</code> 中的所有文件都不会丢失</p>
 <ol start="4">
 <li>登录到容器并确保 <code>SPARK_HOME2</code> 存在</li>
 </ol>
-<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker bash
+<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
 <span class="hljs-built_in">cd</span> /opt/soft
 tar zxf spark-2.4.7-bin-hadoop2.7.tgz
 rm -f spark-2.4.7-bin-hadoop2.7.tgz
-ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
+ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># 或者 mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
 <p>如果一切执行正常,最后一条命令将会打印 Spark 版本信息</p>
@@ -648,16 +648,16 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 <p>复制 Spark 3.1.1 二进制包到 Docker 容器中</p>
 </li>
 </ol>
-<pre><code class="language-bash">docker cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+<pre><code class="language-bash">docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 </code></pre>
 <ol start="4">
 <li>登录到容器并确保 <code>SPARK_HOME2</code> 存在</li>
 </ol>
-<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker bash
+<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
 <span class="hljs-built_in">cd</span> /opt/soft
 tar zxf spark-3.1.1-bin-hadoop2.7.tgz
 rm -f spark-3.1.1-bin-hadoop2.7.tgz
-ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
+ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># 或者 mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
 <p>如果一切执行正常,最后一条命令将会打印 Spark 版本信息</p>
@@ -668,6 +668,9 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </code></pre>
 <p>检查任务日志是否包含输出 <code>Pi is roughly 3.146015</code></p>
 <h3>如何在 Master、Worker 和 Api 服务之间支持共享存储?</h3>
+<blockquote>
+<p><strong>注意</strong>: 如果是在单机上通过 docker-compose 部署,则步骤 1 和 2 可以直接跳过,并且执行命令如 <code>docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft</code> 将 Hadoop 放到容器中的共享目录 /opt/soft 下</p>
+</blockquote>
 <p>例如, Master、Worker 和 Api 服务可能同时使用 Hadoop</p>
 <ol>
 <li>修改 <code>docker-compose.yml</code> 文件中的 <code>dolphinscheduler-shared-local</code> 存储卷,以支持 nfs</li>
@@ -691,6 +694,9 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </li>
 </ol>
 <h3>如何支持本地文件存储而非 HDFS 和 S3?</h3>
+<blockquote>
+<p><strong>注意</strong>: 如果是在单机上通过 docker-compose 部署,则步骤 2 可以直接跳过</p>
+</blockquote>
 <ol>
 <li>修改 <code>config.env.sh</code> 文件中下面的环境变量:</li>
 </ol>
diff --git a/zh-cn/docs/1.3.6/user_doc/docker-deployment.json b/zh-cn/docs/1.3.6/user_doc/docker-deployment.json
index 7f0295d..7224eb3 100644
--- a/zh-cn/docs/1.3.6/user_doc/docker-deployment.json
+++ b/zh-cn/docs/1.3.6/user_doc/docker-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "docker-deployment.md",
-  "__html": "<h1>快速试用 Docker 部署</h1>\n<h2>先决条件</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>如何使用 Docker 镜像</h2>\n<p>有 3 种方式可以快速试用 DolphinScheduler</p>\n<h3>一、以 docker-compose 的方式启动 DolphinScheduler (推荐)</h3>\n<p>这种方式需要先安装 <a href=\"https://docs.docker.com/compose/\">docker-compose</a>, docker-compose 的安装网上已经有非常多的资料,请自行安装即可</p>\n<p>对于 Windows 7-10,你可 [...]
+  "__html": "<h1>快速试用 Docker 部署</h1>\n<h2>先决条件</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>如何使用 Docker 镜像</h2>\n<p>有 3 种方式可以快速试用 DolphinScheduler</p>\n<h3>一、以 docker-compose 的方式启动 DolphinScheduler (推荐)</h3>\n<p>这种方式需要先安装 <a href=\"https://docs.docker.com/compose/\">docker-compose</a>, docker-compose 的安装网上已经有非常多的资料,请自行安装即可</p>\n<p>对于 Windows 7-10,你可 [...]
   "link": "/dist/zh-cn/docs/1.3.6/user_doc/docker-deployment.html",
   "meta": {}
 }
\ No newline at end of file
diff --git a/zh-cn/docs/latest/user_doc/docker-deployment.html b/zh-cn/docs/latest/user_doc/docker-deployment.html
index 33330ea..78c0ad3 100644
--- a/zh-cn/docs/latest/user_doc/docker-deployment.html
+++ b/zh-cn/docs/latest/user_doc/docker-deployment.html
@@ -598,17 +598,17 @@ RUN apt-get update &amp;&amp; \
 <p>复制 Spark 2.4.7 二进制包到 Docker 容器中</p>
 </li>
 </ol>
-<pre><code class="language-bash">docker cp spark-2.4.7-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+<pre><code class="language-bash">docker cp spark-2.4.7-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 </code></pre>
 <p>因为存储卷 <code>dolphinscheduler-shared-local</code> 被挂载到 <code>/opt/soft</code>, 因此 <code>/opt/soft</code> 中的所有文件都不会丢失</p>
 <ol start="4">
 <li>登录到容器并确保 <code>SPARK_HOME2</code> 存在</li>
 </ol>
-<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker bash
+<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
 <span class="hljs-built_in">cd</span> /opt/soft
 tar zxf spark-2.4.7-bin-hadoop2.7.tgz
 rm -f spark-2.4.7-bin-hadoop2.7.tgz
-ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
+ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># 或者 mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
 <p>如果一切执行正常,最后一条命令将会打印 Spark 版本信息</p>
@@ -648,16 +648,16 @@ ln -s spark-2.4.7-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 <p>复制 Spark 3.1.1 二进制包到 Docker 容器中</p>
 </li>
 </ol>
-<pre><code class="language-bash">docker cp spark-3.1.1-bin-hadoop2.7.tgz dolphinscheduler-worker:/opt/soft
+<pre><code class="language-bash">docker cp spark-3.1.1-bin-hadoop2.7.tgz docker-swarm_dolphinscheduler-worker_1:/opt/soft
 </code></pre>
 <ol start="4">
 <li>登录到容器并确保 <code>SPARK_HOME2</code> 存在</li>
 </ol>
-<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it dolphinscheduler-worker bash
+<pre><code class="language-bash">docker <span class="hljs-built_in">exec</span> -it docker-swarm_dolphinscheduler-worker_1 bash
 <span class="hljs-built_in">cd</span> /opt/soft
 tar zxf spark-3.1.1-bin-hadoop2.7.tgz
 rm -f spark-3.1.1-bin-hadoop2.7.tgz
-ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</span>
+ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># 或者 mv</span>
 <span class="hljs-variable">$SPARK_HOME2</span>/bin/spark-submit --version
 </code></pre>
 <p>如果一切执行正常,最后一条命令将会打印 Spark 版本信息</p>
@@ -668,6 +668,9 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </code></pre>
 <p>检查任务日志是否包含输出 <code>Pi is roughly 3.146015</code></p>
 <h3>如何在 Master、Worker 和 Api 服务之间支持共享存储?</h3>
+<blockquote>
+<p><strong>注意</strong>: 如果是在单机上通过 docker-compose 部署,则步骤 1 和 2 可以直接跳过,并且执行命令如 <code>docker cp hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft</code> 将 Hadoop 放到容器中的共享目录 /opt/soft 下</p>
+</blockquote>
 <p>例如, Master、Worker 和 Api 服务可能同时使用 Hadoop</p>
 <ol>
 <li>修改 <code>docker-compose.yml</code> 文件中的 <code>dolphinscheduler-shared-local</code> 存储卷,以支持 nfs</li>
@@ -691,6 +694,9 @@ ln -s spark-3.1.1-bin-hadoop2.7 spark2 <span class="hljs-comment"># or just mv</
 </li>
 </ol>
 <h3>如何支持本地文件存储而非 HDFS 和 S3?</h3>
+<blockquote>
+<p><strong>注意</strong>: 如果是在单机上通过 docker-compose 部署,则步骤 2 可以直接跳过</p>
+</blockquote>
 <ol>
 <li>修改 <code>config.env.sh</code> 文件中下面的环境变量:</li>
 </ol>
diff --git a/zh-cn/docs/latest/user_doc/docker-deployment.json b/zh-cn/docs/latest/user_doc/docker-deployment.json
index 7f0295d..7224eb3 100644
--- a/zh-cn/docs/latest/user_doc/docker-deployment.json
+++ b/zh-cn/docs/latest/user_doc/docker-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "docker-deployment.md",
-  "__html": "<h1>快速试用 Docker 部署</h1>\n<h2>先决条件</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>如何使用 Docker 镜像</h2>\n<p>有 3 种方式可以快速试用 DolphinScheduler</p>\n<h3>一、以 docker-compose 的方式启动 DolphinScheduler (推荐)</h3>\n<p>这种方式需要先安装 <a href=\"https://docs.docker.com/compose/\">docker-compose</a>, docker-compose 的安装网上已经有非常多的资料,请自行安装即可</p>\n<p>对于 Windows 7-10,你可 [...]
+  "__html": "<h1>快速试用 Docker 部署</h1>\n<h2>先决条件</h2>\n<ul>\n<li><a href=\"https://docs.docker.com/engine/install/\">Docker</a> 1.13.1+</li>\n<li><a href=\"https://docs.docker.com/compose/\">Docker Compose</a> 1.11.0+</li>\n</ul>\n<h2>如何使用 Docker 镜像</h2>\n<p>有 3 种方式可以快速试用 DolphinScheduler</p>\n<h3>一、以 docker-compose 的方式启动 DolphinScheduler (推荐)</h3>\n<p>这种方式需要先安装 <a href=\"https://docs.docker.com/compose/\">docker-compose</a>, docker-compose 的安装网上已经有非常多的资料,请自行安装即可</p>\n<p>对于 Windows 7-10,你可 [...]
   "link": "/dist/zh-cn/docs/1.3.6/user_doc/docker-deployment.html",
   "meta": {}
 }
\ No newline at end of file